USF Libraries
USF Digital Collections

An instrumental case study of the phenomenon of collaboration in the process of improving community college developmental reading and writing instruction


Material Information

Title:
An instrumental case study of the phenomenon of collaboration in the process of improving community college developmental reading and writing instruction
Physical Description:
Book
Language:
English
Creator:
Gordin, Patricia C
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla
Publication Date:

Subjects

Subjects / Keywords:
Community of practice
Faculty learning community
Institutional research
Quality enhancement plan
Assessment
Student learning outcomes
Dissertations, Academic -- Higher Education -- Doctoral -- USF
Genre:
bibliography (marcgt)
theses (marcgt)
non-fiction (marcgt)

Notes

Abstract:
Focusing upon the intersections between community college faculty and assessment professionals (e.g., institutional researchers) in improving student learning outcomes, the purpose of this study was to describe, analyze, and interpret the experiences of these professionals as they planned for and conducted student learning outcomes assessment in developmental reading, writing, and study skills courses. This instrumental case study at one particular community college in Florida investigated the roles played by these individuals within the larger college effort to develop a Quality Enhancement Plan (QEP), an essential component of a regional accreditation review. The methodology included individual interviews, a focus group interview, a field observation, and analysis of documents related to assessment planning. There were several major findings:
· Assessment professionals and faculty teaching developmental courses had similar professional development interests (e.g., teaching and learning, measurement).
· While some faculty leaders assumed a facilitative role similar to that of an assessment professional, the reporting structure determined the appropriate action taken in response to the results of assessment. That is, assessment professionals interpreted results and recommended targets for improvement, while faculty and instructional administrators implemented and monitored instructional strategies.
· The continuous transformation of the QEP organizational structure through research, strategy formulation, and implementation phases in an inclusive process enabled the college to put its best knowledge and measurement expertise into its five-year plan.
· Developmental goals for students in addition to Florida-mandated exit exams included self-direction, affective development such as motivation, and success at the next level.
· Faculty identified discipline-based workshops as promising vehicles for infusing instructional changes into courses, thus using the results of learning outcomes assessments more effectively.
A chronological analysis further contributed to findings of the study. This researcher concluded that the College's eight-year history of developing general education outcomes and striving to improve the college preparatory program through longitudinal tracking of student success had incubated a powerful faculty learning community and an alliance with assessment professionals. This community of practice, when provided the right structure, leadership, and resources, enabled the College to create a Quality Enhancement Plan that faculty and staff members could be proud of.
Thesis:
Dissertation (Ph.D.)--University of South Florida, 2006.
Bibliography:
Includes bibliographical references.
System Details:
System requirements: World Wide Web browser and PDF reader.
System Details:
Mode of access: World Wide Web.
Statement of Responsibility:
by Patricia C. Gordin.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 237 pages.
General Note:
Includes vita.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001929946
oclc - 213096914
usfldc doi - E14-SFE0001742
usfldc handle - e14.1742
System ID:
SFS0026060:00001


This item is only available as the following downloads:


Full Text

PAGE 1

An Instrumental Case Study of the Phenomenon of Collaboration in the Process of Improving Community College Developmental Reading and Writing Instruction

by

Patricia C. Gordin

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy
Department of Adult, Career, and Higher Education
College of Education
University of South Florida

Major Professor: Jan Ignash, Ph.D.
Michael Mills, Ph.D.
Robert Sullins, Ed.D.
James White, Ph.D.

Date of Approval: November 1, 2006

Keywords: community of practice, faculty learning community, institutional research, quality enhancement plan, assessment, student learning outcomes

Copyright 2006 Patricia C. Gordin

PAGE 2

Dedication

I dedicate this dissertation to my husband, Larry Gordin, whose patience and unconditional support have propelled me through this program; to my boss, Maureen McClintock, who has shown me that a leader can also be a friend; and to my parents, Harold and Audrey Hayward, who invested in their children’s future by providing all three of them with a baccalaureate education.

PAGE 3

Acknowledgements

I wish to acknowledge the contributions of the members of my program of study committee: Dr. Jan Ignash, Chair; Dr. Michael Mills; Dr. Robert Sullins; and Dr. James White. Further, I am indebted to the people of Sunshine State Community College, whose habits of self-reflection enabled me to conduct this research on their campus.

PAGE 4

Table of Contents

List of Figures .... iv
List of Tables .... v
Abstract .... vi

Chapter One  Introduction and Background .... 1
    The Crisis in Developmental Education .... 2
    Improving the Effectiveness of Assessment Practice .... 4
    Statement of the Problem .... 9
    Purpose of the Study .... 9
    Significance of the Study .... 10
        Research Questions .... 11
        Limitations .... 11
        Definition of Terms .... 12
    Summary .... 16

Chapter Two  Review of the Literature .... 17
    Mandates of the Higher Education Act .... 18
    New Pressures from Accrediting Agencies .... 18
    The Scholarship of Assessment .... 25
    One Corner of the Community College Stage .... 31
        Faculty .... 32
        Institutional Researchers (IR) as Assessment Professionals .... 37
    Other Players: Theories of Organizational Change .... 40
        Structure and Function: Organizations as Bureaucracies .... 40
        Power Relationships: The Political Frame .... 41
        Professional Development: The Human Resources Frame .... 42
        Values, Beliefs, and Symbols: The Cultural Frame .... 42
        Re-framing: Understanding the Complexity of Organizational Change .... 45
        Loose-tight Coupling and Organizational Change .... 46
        Colleges as Schools that Learn .... 49
        Grassroots Model: the Anti-Strategic Plan .... 51
    Action Research and Organizational Change: Case Studies .... 52
    Student Learning Problems and Measurement Opportunities .... 56
        Preparation for College .... 57

PAGE 5

        Study Attitudes .... 58
        Engagement .... 58
        Diversity .... 59
        Growth and Development .... 60
        Best Practices in Developmental Education .... 61
    Policies Governing Developmental Education in Florida .... 63
    Sunshine State Community College Measures .... 67
    Summary and Synthesis of the Literature Review .... 70

Chapter Three  Methods .... 72
    Assumptions of Case Study Research .... 72
    Case Worker’s Orientation to Community College Assessment .... 75
    Conceptual Framework for Current Study .... 77
    Research Design .... 78
        Research Questions .... 79
        Population/Unit of Study/Sampling .... 79
        Data Collection Procedures/Timetable .... 82
        Data Analysis Process .... 91
        Analysis Procedures .... 93
        Ethics .... 98
        Reliability and Validity: Ensuring Trustworthiness of the Data .... 99
    Summary .... 100

Chapter Four  Results .... 101
    I. Synopsis of Findings .... 102
    II. An Introduction to the Actors .... 107
    III. A Brief Timeline for the Genesis of the College’s Learning Improvement Focus .... 114
    IV. Findings on Research Questions .... 118
    V. Findings on Topical Issues .... 154
    VI. Chapter Four Summary .... 168

Chapter Five  Major Findings, Conclusions, and Implications for Theory, Practice and Research .... 180
    Major Findings .... 181
    Conclusion .... 192
    Limitations .... 198
    Implications for Theory .... 199
    Implications for Practice .... 205
    Implications for Research .... 215

References .... 217
Bibliography .... 228

PAGE 6

Appendices
    Appendix A: Individual Interview Consent Form .... 231
    Appendix B: Focus Group Interview Consent Form .... 234
    Appendix C: Participant Recruitment Brochure .... 237
About the Author .... End Page

PAGE 7

List of Figures

Figure 1    The Nexus between Faculty and Assessment Professional/IR Roles .... 31
Figure 2    QEP Phase I – Research (Year 1: March-December) .... 128
Figure 3    QEP Phase II – Develop Strategies (Year 2: January-October) .... 129
Figure 4    QEP Phase III – Implement (Year 3) .... 131

PAGE 8

List of Tables

Table 1    Foreshadowed Themes .... 77
Table 2    Individual Interview Protocol .... 85
Table 3    Relationship of Interview Questions to Research Questions .... 87
Table 4    Focus Group Semi-Structured Protocol .... 89
Table 5    Documents .... 90
Table 6    Thematic Representation of Interview Questions .... 94
Table 7    Analytical Categories with Qualifiers .... 95
Table 8    Local Definitions of Planning and Assessment Processes and Documents .... 113

PAGE 9

An Instrumental Case Study of the Phenomenon of Collaboration in the Process of Improving Community College Developmental Reading and Writing Instruction

Patricia C. Gordin

ABSTRACT

Focusing upon the intersections between community college faculty and assessment professionals (e.g., institutional researchers) in improving student learning outcomes, the purpose of this study was to describe, analyze, and interpret the experiences of these professionals as they planned for and conducted student learning outcomes assessment in developmental reading, writing, and study skills courses. This instrumental case study at one particular community college in Florida investigated the roles played by these individuals within the larger college effort to develop a Quality Enhancement Plan (QEP), an essential component of a regional accreditation review. The methodology included individual interviews, a focus group interview, a field observation, and analysis of documents related to assessment planning. There were several major findings:

Assessment professionals and faculty teaching developmental courses had similar professional development interests (e.g., teaching and learning, measurement).

While some faculty leaders assumed a facilitative role similar to that of an assessment professional, the reporting structure determined the appropriate

PAGE 10

action taken in response to the results of assessment. That is, assessment professionals interpreted results and recommended targets for improvement, while faculty and instructional administrators implemented and monitored instructional strategies.

The continuous transformation of the QEP organizational structure through research, strategy formulation, and implementation phases in an inclusive process enabled the college to put its best knowledge and measurement expertise into its five-year plan.

Developmental goals for students in addition to Florida-mandated exit exams included self-direction, affective development such as motivation, and success at the next level.

Faculty identified discipline-based workshops as promising vehicles for infusing instructional changes into courses, thus using the results of learning outcomes assessments more effectively.

A chronological analysis further contributed to findings of the study. This researcher concluded that the College’s eight-year history of developing general education outcomes and striving to improve the college preparatory program through longitudinal tracking of student success had incubated a powerful faculty learning community and an alliance with assessment professionals. This community of practice, when provided the right structure, leadership, and resources, enabled the College to create a Quality Enhancement Plan that faculty and staff members could be proud of.

PAGE 11

Chapter One
Introduction and Background

“Democracy arose from men thinking that if they are equal in any respect they are equal in all respects” (Aristotle, Politics c. 322 B.C., in Frost-Knappman & Shrager, 1998, p. 90).

Over the years, assessment professionals at Sunshine State Community College (a fictitious name) have worked with faculty using an assortment of measurement tools to identify and target problems impeding the success of students in developmental reading and writing. This challenge has called upon the resources, commitment, and ingenuity of college faculty, administrators, and institutional research staff to examine evidence of student learning and work out new solutions to learning problems. The focus of this study is upon the interactions between faculty members and assessment professionals, such as institutional researchers at Sunshine State Community College, that lead to improvement. In combining their expertise within a student learning outcomes assessment process, these educators reexamine their long-held beliefs about effective teaching based upon evidence and subsequently reformulate learning strategies. In describing, analyzing, and interpreting these experiences, this case study describes aspects of the problem-solving process undertaken by faculty and assessment professionals as they strive for student learning improvement in developmental reading and writing.

PAGE 12

The Crisis in Developmental Education

According to George R. Boggs, President of the American Association of Community Colleges, the higher education institutions that educate almost half of all undergraduate students nation-wide are caught in a perfect storm of increasing enrollment and declining funding (2004). Although governments have realized the economic importance of an educated citizenry to their states, they continue to slash support for higher education while ratcheting up performance accountability. According to Dennis Jones, President of the National Center for Higher Education Management Systems (NCHEMS), growth in state funding for higher education over the next eight years will continue to lag expenditures for all state programs by 5.7% (2005, p. 4). Apparently, this funding disparity is largely due to the competing demands of mushrooming programs like Medicaid. At the same time, performance pressures upon both colleges and students continue to grow as states hike tuition instead of increasing direct funding to pay for the growing costs of operations. Peter Ewell, an assessment scholar, has commented that accountability, the measurable demonstration of improvement, is the consequence of postsecondary education institutions not adequately explaining why students don’t succeed in their learning goals (1997). The piecemeal, isolated progress many colleges have made to organize for learning improvement has precipitated pressure from employers, politicians, and citizen groups to accelerate the pace of improvement.

Clara Lovett (2005), former president of Northern Arizona University, recently editorialized on the reasons why higher education is caught in this squeeze. The new consumers of higher education products, she says, are low-income parents who can’t afford the price tag the middle class has long been willing to pay for quality schools.

PAGE 13

While they do not understand the complexities involved in large-scale productivity improvement needed to keep costs from rising more quickly than inflation, they have put their legislators on notice that their kids must have an education. Unfortunately, it is these students, often the first in their families to attend college, who are under-prepared for the academic and emotional hurdles of the higher education experience.

What we know from educational research thus far is that many students fail because poor academic preparation keeps them from adequately mastering course outcomes (Windham, 2002). Of the over 42,000 first-time-in-college, degree-seeking students who matriculated in Florida community colleges immediately after high school in 2001 and took an entry-level test, 73% failed at least one subtest in reading, writing, or math (Florida Department of Education, 2004, p. 1), making them ineligible to take many college-level courses without remediation. Forty-three percent of this cohort failed the reading subtest. Reading preparation, in particular, is the gatekeeper to success in all other coursework, even college preparatory courses like writing and mathematics. Supporting this assertion, President Freeman Hrabowski of the University of Maryland-Baltimore County, a leading advocate for African American student achievement in math and science, recently commented on PBS News Hour that nothing was more important for student success than critical reading skills because the ability to understand math problems depended upon it (Hrabowski, 2005).

One aspect of preparing students for academic success is helping them acquire good study attitudes. Of the 6,250 students who took the Community College Survey of Student Engagement (CCSSE) in Spring 2004 and were matched to the Florida Community College Student Data Base, 35% of those with high GPA (3.0 or higher)

PAGE 14

never went to class without completing readings or assignments, a measure of effort. However, only 19% of those with low GPA (less than 3.0) reported the same superlative effort (Windham, 2005, CCSSE highlights, p. 1). Also, students who graduated that term with associate in arts degrees said they regularly communicated with their instructors. Reinforcing the notion that student-faculty communication is a valuable strategy for student success, only 3.5% of Associate in Arts (AA) graduates in this matched sample of CCSSE respondents stated that they never asked questions in class (p. 2).

The impact that improvements in course success could have upon an entire college system should not be underestimated. Improving the passing rate in any one course from 50 to 80% increases a student’s cumulative probability of success. For a full semester course load (five courses), the probability of success in this example improves from 3 to 33%, or ten-fold (Shugart, 2005). Failure in individual courses can lead to withdrawal from college. For an academically challenged female, the opportunity cost of failure to earn an associate degree can be as high as 44% of her earnings as a mere high school graduate (Bailey, Kienzel, & Marcotte, 2004, p. 11).
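The ten-fold figure can be checked with a short calculation. A minimal worked sketch, under the simplifying assumption (introduced here for illustration, not stated in the cited source) that the five courses are passed independently and with the same probability:

\[
P(\text{pass all five courses}) = p^{5}, \qquad
0.50^{5} \approx 0.031 \;(\approx 3\%), \qquad
0.80^{5} \approx 0.328 \;(\approx 33\%).
\]

The ratio 0.328/0.031 is roughly 10.5, consistent with the ten-fold improvement cited above.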

Improving the Effectiveness of Assessment Practice

In his imperative to higher education leaders, Peter Ewell admonished, “To get systemic improvement, we must make use of what is already known about learning itself, about promoting learning, and about institutional change” (1997, p. 3). Following this reasoning, to accomplish the genuine transformations necessary to help more students succeed, colleges need to re-think how their various functions work together to accomplish goals for student learning.

PAGE 15

An advocate for this type of educational research is Trudy Banta, who said that effective collaborative research on assessment would include “Integrating the value frameworks of other disciplines with those inherent in the professional role of assessment practitioner…as well as studying how the faculty role and criteria for performance intersect with those of practicing professionals in educational research, evaluation, and measurement” (2002, p. 98). The development of effective faculty-staff collaboration strategies on assessment may take up to seven years (Larson & Greene, 2002). However, colleges subject to the 2001 Principles of Accreditation of the Southern Association of Colleges and Schools have recently had to develop and implement Quality Enhancement Plans requiring college-wide participation within one to two years.

Traditionally, the role of institutional research (IR) staff has been to transform data into meaningful information and to report it through institutionally determined channels, feeding planning and evaluation cycles. When given appropriate attention and resources, this process should lead to institutional effectiveness. However, a recently released study by Columbia University’s Community College Research Center (CCRC) advocates a stronger role for IR in community college leadership and involvement in student learning issues (Morest, 2005). This recommendation followed from the observation that institutional research currently has a more limited function in community colleges than in four-year institutions. Chapter Two will further explore the role of institutional research and other professionals in supporting the assessment efforts aimed at improving teaching and learning.

Kezar and Talburt (2004) advocate broadening the repertoire of approaches in educational research to provide more timely information concerning effective teaching

PAGE 16

and learning strategies to other practitioners. For example, collaborative partners such as learning evidence teams that include assessment professionals such as IR staff members produce new knowledge through action research. This advocacy is important because “a proliferation of research approaches offers valuable forms of knowledge and insight to those concerned with the study and practice of higher education” (p. 1). In support of diversifying research principles, David W. Leslie, Chancellor Professor of Education at the College of William and Mary, said at a 2002 meeting of the Association for the Study of Higher Education that he was discouraged by his colleagues’ unwillingness to accept theories and methods of inquiry from other disciplines such as political science. He suggested that limiting higher education research models this way may be yielding “Trees without fruit” (Keller, 1985 in Leslie, 2002, p. 2). In fact, he advocated improving research by encouraging his colleagues to explore big ideas.

One “big” idea is that new roles in the management of research findings obtained through assessment, a form of action research, may help to transform colleges. The language of organizational learning and the new ways of examining student learning connect the process of solving student learning problems with knowledge management, a relatively new discipline. Faculty practitioners who meet to share information about their craft create new knowledge (Wegner, 1998) about how to help students in these communities of practice. However, knowledge management must be about more than simply cataloging what colleges know. It must also be about using what colleges know to drive curricular changes and resource application, thus improving both learning and assessment. A further discussion of action research, the development of cross-functional

PAGE 17

communities of practice within community colleges, and the role of both in knowledge management will be explored in Chapter Two of this study.

Establishing effective assessment practices within institutions is a complex task. A comprehensive literature review undertaken in the late 1990s yielded a conceptual view of organizational and administrative support for learning assessment (Peterson, Augustine, Einarson, & Vaughan, 1999). In this model, support for assessment policies and practices, assessment culture and climate, external influences (such as accreditation), internal and external uses of assessment, and institutional context (such as public or private control and size) shaped a college’s institutional approach. The effectiveness of assessment thus had many dependencies, and factors outside of the classroom often impacted student learning. Chapter 2, Literature Review, contains a more complete discussion of theories of organizational transformation as they relate to assessment.

According to Richard Voorhees, a past president of the Association for Institutional Research, an alternative job of IR is to feed networks (2003). New ideas may germinate in unpredictable ways from the seeds of ideas planted by a catalyst member. These networks self-perpetuate, grow from the edges (rather than the center), and innovate more often when they exist within active, diverse communities. Thus, expertise required to conduct institutional research has expanded beyond measurement and reporting to effective brokering of knowledge and nurturing of networks. The capability within colleges to do this kind of “out of the box” thinking has become necessary to cope with the squeeze from the external environment. This view of strategy formation as an unplanned process is similar to Birnbaum’s (1988) anarchical institution model and to

PAGE 18

Mintzberg’s (1989) grassroots model. These and other models of institutional change will be further elaborated in Chapter Two.

Researchers at Columbia Teachers College have recently advocated a stronger role for institutional researchers in enriching college dialog about assessment (Morest, 2005). Moreover, Bailey, Alfonso, Calcagno, Jenkins, Keinzl, & Leinbach (2004) have advocated that institutions having higher than expected completion rates based upon their institutional characteristics should be studied for policy environments that may favor student success. Sunshine State Community College is one such case, with completion rates among full-time, first-time-in-college students 4.6% higher than expected based upon institutional characteristics (Bailey et al., Florida Community College results as reported by Chancellor J. David Armstrong, p. 1). Thus, the results of this study could potentially be important to the 275 or more large (with 2,500 or more students) rural community colleges in the U.S., a profile similar to that of Sunshine State Community College. These colleges represent 25.7% of the 1,070 publicly controlled two-year colleges in the U.S. (Katsinas, 2003, p. 21). Also, Sunshine State Community College recently completed its Southern Association of Colleges and Schools Quality Enhancement Plan site visit and was thus in an excellent position to provide information related to accreditation-driven changes to other community colleges through this research.

In conclusion, helping students successfully complete a college education has recently become an urgent mission. One of the stumbling blocks many students must overcome in this journey is college preparatory reading. Colleges are therefore using student learning assessment and learning evidence teams to improve students’ chances for success. As the use of assessment becomes more prevalent because of accreditation

PAGE 19

requirements, institutional research and teaching functions are moving closer to one another and learning more about the scholarship of teaching. The “measurement” intersection between their professions is where the data collection and interpretation process takes place. This process is critical to the use of data in ensuring the appropriate application of college resources to solve persistent problems in student learning.

Statement of the Problem

Colleges, according to higher education policy analysts (Ewell, 1997), have not adequately explained why students don’t succeed in their learning goals. However, establishing effective assessment practices that would enable widespread continuous process improvement within institutions is a complex task. To do this, colleges are re-thinking how their various functions work together to improve student learning outcomes. Thus, while colleges may seek to use measurement professionals more effectively to aid faculty in improving student learning outcomes, the picture of how the two disciplines, from different institutional cultures, can quickly establish a working relationship to accomplish accreditation goals is incomplete.

Purpose of the Study

Focusing upon the intersections between community college faculty and assessment professionals in the task of planning for the improvement of student learning outcomes, the purpose of this case study was to describe, analyze, and interpret the experiences of these professionals as they built a culture of assessment in developmental reading and writing.

PAGE 20

Significance of the Study

To begin the long evolution toward becoming cultures of evidence, colleges need to more effectively promote dialog about evidence of student learning (Maki, 2004). Pressure from governing boards for increased accountability and new accreditation rules have created a place where faculty, assessment professionals such as institutional researchers, and quality enhancement leaders must strive to improve performance within the colleges they serve. According to Kezar (2001), while cultural models at the institutional level have shown great promise in explaining the success of specific change strategies, the higher education community knows very little about how department or functional level cultures affect organizational change (p. 130).

If institutional researchers or other assessment professionals are to perform an expanded role in student learning inquiries, they must be prepared to follow faculty members into their communities of practice. Chapter Four of this study presents findings and discusses specific collaborative approaches to identifying, analyzing, reporting, and improving student learning outcomes used at Sunshine State Community College. Chapter Five then discusses implications of these findings in terms of professional development for faculty and assessment staff, operational and policy changes needed to implement effective student learning outcomes assessment strategies, and the cultural conditions that determine the intersection between faculty members and assessment professionals.

PAGE 21

Research Questions

Eight research questions were investigated in this case study of Sunshine State Community College.

1. How is the professional preparation and educational background of a developmental education faculty member like that of an assessment professional, and how is it different?
2. How is the assessment role of a developmental education faculty member like that of an assessment professional, and how is it different?
3. Which collaborative strategies serve to create common ground for faculty members and assessment professionals to work together on assessment plans?
4. Which strategies cause estrangement between faculty members and assessment professionals?
5. What role, if any, does an assessment professional play in determining how the results of student learning outcomes assessment will be used for improvement?
6. Have faculty members at the college become more like assessment professionals and assessment professionals more like faculty members in terms of their assessment roles since they began collaborating on student learning outcomes assessment?
7. If so, how have they become more alike?
8. From the perspective of respondents, which assessment approaches have shown the most promising results?

Limitations

While the co-curricular contributions of other community college faculty and staff members (as in learning resources) and academic administrators can greatly contribute to

PAGE 22

student learning, this study is focused more narrowly on the interactions between faculty members teaching developmental education courses and assessment professionals such as institutional researchers. A study of the phenomenon of collaboration, this research offers focused insights into a small but important segment of a much larger set of strategies needed to conduct effective assessment within a community college.

Another limitation is that the boundaries of this case and the authenticity of experience to each individual reader may or may not permit “naturalistic generalizations” (Stake, 1995, p. 86) concerning the applicability of aspects of the case to the reader’s own college. The technique used to enable readers to generalize from one case to another in qualitative research is the use of “rich, thick description” (Merriam, 2002, p. 29). This technique, according to Merriam, “is a major strategy to ensure for external validity” (p. 29) in a qualitative study. Cronbach called these context-specific cases “working hypotheses” (1975, in Merriam, 2002, p. 28). Precedents for these are case law and medicine (p. 29). Each reader must eventually decide on his or her own what portions of a case apply to another and which do not.

Definition of Terms

This section describes terms used in this study of the student learning outcomes assessment process.

Assessment: …is an ongoing process aimed at understanding and improving student learning. It involves making our expectations explicit and public; setting appropriate criteria and high standards for learning quality;

PAGE 23

systematically gathering, analyzing, and interpreting evidence to determine how well performance matches those expectations and standards; and using the resulting information to document, explain, and improve performance. (Angelo, 1995, p. 7)

A white paper on assessment by the League for Innovation in the Community Colleges (2004, p. 12) differentiated various types of assessment by purpose. These are paraphrased below:

Diagnostic assessment determines students’ prior knowledge. Examples of these are entry-level placement tests such as the SAT.

Formative assessment measures and gives students progress reports on their learning. Examples of these are in-class quizzes.

Needs assessments perform a gap analysis in either a student or institutional context. That is, such an assessment may determine the difference between students’ existing skills, attitudes, and knowledge and the level desired. A needs assessment may also determine the need for a particular program of instruction within a college.

Reaction assessments measure students’ opinions of learning or learning support experiences. Examples of these are course evaluations.

Summative assessments are measurements of student learning that will determine the assignment of a grade or completion of a milestone. Examples of these are midterm or final exams.

PAGE 24

Assessment Professional: A non-faculty employee or consultant of the college who provides measurement, technical, or organizational skills to the implementation of student learning outcomes assessment.

Cognitive Dissonance: Festinger (1957) defined this as a mental state in which a new experience or phenomenon clashes with one’s beliefs or expectations. According to this theory, dissonance causes discomfort, motivating the person experiencing it to find ways to reduce it, often leading to changes in beliefs or attitudes.

Models of inquiry/research: The research model chosen reflects the environment in which the subjects are studied, the researcher’s orientation to the subject matter, and whether the inquiry seeks to explain or to understand (Stake, 1995). The following are terms that describe methods of inquiry:

Action: Research on site conducted by researchers or collaborative partnerships, such as learning assessment teams, provides valuable information concerning effective practice to other practitioners (Kezar & Talburt, 2004).

Case study: The purpose of case study, a form of qualitative research, is to understand human interaction between actors within a social unit, a single instance bounded by the case worker in the process of designing the research (Stake, 1995).

Organizational culture: A college’s culture is an invisible web that connects individuals through its most cherished values, beliefs, and symbols (Peterson et al., 1999). In colleges, culture manifests itself in mission, decision-making processes, orientation to educational change, responsibility for curriculum, and commitment to educational quality (Peterson, 2000). Culture may be inferred from “what people do (behaviors), what they say

PAGE 25

(language), and some tension between what they do and what they ought to do as well as what they make and use (artifacts)” (Spradley, 1980 in Creswell, 1998, p. 59).

Sources of organizational power: “Power is the ability to produce intended change in others, to influence them so that they will be more likely to act in accordance with one’s own preferences. Power is essential to coordinate and control the activities of people and groups…” (Birnbaum, 1988, pp. 12-13). The following are terms that describe power relationships within organizations:

Coercive: This type of power “is the ability to punish if a person does not accept one’s attempt at influence” (p. 13).

Expert: This type of power “is exercised when one person accepts influence from another because of a belief that the other person has some special knowledge or competence in a specific area” (p. 13).

Legitimate: This type of power “exists when both parties agree to a common code or standard that gives one party the right to influence the other in a specific range of activities or behaviors and obliges the other to comply” (p. 13).

Referent: This type of power “results from the willingness to be influenced by another because of one’s identification with the other” (p. 13). Such is the power of peer groups.

Reward: This type of power “is the ability of one person to offer or promise rewards to another or to remove or decrease negative influences” (p. 13).

Tacit knowledge: What a person understands about the world that cannot be expressed in words or symbols is tacit knowledge (Polanyi, 1962).

PAGE 26

Summary

This chapter has highlighted some of the reasons why helping students successfully complete a college education has recently become an urgent mission. Stumbling blocks many students must overcome in this journey are college preparatory reading and writing. Therefore, colleges are using student learning assessment and learning evidence teams to improve students’ chances for success. As the use of assessment becomes more prevalent, institutional research and teaching functions are moving closer to one another and learning more about the scholarship of teaching. The “measurement” intersection between their professions is where the data interpretation process takes place. This process is a lynchpin in the use of data to ensure the appropriate application of college resources to solve persistent problems in student learning.

Chapter Two will describe the corner of the stage upon which the actors in this case, faculty and assessment professionals, conduct student learning outcomes assessment while taking their cues from governing boards and accreditation agencies. The chapter will discuss governmental and accreditation causes of the acceleration of assessment process adoption, faculty and institutional research roles, the scholarship of assessment, relevant organizational change theory, examples of action research that have the potential to improve student learning and success, measurement opportunities for colleges, the Florida policy environment in which Sunshine State Community College operates, and challenges currently faced by their students.

PAGE 27

Chapter Two
Review of the Literature

Although the policy and accreditation environments are squeezing colleges for improved student achievement, genuine transformation of institutional processes has been slow in coming. As assessment scholars have long attested (Ewell, 1997; Peterson, Augustine, Einarson, & Vaughan, 1999; Banta, 2002), creating the conditions under which institutionally transforming assessment flourishes is a complex leadership process. A change in mental models (Senge, Kleiner, Roberts, Ross, & Smith, 1994) used to conceptualize and solve problems usually precipitates this transformation. Further, the college’s cultural foundations that erode during the change process need to be shored up with new symbols, rituals, and practices (Weick, 1995). Therefore, this chapter will describe relevant organizational change theories. It will also explore the evolving roles of faculty and assessment professionals and describe the common interests of these professions. These writings should help to place the conclusions from interviews, field observations, and documents into a larger context: where actors at individual colleges must assess student learning outcomes and communicate their interpretations of that data to help college leaders navigate through a maze of strategic choices, thus improving student learning.

PAGE 28

Mandates of the Higher Education Act

The U.S. Congress has been taking an increasingly active role in developing standards for accreditation (Wergin, 2005). In the 1992 reauthorization, lawmakers created a required list of items to be included in evaluations (such as college mission). The proposed extension pushes colleges to focus upon learning outcomes. If approved, the legislation would force increased transparency of accreditation reports to the public. In this demanding environment, accrediting agencies must not only ensure quality through peer review, but improve quality, as well. It is in this arena that continuous quality improvement through assessment has become de rigueur.

New Pressures from Accrediting Agencies

Community colleges which underwent scrutiny from the Southern Association of Colleges and Schools (SACS) in 2004 say that the agency is now shooting with real bullets: the outcomes of general education must be defined and each college must provide proof it has an ongoing learning assessment process. The SACS accreditation principles approved by the Commission on Colleges in 2001 institutionalized continuous process improvement through the adoption of Quality Enhancement Planning (Core Requirement 2.12). According to experts on assessment, the development of effective faculty-staff collaboration strategies on assessment may take up to seven years (Larson & Greene, 2002). However, colleges subject to the 2001 Principles of Accreditation of the Southern Association of Colleges and Schools have recently had to develop and implement Quality Enhancement Plans to measurably improve student learning within a span of one to two years. Required elements of a Quality Enhancement Plan include:

PAGE 29

(1) a focused topic (directly related to student learning),
(2) clear goals,
(3) adequate resources in place to implement the plan,
(4) evaluation strategies for determining the achievement of goals, and
(5) evidence of community development and support of the plan (SACS Resource Manual, 2005, p. 21)

Forging a path through this undiscovered country, the North Central Association of Colleges and Schools (NCACS), in conjunction with the American Association for Higher Education (and funded by the Pew Charitable Trusts), developed a document in 1999 called Levels of Implementation (Lopez, 2000) that describes progressive changes to organizational behavior observed in colleges as they adopt a culture of assessment. After reviewing 432 case studies (representing 44% of member institutions) from review teams, NCACS developed a matrix of institutional culture variables. Cultural variables included shared values, college mission, and shared responsibility (among faculty, administration and board, and students). Institutional support variables included resources and structure. The fourth and final variable was efficacy of assessment. As did Peterson et al. (1999), Lopez found that institutional culture played a pivotal role in the successful use of assessment for improvement. Institutions in the Lopez study were classified as having attained one of three progressive stages for each of the institutional culture variables: beginning, making progress, or maturing stages of continuous improvement. This evaluative tool can be used by institutions in conducting self-assessments of their internal processes in preparation for re-accreditation (p. 3).

PAGE 30

The Lopez (2003) matrix contains three broad measures of institutional progress in improving assessment culture: beginning, making progress, and maturing stages of continuous improvement. Within this framework, a college making progress on improving the collective values of its institutional culture, the first component of the matrix, should demonstrate the following behaviors (relevant to the scope of this study):

…Student learning and assessment of student academic achievement are valued across the institution, departments, and programs. (p. 71)

A college making progress on improving its expressed mission, also part of institutional culture, should match this description:

Some but not all of the institution’s assessment efforts are recognizably expressive of the sentiments about the importance of assessing and improving student learning found in the Mission and Purposes statements. (p. 72)

For a college to be making progress on the second component of the matrix, shared responsibility, faculty could be characterized by the following statements:

…Faculty members are taking responsibility for ensuring that direct and indirect measures of student learning are aligned with the program’s educational goals and measurable objectives….

Faculty members are becoming knowledgeable about the assessment program, its structures, components, and timetable.

Faculty members are learning the vocabulary and practices used in effective assessment activities and are increasingly contributing to assessment discussions and activities.

PAGE 31

After receiving assessment data, faculty members are working to “close the feedback loop” by reviewing assessment information and identifying areas of strength and areas of possible improvement of student learning. (p. 73)

For a college to be making progress on the second component of the matrix, shared responsibility, students could be characterized by these statements:

…There is student representation…on the assessment committees organized within the institution.

The institution effectively communicates with students about the purposes of assessment at the institution and their roles in the assessment program. (p. 75)

The final piece of the shared responsibility component of the matrix is that of the administration and Board. In order to be making progress, the administration and Board should demonstrate the following behaviors:

The Board, the CEO, and the executive officers of the institution express their understanding of the meaning, goals, characteristics, and value of the assessment program, verbally and in written communication….

The CAO [Chief Academic Officer] arranges for awards and public recognition to individuals, groups, and academic units making noteworthy progress in assessing and improving student learning. (p. 74)

The third component of the Lopez matrix is institutional support. In order to be making progress in institutional support, a college must provide resources. These are the characteristics of institutional support resources when a college is making progress:

PAGE 32

…In institutions without an Office of Institutional Research (OIR), knowledgeable staff and/or faculty members are given release time or additional compensation to provide these services….

Resources are made available to support assessment committees seeking to develop skills in assessing student learning.

Resources are made available to departments seeking to implement their assessment programs and to test changes intended to improve student learning.

The institution provides resources to support an annual assessment reporting cycle and its feedback processes.

Assessment information sources such as an assessment newsletter and/or an assessment resource manual are made available to faculty to provide them with key assessment principles, concepts, models, and procedures. (p. 76)

In order to be making progress in institutional support, a college must also provide structures. These are the characteristics of institutional support structures in such colleges:

There is an organizational chart and an annual calendar of the implementation of the assessment program….

The CEO or CAO has established a standing Assessment Committee, typically comprised of faculty, academic administrators, and representatives of the OIR and student government.

PAGE 33

The administration has enlarged the responsibility of the OIR to include instruction and support to the Assessment Committee, academic unit heads, and academic or program faculty….

Some or many academic units and the Curriculum Committee are requiring that faculty members indicate on the [course] syllabi…and programs the measurable objectives for student learning and how student learning will be assessed.

Members of the Assessment Committee serve as coaches and facilitators to individuals and departments working to develop or improve their assessment programs and activities.

The Assessment Committee is working with unit heads and with faculty and student government leaders to develop effective feedback loops so that information (about assessment results…) can be shared with all institutional constituencies and used to improve student learning. (p. 78)

PAGE 34

The fourth and final component of the Lopez matrix is the efficacy of assessment. Colleges making progress in improving the efficacy of assessment demonstrate the following characteristics:

…The data the assessment program collects are not useful in guiding effective change.

Assessment data are being collected and reported but not being used to improve student learning.

Faculty members are increasingly engaged in interpreting assessment results, discussing their implications, and recommending changes in academic programs and other areas…to improve learning….

Assessment findings about the state of student learning are beginning to be incorporated into reviews of the academic program….

The conclusions faculty reach after reviewing the assessment results and the recommendations they make regarding proposed changes in teaching methods, curriculum, course content, instructional resources, and academic support services are beginning to be incorporated into…planning and budgeting processes. (p. 79)

Colleges may be rated differently among the various components of the Levels of Implementation matrix. However, the making progress ratings described above represent the midpoint of the scale, with the beginning implementation falling below and the maturing stage of continuous improvement rising above. The matrix may be used by “Consultant-Evaluators on Evaluation Teams” (Lopez, 2000, p. 6), by institutions studying the progress of their individual units, or as the basis of further institutional research. By comparison to the 2001 SACS Principles of Accreditation, a college that did not receive a recommendation on any of the Core Requirements (2.1–2.12) and did reasonably well on the Comprehensive Standards (3.1.1–3.10.7) after re-accreditation review would likely be at least making progress on most of the components of the NCACS matrix.

Robert Mundhenk (2004), the former director of assessment for the American Association for Higher Education, believes that community colleges are uniquely suited to balancing what Thomas Angelo (1999) calls the tension between assessment for accountability (governance) and assessment for improvement (accreditation). Examples of this suitability include vocational programs (certificate and Associate in Applied Science degrees), which are typically responsive to program advisory committees, and transfer programs (Associate in Arts/Associate in Science degrees), which are beholden to state universities for the quality of their students’ general education. Through proactive efforts to reorganize for learning assessment and thus improve student success, community colleges may be able to stave off threats to funding and autonomy, keeping their open-door colleges (Roueche & Roueche, 1994) open.

The Scholarship of Assessment

Establishing effective assessment practices within institutions is a complex task. A comprehensive literature review undertaken in the late 1990s yielded a conceptual view of organizational and administrative support for learning assessment (Peterson, Augustine, Einarson, & Vaughan, 1999). In this model, support for assessment, policies and practices, assessment culture and climate, external influences (such as accreditation), internal and external uses of assessment, and institutional context (such as public or private control and size) shaped a college’s institutional approach. The effectiveness of assessment thus had many dependencies, and factors outside of the classroom often impacted student learning.

The targets of these assessments were “cognitive, affective, and behavioral dimensions of student performance and development” (p. 2). In effective colleges, the results of assessment were used for improving instruction, and there was a common understanding of purpose.

Peterson et al. (1999) conducted this comprehensive study of U.S. colleges and universities offering associate and bachelor’s degrees in January 1998. The researchers received 1,393 surveys (out of 2,524 mailed) for a 55% response rate (1999, p. 10). Of the surveys received, 548 were from community colleges. A profile of student assessment activity at community colleges emerged from this research. Seventy-three percent of community colleges said they had a governing body to oversee planning and policy change for student assessment (p. 73). Sixty-seven percent said that institutional research was a part of that group (p. 74). Forty-nine percent of community colleges indicated that institutional research staffers had operational responsibility for student assessment related activities (p. 75). However, where institutional researchers were engaged in student assessment activity, they rarely assumed a leadership role. Only 18% of community colleges said that an institutional research officer had executive responsibility for the institution-wide assessment planning group, and only 20% indicated that an institutional research staffer had approval authority over changes to student learning assessment (p. 74). Further, only 2% of institutional researchers with operational responsibility for student assessment reported to an institutional research officer. Instead, 37% reported to the chief executive officer and 43% reported to the chief academic officer (p. 75). This indicates, as expected, that researchers in most colleges play a support role in assessment. However, more than half of community colleges reported that they did not maintain an office to support faculty in developing curriculum or assessment strategies. Clearly, not all college “assessment” activity was tightly connected with classroom activity.
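
As a quick arithmetic check on the survey figures reported above, the short sketch below recomputes the response rate and the community college share of returns; the counts come from the text, and the variable names are illustrative.

    # Recompute the survey percentages reported by Peterson et al. (1999).
    mailed = 2524
    returned = 1393
    community_college_returns = 548

    response_rate = returned / mailed                # about 0.55
    cc_share = community_college_returns / returned  # about 0.39

    print(f"Response rate: {response_rate:.0%}")                  # 55%
    print(f"Community college share of returns: {cc_share:.0%}")  # 39%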

The researchers also found that community colleges faced different kinds of assessment implementation problems than did other kinds of institutions because of differences in governance (p. 6). For example, while the faculty had less power and autonomy, administrators wielded more power than in four-year colleges. This imbalance of power, relative to other types of colleges, may have found administrators making decisions about whether (or how) faculty should do assessment, which opportunities they would have to learn how to do it, and how much support they would receive. Other features unique to community colleges were the diversity of both the community college mission and the student population (in comparison with four-year colleges). These differences, the authors said, should be taken into consideration when developing a college-wide assessment plan.

The study also found differences between community colleges and all institutions in the types of assessment data collected. First, community colleges were less likely to collect cognitive data on students’ higher-order thinking, general education, and major field progress and more likely to collect data on basic and vocational skills. Second, community colleges were less likely than all institutions to assess students’ affective and behavioral development in areas such as personal and affective development, institutional involvement, student satisfaction, and academic progress. Community colleges, however, measured academic intentions more often than did all institutions.

Third, community colleges were less likely than all institutions to link student engagement with academic performance. The researchers therefore concluded that community colleges were not fully engaged with student assessment (or as engaged as four-year institutions), perhaps because of the difficulties in conducting assessments on largely part-time commuter student populations.

While Peterson et al. (1999) provided a snapshot of the way colleges had organized to do assessment from the perspective of policy, structure, and climate, other assessment scholars studied the improvement of assessment practice through action research. The assessment “movement” began to capture the attention of higher education in the early 1990s with the publication of the nine “Principles of Good Practice for Assessing Student Learning” in a volume of Assessment Update (Banta, 2004). A decade later, Trudy Banta published a set of hallmarks characteristic of organizations that are fully engaged in effectively planning, implementing, improving, and sustaining assessment. Almost a dozen assessment scholars contributed their wisdom to this effective practice matrix. The major activities (planning, implementing, and sustaining) are similar to those in action research: “plan, act, observe, and reflect” (Suskie, 2004, p. 8). In Banta’s (2004) model, the sustaining activity is comparable to a combination of the observation and reflection activities in action research, enabling the college to learn not only from the assessment measures, but also from the assessment process. These effective practices, synthesized below, include:

Planning:
· External Influences. Soliciting support from key stakeholders outside of the college, including governing boards, employers, and community representatives

· Engaging stakeholders. Encouraging involvement from internal constituents, including administrators, faculty, staff, and students
· Focusing on goals. Stating a clear purpose and relating strategies to institutional goals and values
· Developing a plan. Incorporating assessment approaches based upon explicit program objectives
· Allowing Time. Scheduling sufficient time to develop plans in response to a recognized need

Implementing:
· Methods. Using multiple measures to allow triangulation of findings
· Faculty Development. Developing faculty and staff expertise to implement assessment and use findings
· Leadership. Selecting knowledgeable leaders who make assessment everyone’s job, maintain unit-level responsibility for the process, and conduct assessment on processes, not just outcomes

Sustaining:
· Interpreting Findings. Providing a continuously supportive, non-judgmental environment and communicating continuously (in plain English) with stakeholders and participants about processes, outcomes, and findings that serve as guides to improvement
· Reporting Results. Documenting valid and reliable evidence of student learning and institutional effectiveness, thus demonstrating institutional accountability to students, Board members, and the community at large

· Using Results. Using results of ongoing assessments to improve programs and services
· Recognizing Success. Recognizing individuals who contribute and celebrating unit success stories
· Improving Assessment. Institutionalizing evaluation and improvement of the assessment process itself (Banta, 2004, pp. 2-8)

These hallmarks of good practice in learning assessment described by Banta provide a number of foreshadowed questions (Stake, 1995) for the field research within this case. Among these is the hallmark “interpreting findings.” This aspect of sustaining assessment planning makes a crucial connection between the analysis and reporting of data, and between reporting and the use of results. The first instance of interpretation, following analysis, is conducted from a measurement perspective. For example, “Was the finding of practical significance?” The second instance of interpretation is conducted from an institutional perspective. This occurs when assessment is linked to program review (Smith & Eder, 2004) or when institution-wide effectiveness committees try to make sense of the information (Birnbaum, 1988) in light of what they believe about the college and its students. Interpretation from the institutional perspective is thus a critical connection to the application of resources in the use of results.

A larger view of sense-making, however, is that it signifies much more than interpretation. Sense-making is actually a process of invention (Weick, 1995, p. 11), reducing the dissonance created when individuals are confronted with new realities.

The stories people create to explain circumstances and events bring new perspectives to be shared. This creative process forms new culture, replacing the broken symbols and antiquated rituals that institutional change has left in its wake.

One Corner of the Community College Stage

To improve teaching and learning in community colleges, college leaders provide professional development opportunities and enhance the conditions in which faculty members and institutional researchers exchange insights and formulate new strategies for curriculum, instruction, and assessment. By pooling their measurement expertise and teaching intuition and trying out new solutions to student learning problems, faculty members and researchers may gradually move the college toward graduating greater numbers of students. The development of an assessment culture, in which measurement is formative and faculty and staff members feel free to share their results (good or bad) without fear of recrimination, is essential to this partnership. As shown in Figure 1, the nexus between faculty and assessment professional roles as they collaborate on student learning, instruction, and measurement is the subject of this study.

Figure 1: The Nexus between Faculty and Assessment Professional Roles (figure labels: Faculty Role; Assessment Professional/IR Role; Assessment)

Faculty

Cohen and Brawer (1996) commented that “as arbiters of the curriculum, the faculty transmit concepts and ideas, decide on course content and level, select textbooks, prepare and evaluate examinations, and generally structure learning conditions for students” (p. 73). This description implies great faculty control over the learning environment. However, an analysis of the 1999 National Study of Postsecondary Faculty (Wallin, 2005) determined that 63% (p. 21) of U.S. community college faculty were part-time. This portion of the faculty grew from only 52% (p. 15) in 1988. In Florida community colleges, part-timers accounted for 75% of 2003-2004 faculty headcount, but taught only 45% of all course sections (Windham, 2005, Number and percentage, p. 1). Wallin (2005) portrays the faculty community as a house divided between the haves (full-timers) and have-nots (part-timers), who typically work for meager compensation (and few benefits) to maintain the financial viability of colleges. They often receive little orientation or pedagogical training before teaching a class and spend little time on campus, thus minimizing opportunities for student-faculty interaction (one of the most important determinants of student success). Community college policy makers are understandably concerned about this trend. Wallin therefore recommends a variety of strategies to bring part-time faculty into the heart of their colleges through hiring practices, professional development, involvement in college committees, and collaboration on curriculum with full-time faculty. As it appears that the part-time faculty phenomenon is here to stay, it would behoove community colleges to involve them as partners in assessment activities with full-time faculty and institutional researchers.

According to a recent study of the Florida Community College system, this is particularly true for faculty teaching developmental courses (Windham, 2005, Number and percentage). A study of Fall 2003 course sections by academic category revealed that part-time faculty taught 63% of all college preparatory sections state-wide. This percentage for college preparatory instruction stood in contrast to the Advanced and Professional (AP) category, in which the vast majority of courses carried college credit and university transfer attributes. Part-time faculty taught only 40% of AP course sections state-wide (p. 5).

Recently, with a renewed push by organizations such as the Lumina Foundation, colleges have been striving to greatly improve the chances for under-prepared students’ success, despite their disadvantages in family background, income, culture, and work status (Tinto, 2004). While a variety of academic support and student socialization strategies have improved success rates among at-risk students (Roueche & Roueche, 1993), faculty members have remained ultimately accountable for student learning. While a college faculty member has traditionally developed classroom instruction for students as a “solo act,” there is a new emphasis by accreditation agencies such as the Southern Association of Colleges and Schools (SACS, 2001) upon inquiry-based curricular processes. Faculty members and others who take part in learning communities (Milton, 2004) are documenting their systematic inquiries into student learning in courses and programs for the benefit of their institutions and their peers.

Meaning, practice, community, and identity, the components of Wenger’s (1998) social learning theory, are exemplified by faculty learning communities. First, meaning can be either individual or collective, but the way people experience life and the world around them is continually changing.

This is particularly true for colleges transforming under external pressures. Second, practice “is a way of talking about the shared historical and social resources, frameworks, and perspectives that can sustain mutual engagement in action” (p. 5). It is through practicing the art of interpretation among multiple stakeholders that a college is able to connect its needs with resources that can meet those needs. Third, community lends value and recognition to individual and collective pursuits. By recognizing faculty and staff members who are assessment “success stories,” each member of the institution learns to place value on the effort. Fourth, identity provides a framework for considering individual growth in the context of one’s community. Faculty members who have taught for many years no longer need to feel that they’ve hit a plateau and can advance no further. Assessment for internal improvement provides mature faculty a means of continuing professional growth and improving stature. All of these experiences are available to faculty who actively share knowledge about assessment within local communities of practice. It is within this culture, with resource support from administrators and technical support from assessment professionals, that improving student learning outcomes through assessment activities becomes possible (Banta, 2004).

Supporting this finding, Grunwald & Peterson (2003) found that the strongest predictor of faculty satisfaction with and use of assessment was the college’s intention to use student assessment for improvement (as opposed to accountability). This variable accounted for 29% of their model’s variance (p. 193). Their study included a mix of community college and four-year institutions, randomly sampling a set of 200 tenure-track faculty (at each larger institution) and all administrators involved with assessment activities.
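
To make the idea of “variance explained” concrete, the sketch below fits a simple linear model of faculty satisfaction on two perceived-climate predictors using simulated data. The variable names, the simulated dataset, and the use of ordinary least squares are illustrative assumptions, not a reconstruction of Grunwald and Peterson’s analysis.

    # Illustrative only: a linear model predicting faculty satisfaction with
    # assessment from perceived institutional intentions (simulated data).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 200
    df = pd.DataFrame({
        # 1-5 scale: the college intends to use assessment for improvement
        "improvement_intent": rng.integers(1, 6, n),
        # 1-5 scale: the college emphasizes external accountability
        "accountability_focus": rng.integers(1, 6, n),
    })
    # Simulated satisfaction driven mainly by improvement intent
    df["satisfaction"] = (2.0 + 0.6 * df["improvement_intent"]
                          - 0.2 * df["accountability_focus"]
                          + rng.normal(0, 1, n))

    model = smf.ols("satisfaction ~ improvement_intent + accountability_focus",
                    data=df).fit()
    print(model.rsquared)  # share of variance explained by the predictors
    print(model.params)    # direction and size of each predictor's effect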

The instrument used in the study, the Institutional Climate for Student Assessment (ICSA), took a snapshot of the attitudes and behaviors of faculty, institutional researchers, and academic administrators as indicators of institutional climate. Peterson (2000) used these measurements as background information for developing case studies in student learning outcomes assessment. By measuring these indicators over time, researchers could potentially gain insights into the changes in organizational culture that favored the use of and satisfaction with assessment for the improvement of teaching and learning.

Grunwald & Peterson’s (2003) national study of seven colleges and universities known for promoting the use of assessment for decision-making identified institutional context variables, as well as faculty and institutional characteristics, as reasons for the amount of faculty satisfaction and involvement in assessment processes. Satisfaction with approach to institutional assessment was greatest where faculty perceived that student assessment was frequently used for improvements in academic programs, student achievement, and instructional effectiveness. Likewise, satisfaction with approach was greatest where faculty believed that assessment had a major impact upon student retention, graduation, and satisfaction, or secured valuable external benefits such as grants or accreditation. However, overall faculty satisfaction with approach to institutional assessment was minimal where external accountability or governance was the major focus of assessment efforts.

Institution-wide activities such as faculty and student governance committees on assessment produced high satisfaction with support for assessment. However, faculty instructional impact was the highest predictor of satisfaction with institutional support.

This suggested that an avid interest in teaching and changes to instructional methods accompanied greater levels of satisfaction with institutional support for assessment. The lowest levels of faculty satisfaction with support for assessment, however, came from those who reported that institutional assessment had a variety of educational uses such as student affairs activities, distance learning initiatives, and resource allocation between institutional units.

Faculty involvement with assessment was enhanced by the use of assessment results in making decisions about tenure, promotion, or salary. However, by far the greatest predictor of involvement in assessment was faculty attitude toward assessment. Where a faculty member believed that assessment would lead to improved student learning, greater accommodation for diverse learning styles, and enhanced teaching effectiveness, faculty reported more involvement in assessment activities.

Enhancing a faculty member’s predisposition to use assessment toward improved course performance is an important step in professional development that is often neglected. Although seminars on assessment techniques usually produce a lot of enthusiasm, Kurz and Banta (2004) found that they could convince faculty of the value of using assessment with some individual guidance from instructional experts. These experts helped faculty in at least two ways. First, they assisted faculty in breaking down impasses in course learning into tiny steps, identifying the necessary pieces of prerequisite knowledge and practice and determining the sequence. Second, the experts suggested a number of simple Classroom Assessment Techniques (CATs; Angelo and Cross, 1993) for detecting small changes in learning, either in particular students or in groups of students.

Examples of CATs used were the minute paper, muddiest point, and concept map (Angelo and Cross, 1993, in Kurz and Banta, 2004, p. 89). Other faculty used segments of exams or quizzes as pre- and post-assessments of learning. The researchers found pre-/post-measures to be effective in providing clear and convincing evidence of changes in students’ learning. Further, some participating faculty remarked that students “spontaneously expressed gratitude for the feedback provided by the assessments, and others commented that their students clearly felt empowered by these experiences” (p. 93). The conclusion of the study was that successful classroom assessment should be “simple and closely tied to the course and its learning experiences” (p. 93).

Institutional Researchers (IR) as Assessment Professionals

While researchers typically occupy a support role in learning outcomes assessment, a recently released study by Columbia University’s Community College Research Center advocated a much stronger role for IR in community college leadership and involvement in student learning issues (Morest, 2005) than is typical in colleges today. Morest, a presenter at the 2005 League for Innovation in the Community Colleges Conference, had these questions in mind when surveying a national sample of colleges:
· What are the capabilities and potential of institutional research?
· What data sources and methods are typically used?
· What are the priorities of institutional research, who sets these priorities, and what are the anticipated audiences? (p. 3)

Eighty-five out of a sample of 200 colleges responded to an electronic survey, and researchers personally interviewed staff from 30 colleges in 15 states (p. 2).

Morest found IR functions at these colleges to be thinly supported in a preliminary report on the research. Only 27% of colleges had IR departments of 1.5 full-time equivalent employees or more, 40% had a single IR position at the college, and 19% of colleges split IR with other duties (p. 5). Researchers calculated frequencies and rates, but did not typically perform detailed analyses of data. The top priorities of IR were (1) accreditation, (2) retention, (3) graduation, (4) program review, and (5) enrollment. While more than half of responding colleges had faculty on IR committees, only 20% reported faculty involvement with projects, and 25% reported little or no faculty involvement with IR at all (p. 6). Fully 85% of those surveyed indicated a need for additional staff, and almost 32% needed “upper level college administration to utilize institutional research” (p. 10) already gathered. Morest thereby concluded that at community colleges, “the focus of IR appears to be primarily related to college management, not research” (p. 11).

For colleges that wish to make better use of institutional research, the expertise required to conduct IR has evolved from mere reporting of college inputs and outputs (such as enrollment and graduation) in the 1960s to advanced measurement skills and knowledge gained from sources like the Association for Institutional Research and the certificate program at Indiana University (among others) in 2005. Today, IR may be called upon to convert knowledge and understanding from its tacit (task-oriented) form into explicit (transferable) form in a process called knowledge management (Treat, Kristovich, & Henry, 2004). This explicit knowledge may then be transferred to other units of the college or to other colleges (i.e., through research journals, conferences, and professional discourse). However, one does not always have to convert tacit into explicit knowledge to transmit its unspoken wisdom to others.

Sometimes institutional processes are not at all rational, but “messy.” By providing rich description and interpretation of an instance involving people and events, researchers may create case studies to enable others to understand the situational application of specific tacit knowledge.

Because new knowledge changes the old order of cultural foundations and political connections, colleges need to continually renew themselves by creating new culture and connections. In that case, people who work together engage in sense-making (Weick, 1979), a four-stage process. In organizing for this process of socially constructing meaning, people in an institution first experience something new in their environment (ecological change). In the second stage (enactment), they realize that the new phenomenon requires their attention. In the third stage, these occurrences take on a name (selection). This enables the college in the fourth stage to retain a common vocabulary and mutual understanding of what the occurrence means (retention). These constructed meanings filter people’s focus so that they see only these defined patterns within their environment, thus reinforcing socially constructed meanings.

People who study phenomena in the field (such as socially constructed knowledge) may describe such an instance of complex relationships between people, objects, and institutions. This is known as a case study (Stake, 1995). Educational researchers have begun to incorporate case study methodology into their repertoire of inquiry methods (Southeastern Association of Community College Researchers, 2005).

According to Richard Voorhees, a past president of the Association for Institutional Research, an alternative job of IR is to feed networks (2003). New ideas may germinate in unpredictable ways from the seeds of ideas planted by a catalyst member.

These networks self-perpetuate, grow from the edges (rather than the center), and innovate more often when they exist within active, diverse communities. Thus, the expertise required to conduct institutional research has expanded beyond measurement and reporting to the effective brokering of knowledge and nurturing of networks. The capability within colleges to do this kind of “out of the box” thinking has become necessary for them to cope with the squeeze from their external environments.

Other Players: Theories of Organizational Change

Colleges are complex organizations whose purpose may differ depending upon the perspective of the observer (Birnbaum, 1988). One benefit of studying organizational theory is to allow a college decision maker to try on the perspectives of others when weighing the potential consequences of his actions. Deciding how to get things done means choosing which types of power to wield to accomplish college goals and meet the expectations of stakeholders. College leaders may coerce staff members to comply with distasteful edicts, but in most cases the leaders will suffer a backlash of staff resentment as a consequence. While leaders may be able to use legitimate power with people who agree with their subordinate status, faculty members are more likely to respond to influence exercised through a leader’s status as a knowledge expert. The next paragraphs will discuss the various means by which college leaders may influence organizational behavior.

Structure and Function: Organizations as Bureaucracies

Bolman & Deal (2003) use the concept of reframing to understand the complex nature of organizations by looking at them from multiple perspectives.

One such perspective comes from looking at colleges as if their organization charts defined them. Assumptions of the structural frame include management by objectives, division of labor, coordination and control, rational decision-making, form dependent upon task and technology, and structural change as a remedy for performance deficiency (Bolman & Deal, 2003). This scientific approach to management was first advocated in the early 1900s by industrial researcher Frederick Taylor (p. 45). A complex structure known as a matrix came into popularity in the 1960s to accommodate the growth of global corporations. While a corporate matrix might have countries on one axis and product lines on another, colleges might have campuses on one axis and functional departments on another. On the other hand, many colleges operate as “professional bureaucracies” (p. 77). Instead of a characteristic pyramid, the organization has a flattened structure because there are many workers (professors) and few levels of authority between them and the college Chief Executive Officer. Restructuring may occur to accommodate changes in the environment, technology, growth, or leadership. In particular, changes to college organizational hierarchies may occur in response to accreditation requirements.

Power Relationships: The Political Frame

A second frame for understanding organizational behavior is political. While the coercive, ugly side of politics is what often springs to mind, “Politics is simply the realistic process of making decisions and allocating resources in a context of scarcity and divergent interests” (p. 181).

The political frame helps to explain the uses of power, the resolution of conflict, and the formation of influential coalitions with like “values, beliefs, information, interests, and perceptions of reality” (p. 186) that may serve to redistribute power within an organization. Here, the boundary between the transformation of culture and the development of influential coalitions of like-minded individuals becomes blurry. However, as college culture changes to accommodate student learning outcomes assessment, so may the distribution of political power shift within the college (Kezar, 2001, p. 41).

Professional Development: The Human Resources Frame

The human resource frame looks at the developmental needs of the humans (Bolman & Deal, 2003) who carry out the college mission and either directly or indirectly influence student learning outcomes. Faculty influence learning outcomes directly through their roles as “arbiters of the curriculum” (Cohen and Brawer, 1996, p. 73). Assessment professionals, on the other hand, influence learning outcomes indirectly through their support roles in the evaluation of curriculum and instruction. Professional development is a driving need within an organization undergoing rapid change (Bolman & Deal, 2003, p. 372), either because of internal feedback from learning assessment results or external accountability demands. Learning new assessment concepts and methods toward fulfilling accreditation requirements, for example, eases the tensions caused by the upheaval of faculty roles within the college during periods of change.

Values, Beliefs, and Symbols: The Cultural Frame

Finnish philosopher Georg Henrik von Wright (1971) distinguished between explanation and understanding in the study of human interaction, saying that understanding had a humanist emphasis. It allowed researchers to consider the aims and purposes of the actors in the course of unfolding events and the significance of cultural symbols and rites. While Bolman and Deal separated the symbolic frame from the structural, political, and human relations frames to explain organizational behavior from different points of view, other researchers viewed these frames as cultural types. A view of organizations from this perspective came from Wilkins & Ouchi (1983). Their study involved ethnographic data collection and observation, exploring the relationship between culture and performance. The authors created a paradigm for understanding the contribution of culture. The paradigm explained:
· under what conditions organizations developed a wealth of shared social knowledge,
· the relationship of culture to organizational efficiency, and
· how these perspectives could help researchers to understand cultural change within organizations.

Conditions that favored the development of shared knowledge within an organization might be determined by the exchange system operating within the culture. Wilkins and Ouchi found that a theory of transaction costs best explained why one type of culture was more efficient than others for certain types of exchange. In a market system of cultural governance, competition forces parties to establish a fair price for commitments. Market systems assume that a fair price can be determined. Where the pricing system is more ambiguous, a bureaucratic system may evolve. This has the advantage of reducing uncertainty for employees, who receive a regular paycheck.

For these wages, they submit to supervision, allowing the employer to minimize the potential for behavior characterized by self-interest. Clan systems, however, deal with the problem of self-interest in a different way. By sharing a rich base of social knowledge, they see the exchange of favors as congruent, if not always equal. The trust that results from this belief in goal congruence causes them to believe that they will come out even in the long run. The clan system requires the most work to maintain its culture. Factors that favor clan culture include stable membership, the lack of cultural alternatives, and the extent of interaction between members.

Efficiency in clans comes from the shared knowledge they bring to solving new problems, reducing miscommunication and misunderstanding. Also, their goal congruence establishes trust. Their shared stories give them a foundation for believing in the superiority of the collective. Clan culture, most often associated with faculty members in colleges, is the most efficient form of organization for work that is complex, ambiguous, or involves interdependent exchanges. Market or bureaucratic cultures, most often associated with college business officers, are more efficient for exchanges that are simple and unambiguous.

Clan cultures tend to be more adaptive to change, especially those that focus more on principles than practice. A shared commitment to practice gives members more flexibility in solving unforeseen problems. Thus, understanding how change affects various cultural types differently helps to explain the benefits of using multiple leadership strategies within culturally complex institutions. Clan culture is an appropriate model for viewing faculty and their perceptions of organizational change within U.S. community colleges.

The values, rituals, and beliefs of faculty culture, however, stand in contrast to those of the college administrator. “A major frustration of life in loosely coupled systems is the difficulty of getting things to work the way the administrator wants them to” (Birnbaum, 1988, p. 39). While the exercise of power is necessary to produce changes in individual behavior, coercive power alienates those subjected to it. More acceptable to faculty members are referent (influence) and expert (competence) power. According to Kezar, “Administrative power is based upon hierarchy; it values bureaucratic norms and structure, power and influence, rationality, and control and coordination of activities” (2001, p. 72). Community colleges are especially likely to have bureaucratic decision-making processes, says Kezar. Faculty participation is often limited to what is determined through collective bargaining. While attempting to maintain internal stability and control, administrators are buffeted by powerful environmental influences such as funding, enrollment, accountability, and accreditation. A political frame can sometimes be used to describe the manner in which such organizations function. “Political or dialectical models sometimes share assumptions with cultural models. Political models examine how a dominant culture shapes (and reshapes) organizational processes; this culture is referred to as the power culture” (p. 41). The very different cultures in which faculty and administrators (e.g., institutional researchers) typically operate set the scene for a clash of potentially competing interests.

Re-framing: Understanding the Complexity of Organizational Change

Bolman & Deal (2003) have advocated that leaders consider a broad range of strategies for undertaking organizational change.

In implementing student learning outcomes assessment plans, failing to deal with an important aspect of the transition can spell failure for college initiatives. First, to overcome the anxiety people feel about organizational change, leaders should provide support, engage people in the process, and ensure professional development in assessment strategies and techniques. Second, to deal with the loss of stability, leaders should communicate with people, negotiating new policies and processes (including position responsibilities and institutional rewards). Third, to ease tensions between the empowered and the powerless, leaders should create venues for negotiating issues and interests. Examples of these are Institutional Effectiveness or Quality Enhancement Planning committees. Fourth and finally, leaders should cope with the loss of meaning by saying goodbye to the past while welcoming the future. For instance, Davidson County Community College said goodbye to its discarded strategic planning system by shredding old planning documents and burying them under a tree (Lobowski, Newsome, & Brooks, 2002). The symbol of their new planning process became the flip chart, representing the broadly participative process they had just adopted. Collectively, these four are essential strategies for dealing with change in the structural, human resource, political, and symbolic frames (Bolman & Deal, 2003, p. 372).

Loose-tight Coupling and Organization Change

Taking an alternative view of organizational change within colleges, Robert Birnbaum (1988) described academic institutions as collections of interacting subsystems that are either tightly or loosely coupled. Tight coupling permits units to act upon requirements of their college’s environment with a timely and rational response.

For example, the collective functions of a college’s business office (finance, accounting, purchasing, human resources, and payroll) are a tightly coupled subsystem. This type of intersection between units allows the college to remain in compliance with federal, state, and professional regulations, thus keeping the college’s doors open to students. The business office subsystem typically resembles a bureaucratic organizational model, characterized by rational decision-making processes, hierarchical structures, specific roles and responsibilities tied to job descriptions, and the exercise of authority through legitimate power. These staff members are reminiscent of Wilkins & Ouchi’s (1983) bureaucratic culture.

Faculty members, on the other hand, are loosely coupled with the administrators to whom they “report.” Faculty have great autonomy and academic freedom in their collegial subsystem, and “The collegium’s emphasis on thoroughness and deliberation makes it likely that a greater number of approaches to a problem will be explored, and in greater depth, than would be true if greater attention were paid to efficiency and precision” (Birnbaum, 1988, p. 99). As such, they closely resemble members of Wilkins & Ouchi’s (1983) clan culture. In the collegial subsystem, individuals are considered equals. Therefore, faculty members are won over through the use of expert or referent power. In contrast to bureaucratic systems, the exercise of legitimate power within collegial units is considered a threat to faculty autonomy. Further, the exercise of coercive power carries the risk of alienating the very people who carry out the teaching and learning mission of the college. Instead, Birnbaum (1988) suggests that institutional leaders should follow these rules in leading collegial subsystems effectively (pp. 102-104):

· Conform to values treasured by group members to establish trust.
· Make the timely decisions the group expects of a leader.
· Use customary channels for communications with group members.
· Before issuing an edict, make sure that the terms are fair to the group.
· Listen without judgment as members talk, argue, and express their points of view.
· Reduce status differences by deemphasizing the gulf between leader and faculty.
· Encourage self-governance through conformity to group norms.

Another perspective on colleges is to view them as “organized anarchies” (p. 153). Institutions can be characterized as anarchical if they have ambiguous goals, unclear reasons for the way they accomplish tasks, and fluid participation in decision-making groups such as task forces and committees. Loose coupling throughout the college permits a flow of streams containing “problems, solutions, [and] participants” (p. 160). In these institutions, “garbage cans” (p. 165) absorb the problems that decision-makers would rather avoid. Examples of garbage cans are long-range planning committees. Often, people who advocate a particular solution and are willing to spend time fleshing it out will be positioned for selection if their particular issue suddenly becomes a “choice opportunity” (p. 160). Although contradictory approaches to leadership seem like a chaotic way to govern an organization, organized anarchy often serves colleges well. Over the years, it has enabled colleges to cope in an environment of conflicting and multiple expectations from various stakeholders.

There is a downside to this compromise, however. These loosely coupled systems are stable, but contain highly complex and multiple variables that defy rational explanation or control.

Their stability “is achieved through cybernetic controls – that is, through self-correcting mechanisms that monitor organizational functions and provide attention cues, or negative feedback, to participants when things are not going well” (p. 179). John Tagg, an associate professor of English at Palomar College in California (and assessment scholar), believes that the problem with cybernetic systems is that they cannot re-program themselves when necessary (2005). To achieve long-term improvement in institutional effectiveness, Tagg has two recommendations:

First, core activities, including change processes, need to be decoupled from the ritual classifications that now define organizational integrity and success. Second, the core activities of the institutions need to be more tightly coupled with significant learning outcomes. (pp. 39-40)

Tagg believes that the data colleges currently collect, analyze, and report do not even begin to explain what colleges’ real problems are. Placing research activities closer to the classroom through action research may help colleges to establish clearer links between faculty work and student outcomes.

Colleges as Schools that Learn

To “see” the connections between teaching and learning, it may be necessary for faculty and researchers to find new ways of looking at familiar problems. By cultivating the five disciplines of learning organizations (personal mastery, mental models, shared vision, team learning, and systems thinking), colleges can develop a deep learning capacity (Senge, Kleiner, Roberts, Ross, & Smith, 1994). Of these five disciplines, mental models are perhaps the most powerful tools of change.

Humans develop mental representations of how the world works that are reinforced by other humans around them. Where this process can go awry is when people climb a “ladder of inference” (p. 242) to arrive at these mental models based upon assumptions that:

“Our beliefs are the truth
The truth is obvious
Our beliefs are based on real data
The data we select are real data” (p. 242).

To validate or change mental models, Senge et al. advocate (1) reflecting upon one’s own thinking, thereby exposing personal assumptions; (2) making one’s reasoning visible to others; and (3) inquiring into others’ thinking to uncover their assumptions. What a college would gain through this long-term development process is new skills and capabilities, new awareness and sensibilities, and new attitudes and beliefs (pp. 18-20). However, the development of this capacity is dependent upon the integrity of the architecture used to build it, a triangle of (1) guiding ideas; (2) theories, methods, and tools; and (3) innovations in infrastructure (e.g., policies, procedures, and reward systems). In other words:

People will work as a team and cooperate when they share common goals, receive proper information, have the skills to recognize, utilize and balance each others’ strengths and weaknesses, value teamwork, are rewarded for doing so, [and] are recognized as a team for doing a good job. (Hill’s Pet Nutrition, Inc. guiding principles in Senge et al., p. 41)

In the specific case of facilitating collaborations between institutional research staff and faculty members conducting assessment, it would certainly help if each saw the other as peers, had similar professional development opportunities, liked working together, and received similar recognition and rewards from their collaboration.

Grassroots Model: The Anti-Strategic Plan

Mintzberg (1989) proposed his grassroots model of strategic planning to counter a philosophy of leadership in which strategies must be carefully and deliberately cultivated by a careful leader. The grassroots model has elements that mirror Birnbaum’s (1988) organizational anarchy and Senge et al.’s (1994) learning organization. Richard Voorhees, a past president of the Association for Institutional Research, may have been thinking of the grassroots model when he said that an alternative job of IR was to feed networks (2003), which then grow in unanticipated directions. The principles of this model are:

1. Strategies initially grow like weeds in a garden, they are not cultivated like tomatoes in a hothouse.
2. These strategies can take root in all kinds of places, virtually anywhere people have the capacity to learn and the resources to support that capacity.
3. Such strategies become organizational when they become collective, that is, when the patterns proliferate to pervade the behavior of the organization at large.
4. The processes of proliferation may be conscious, but need not be; likewise, they may be managed but need not be.

5. New strategies, which may be emerging continuously, tend to pervade the organization during periods of change, which punctuate periods of more integrated continuity.
6. To manage this process is not to preconceive strategies but to recognize their emergence and intervene when appropriate. (pp. 214-216)

Action Research and Organizational Change: Case Studies

Student learning outcomes assessment is a form of action research. The purpose of action research is “to improve one’s own work rather than make broad generalizations. Assessment’s four-step cycle of establishing learning goals, providing learning opportunities, assessing student learning, and using the results to improve the other three steps mirrors the four steps of action research: plan, act, observe, and reflect” (Suskie, 2004, p. 8). As such, the case studies included in this section provide a glimpse into the world in which research and teaching and learning intersect and suggest successful strategies for carrying out that collaboration.

Maricopa Community Colleges. Larson & Greene (2002) studied faculty involvement in developing and measuring student learning outcomes. They wanted to know how measurement professionals could facilitate faculty use of outcomes assessment. In doing so, they developed a case study of assessment development at Mesa Community College in the Maricopa Community College District, Arizona.

The authors documented the college’s seven-year evolution of its present college-wide outcomes assessment program and their most recent effort to assist faculty in the refinement of numeracy outcomes assessment.

Mesa Community College holds an annual assessment week during which assessment measures are administered to a sample of classes. For example, a numeracy assessment (the student’s ability to use numbers as the basis for decision-making) would take place in an English class, rather than a math class, to maintain the institutional (rather than course-based) nature of the assessment process.

To develop a faculty-owned assessment process, the college must agree on the outcomes of education. At Mesa, the seven outcomes of general education are communication, numeracy, inquiry, information literacy, critical thinking & problem solving, cultural awareness, and art history & humanities. Workplace outcomes for occupational programs are critical thinking, organization, technology literacy, teamwork, ethics, and personal & professional development.

The authors noted that the college had reached a mature stage in the development of its assessment process, characterized by the extent of faculty adoption and use of assessment practices in making decisions about course and program improvement. Critical success factors in implementing effective learning outcomes assessment for Mesa Community College were:
· Faculty own the process and the results
  o Interdisciplinary faculty teams develop outcome measures
  o Faculty members administer the assessments
· Administrators support assessment as a faculty-driven process

· Institutional research provides technical support in designing assessments with good measurement principles
· Faculty, staff, and administrators use a common language for discussing assessment and using it for instructional improvement
· Collaboration requires time, effort, and tenacity among all parties (Larson & Greene, 2002, pp. 19-20)

The outcome of this effort has been the development of a culture of assessment within Mesa Community College. However, the fact that it took this college seven years (p. 3) to develop a mature approach to assessment (e.g., an assessment culture) should be an eye-opener to the uninitiated.

Practitioner as Researcher. In other research, Bensimon, Polkinghorne, Bauman, & Vallejo (2004) studied the impact of action research involving diverse students’ success upon the development of a culture of evidence. They developed a case study using a “practitioner as researcher” model to show the ways in which the measurements of student outcomes confront educators and motivate them to develop strategies to help failing students. Measurement consultants assisted work teams of faculty, staff, and administrators at colleges in developing balanced diversity scorecards for their institutions. One of the key elements was the use of graphic illustration, always in color. The success of practitioner research lay in the act of discovery more than in the measurements themselves.

While the authors had little experience with providing this type of consulting service to colleges, most (but not all) of their “clients” became successful users of data for institutional improvement and became advocates for measurement.

The numbers confronted educators’ institutional mythologies, producing the motivation to make changes in strategy. When team members became aware of the inequities in educational outcomes, for many it was a real epiphany. “Overwhelming,” one woman said. The authors found that while measurement introduced college faculty, staff, and administrators to the reality of their students’ academic outcomes, their emotional response to the data motivated them to find solutions to the students’ learning problems. Assessment professionals at community colleges may thus use this strategy to introduce a new generation of practitioners at their own institutions to the impact of well-designed and meaningful measurement processes.

Collaborative Analysis of Student Learning. In the previous example, researchers acted as external consultants in facilitating the development of communities of practice. However, a more systematic method of creating and sustaining these learning communities within an institution is Collaborative Analysis of Student Learning (CASL) (Langer, Colton, & Goff, 2003). The proponents of CASL say that transformative learning that results from reflective inquiry changes both the practice of teaching and learning and the teacher’s knowledge and beliefs. The mechanism through which reflective inquiry drives these changes is cognitive dissonance (Festinger, 1957). In CASL, dissonance arises when teachers “pose questions, view situations from multiple perspectives, examine their personal beliefs and assumptions, and experiment with new approaches” (Langer, Colton, & Goff, 2003, p. 27). This benefit does not come without personal risks, however. CASL processes must take place within an atmosphere of collaboration and trust to secure the honesty and openness necessary among participants in the discovery process.

Rather than focusing upon a “best practices,” one-size-fits-all approach to professional development, the CASL process causes a teacher to engage in reflective inquiry when determining how best to help individual students over their learning hurdles. This self-awareness is a defense against “habituated perception” (p. 33), which occurs when teachers see only what they expect to see and miss important clues that could lead students to learning breakthroughs. According to Senge et al., the mechanism that causes this blindness is the teacher’s mental model (Senge, Kleiner, Roberts, Ross, & Smith, 1994) of how student learning takes place. It is only by verbalizing her thought processes with a supportive group of peers that the teacher’s assumptions can be discerned, challenged, and revised.

Student Learning Problems and Measurement Opportunities

Policy makers’ dissatisfaction with student graduation rates has created a “crisis in developmental education” (Brothen & Wambach, 2004, p. 16). According to Clifford Adelman, Senior Research Analyst for the U.S. Department of Education, although current minority students have greater access to higher education, “the degree completion gap remains stubbornly wide at 20% or higher” (1999, p. 4). This section of the literature review will discuss educational research methods and findings that serve to enlighten community college approaches to assessing student learning outcomes at particular colleges. Colleges that adopt these approaches study individuals or groups of students for the characteristics that matter to learning.

Preparation for College

Using a transcript analysis method to gather student academic resources (Carnegie units), remediation, and other variables on the High School and Beyond/Sophomore (HS&B/SO) longitudinal cohort from 1980 to 1993, Adelman found that the amount of remedial course work a student required had a negative association with attaining a bachelor’s degree by 1993. While those with no remedial courses (50.7% of the cohort) completed at a rate of 68.9%, students needing any remedial reading course (10.2% of the same cohort) completed a bachelor’s degree at a rate of only 39.3% (p. 74). In Adelman’s words, “Academic preparation, continuous enrollment, and early academic performance… prove to be what counts” (p. 83).

Parents’ level of education, a component of the variable “socioeconomic status,” contributed a small but significant amount of variability to Adelman’s final model. A just-released study of first-generation-in-college students reinforces this finding. About 43% of 1992 high school graduates who entered postsecondary education within eight years and whose parents had not attended college had left without a degree by 2000 (Chen & Carroll, 2005, p. iii). While 68% of students who had at least one college-educated parent graduated within this eight-year period, only 24% of first-generation-in-college students had done so. At least some of the blame for their lower success rates may be found in weak high school academic preparation for college. As many as 55% of first-generation-in-college students required at least one remedial course, while only 27% of students with a college-educated parent did so (pp. iv-v).
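
The kind of cohort tabulation behind these completion figures can be sketched in a few lines of code. The records, column names, and values below are hypothetical; the snippet illustrates the general transcript-analysis approach rather than Adelman’s actual procedure.

    # Illustrative sketch of a cohort tabulation (hypothetical records, not
    # the HS&B/SO data): bachelor's completion rates by remediation status.
    import pandas as pd

    cohort = pd.DataFrame({
        "any_remedial":     [0, 0, 0, 1, 1, 1, 1, 0],
        "remedial_reading": [0, 0, 0, 1, 1, 0, 0, 0],
        "earned_ba":        [1, 1, 0, 0, 1, 0, 1, 1],
    })

    # Completion rate within each remediation group.
    by_group = cohort.groupby("any_remedial")["earned_ba"].mean()
    reading_rate = cohort.loc[cohort["remedial_reading"] == 1, "earned_ba"].mean()

    print(by_group)      # completion rate for non-remediated vs. remediated students
    print(reading_rate)  # completion rate among students needing remedial reading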


Study Attitudes

What seems to be central to student success in many cases is a drive to succeed by utilizing whatever sources of support the student can muster. Illustrating this concept is the work of Karl Boughan of Prince George’s Community College in Maryland (2000). Boughan used a 1990 cohort of first-time college entrants, with 43 variables encompassing socioeconomic information from census tracts, university articulation, attendance, remediation, course performance, study progress, financial aid, and use of support services, to study the determinants of student achievement over a six-year period. An analytical process using Structural Equation Modeling (SEM) revealed the centrality of study attitudes in student success and found two semi-independent paths. The effort trail began with characteristics of traditional students and progressed through transfer program orientation, institutional support, course load, and enrollment persistence. The performance trail, on the other hand, began with the students’ socio-economic attributes and proceeded through level of preparation for college. Study attitude, positioned in the center of the model, had a relatively high probability and connections to many other factors. The strength of this factor’s association with so many others was an unanticipated finding of the study (p. 11).


Engagement

In answer to the need to study institutional practices adhering to the Seven Principles for Good Practice in Undergraduate Instruction (Chickering & Gamson, 1987), assessment scholars at Indiana University developed and piloted the National Survey of Student Engagement (NSSE, pronounced “nessie”) in 1999 (Marti, 2004). After two years of NSSE field tests in four-year colleges, these same scholars, in association with the Community College Leadership Program at the University of Texas, adapted the instrument for use in community colleges as the CCSSE (pronounced “cessie”). The surveys are close cousins, sharing approximately 71% of their content (p. 1).

The Community College Survey of Student Engagement (CCSSE) is one method of assessment that colleges can use to benchmark student engagement with learning from year to year. The developers of the CCSSE answered the community college challenges described by Peterson et al. (1999) by developing an “indirect” assessment tool for measuring student engagement with learning based upon the Seven Principles for Good Practice in Undergraduate Instruction (Chickering & Gamson, 1987). The survey provides community colleges an opportunity to benchmark their improvement process, providing indirect measures of the student learning experience and telling a story of students’ personal goals, progress toward intellectual and personal growth, developmental needs, and barriers to full participation in college. A total of 152 institutions across 30 states participated in the 2004 administration of the Community College Survey of Student Engagement, including all 28 Florida community colleges.


Diversity

Hardin (1998) identifies a set of seven profiles for the diverse collection of students who have arrived at developmental education’s doorstep. Among these are the student who makes poor academic decisions, the adult student, the student with a disability, the ignored student, the student with limited English proficiency, the academic system user, and the extreme case. About two-thirds of college preparatory students are White; the other one-third is comprised mostly of African American and Hispanic students (Boylan, Bonham, & White, 1999). College preparatory students are thus a cultural “grab bag.” They are greatly diverse, not only in terms of demographics, but also in terms of their expectations of and their academic foundations for developmental course work.

Growth and Development

Student experiences and study habits reflect student growth in self-image, self-esteem, internal locus of control, intellectual and personal growth, and family support of the student’s academic career (Harvey-Smith, 2002). According to Harvey-Smith, these characteristics are associated with higher self-reported GPA and overall satisfaction with the college experience, particularly among minority and at-risk students. There are two competing philosophies in the scholarship of student development. One would centralize college student development to funnel students more efficiently through a single screening system that triages and then develops student deficits (Roueche, Roueche, & Ely, 2001). The other would diffuse its benefits throughout the components of the college curriculum by having faculty continually reinforce beneficial habits such as reflection, self-monitoring, and active learning (Brothen & Wambach, 2004).


Today, many students seeking to improve their chances for success by earning a degree must first overcome the hurdle of completing college preparatory courses. Although minority students have greater access to higher education than ever before, their chances for college success remain lower than those of White students (Adelman, 1999). Without an overall college plan for students who are just entering college (and needing developmental course work), students may not understand the long-term benefits of taking a master student course. They may thus overlook the opportunity to improve their ability to learn and to develop the social connections that can help them succeed in college. On the other hand, if college educators cannot demonstrate significant benefits of orientation courses or first-year programs to entering students, they will be hard-pressed to create policies and allocate resources toward improving students’ access to self-improvement resources. Well-crafted, faculty-driven action research can make the connections between students, instruction, and success.

Beyond coping skills, however, in helping students become better acquainted with their institution, assessment experts advocate publishing policies on, and talking to students about, learning assessment in support of institutional improvement (Lopez, 1999). Not involving students as actively participating stakeholders in the assessment process can leave them feeling cynical and disenfranchised. “Students in colleges and universities where they have not been purposively educated about their institution’s assessment program… have no way to make the connection between a nationally normed test and the published goals for the curriculum” (p. 21). On the other hand, students who have read about the purpose of assessments in college publications or discussed them with faculty members can become strong advocates for using assessment for curricular improvement, particularly when provided their scores on these tests to use as formative feedback.


Best Practices in Developmental Education

The ideal program in developmental education helps all students, regardless of their level of competency when they enter college (Boylan, 2002). According to the National Association for Developmental Education, it helps “underprepared students prepare, prepared students advance, and advanced students excel” (p. 3). The most important contributions that institutional researchers may make to developmental education programs are in the areas of strategic planning, program evaluation, and grant research. Other key components include centralized coordination of developmental activities, systematic delivery of a specific set of services, and the teaching of critical thinking skills. The impact of community college collaboration between faculty and assessment professionals on these best practices is discussed in Chapter Five.

Strategic Planning. According to Boylan, “developmental programs with written statements of mission, goals, and objectives had higher student pass rates in developmental courses than programs without such statements” (p. 19). Further, students in such programs tended to pass state-mandated tests and continue their enrollment more often.

Program Evaluation. “Few program components are more important than evaluation” (p. 39). Consistent institution-wide reporting on the successes, failures, and problems of these programs kept developmental education visible, thereby reinforcing it as an institutional priority (p. 23).

Grant Research. “Title III grants are designed to strengthen institutions with large numbers of economically disadvantaged students” (p. 29). Other grant opportunities include Title IV federal programs like TRIO (student support services for first-generation, disabled, or low-income students), Talent Search, and Upward Bound. In addition to federal grants, philanthropic organizations (such as Lumina) provide grant money toward the improvement of educational opportunities.


Centralized Coordination. While highly coordinated decentralized services can sometimes come close to the performance of centralized services, centralized programs perform best and typically have:
1. subject areas (e.g., reading, writing, math) coordinated under a single department,
2. a single philosophy to guide the delivery of services,
3. support services and labs within the department, and
4. a single program director for college-wide developmental education.

Systematically Delivered Services. The set of services that all entering students should find available includes entry-level testing, advising, courses or workshops on study strategies, tutoring, supplemental instruction (often by computer), and learning labs with assistance available (p. 27). Advising and teaching are well coordinated, and one reinforces the other (p. 60).

Critical Thinking. Developmental courses should strive not only to give students basic skills, but also to teach “application, transfer, and thinking skills” (p. 94) in order to prepare students for success in credit-level course work.

Policies Governing Developmental Education in Florida

Florida Community College system approaches to student development have been shaped by Florida statutes, State Board rules, Southern Association of Colleges and Schools accreditation, and curricular innovation.


Student Success and Institutional Characteristics. Although student academic preparation plays a strong role in students’ chances for successful graduation, researchers at the Community College Research Center (CCRC) at Columbia University have begun to look to institutional characteristics and policies as correlates of student attainment by studying IPEDS variables for entire state systems of community colleges (Bailey, Alfonso, Calcagno, Jenkins, Keinzl, & Leinbach, 2004). Underwritten by the Lumina Foundation’s “Achieving the Dream: Community Colleges Count” program, the study employed explanatory variables such as reference group (urban, suburban, or rural), size (in terms of full-time equivalent enrollment, or FTE), racial/ethnic make-up, percent part-time students, percent female, mix of certificates and associate degrees, average in-state tuition, federal aid, and expenditures for instruction, academic support, and student services. Nationally, California, Florida, and Nebraska had the highest adjusted graduation rates, ostensibly because of particular legal and institutional factors. In Florida, for example, students have a four-year transfer advantage when they complete an associate degree. In particular, Sunshine State Community College’s actual graduation rate in 2002-2003 was 4.6% higher than predicted by these model variables (Bailey et al., 2004). However, because IPEDS does not currently collect it, the model lacked information that many other studies have found to be important determinants of student success, such as student academic preparation and income.

Vincent Tinto’s (1993) student integration model served as one of the theoretical bases for CCRC’s model. This view of student attrition, the tendency to leave college, is founded in students’ social and intellectual integration into college life. The ultimate purpose of the ongoing Bailey et al. research is to identify colleges graduating students at higher rates than their characteristics might suggest, narrowing the number of colleges to be studied in greater depth for institutional practices related to “changes in organization, teaching methods, counseling and student services, relationships to the community, and organizational philosophy” (Bailey et al., 2004, p. 6) that might suggest strategies for college-wide reform.
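To make the idea of graduating students “at higher rates than their characteristics might suggest” concrete, the minimal sketch below illustrates the underlying residual logic: regress graduation rates on institutional characteristics and flag colleges whose actual rates exceed the model’s prediction, as with the 4.6 percentage-point figure cited above. This is only an illustration in Python; it is not the CCRC’s actual model, and the file name and column names are hypothetical stand-ins for IPEDS-style variables.

# Illustrative residual analysis (a sketch, not the CCRC model).
# Hypothetical columns: college, grad_rate, fte_enrollment, pct_part_time,
# pct_female, pct_minority, in_state_tuition, instr_expend_per_fte
import pandas as pd
from sklearn.linear_model import LinearRegression

colleges = pd.read_csv("ipeds_colleges.csv")  # hypothetical input file

predictors = ["fte_enrollment", "pct_part_time", "pct_female",
              "pct_minority", "in_state_tuition", "instr_expend_per_fte"]
model = LinearRegression().fit(colleges[predictors], colleges["grad_rate"])
colleges["predicted_rate"] = model.predict(colleges[predictors])

# A positive residual marks a college that graduates students at a higher
# rate than its characteristics alone would predict.
colleges["residual"] = colleges["grad_rate"] - colleges["predicted_rate"]
top = colleges.sort_values("residual", ascending=False).head(10)
print(top[["college", "grad_rate", "predicted_rate", "residual"]])

The colleges at the top of such a ranking are the candidates for the closer qualitative study of institutional practices that the CCRC research describes.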


Centralized college student support programs, such as those at the Community College of Denver and the federal TRIO program, provide orientation, specialized counseling, and mentoring for students and have been identified as effective solutions for helping under-prepared students in specific environments. Even so, this study of community college integration of research with practice “is not a search for the definitive answer of ‘what works.’ Rather, it is a constant and continuous process and conversation within and among colleges, and with outside researchers and policy makers, as practitioners try to improve their practice in the context of a constantly changing environment” (p. 63).

Study Skills Courses. While the Florida Statewide Course Numbering System (2005) lists dozens of “Student Life Skills” courses, SLS 1101 courses are offered by 16 (57%) of Florida community colleges. The common course title is Orientation to the Institution and its Resources, and its description reads:

A program of orientation to college that includes an overview & discussion of the organization, personnel, regulations of the institution, & resources available to the student. The goal of the course is to assist the student to adapt and cope with a new environment. (p. 2)

While student life skills courses in Florida colleges go by a number of names and are intended to provide the student with college life coping skills, another learning outcome at some colleges is to help students learn how to learn, a set of skills often referred to as learning strategies (Roueche, Roueche, & Ely, 2001).


Other study skills courses teach the “art of close reading” (Paul & Elder, 2003, p. 36). Often referred to as “critical thinking,” close reading is the ability to go beyond comprehension of individual words in a passage and to read with purpose.

Diagnostic Testing. Florida annually measures the “Readiness for College” of its high school graduates who matriculate in college the year after graduation. The State of Florida mandates assessment upon entering a field of study (Florida Statute 1008.30, 4a). While students may submit the results of SAT II or ACT-E entry-level tests, the vast majority of students entering a Florida community college take the Florida Common Placement Test (CPT) upon matriculation. The Florida Legislature determines cutoff scores on all entry-level tests for placement into college preparatory instruction. The exit test from college preparatory instruction approved by the Council on Instructional Affairs (a governing entity of the Florida Association of Community Colleges) is the Florida College Basic Skills Exit Test (Bendickson, 2004). Passing scores on the exit test, however, are determined by individual colleges.

In 1998, half of all entering degree-seeking students required at least one college prep course. Sixty percent of females had passed entry-level tests in each of the three tested areas: reading, writing, and math. Only 40% of males were similarly ready for college (Windham, 2002). In Fall 2001, of the more than 42,000 first-time-in-college, degree-seeking students who matriculated in Florida community colleges immediately after high school and took an entry-level test, 73% (males and females combined) failed at least one subtest in reading, writing, or math (Florida Department of Education, 2004, p. 1). Furthermore, of the 14,173 students who had failed the reading subtest, only 71.4% had completed their required reading preparatory courses within two years of matriculation.


Transcript Analysis. The Florida Community College system has prepared institutional researchers both to conduct college transcript research and to share SAS programs within a SAS User’s Group that meets bimonthly. The researchers use an annually released database of Florida-wide student information in which the student identifiers have been scrambled through an algorithm. One of these programs, called the “Longitudinal Tracking System,” allows a college to trace the academic performance of its students from entry to exit. This program has been particularly helpful in examining the college credit course success of developmental education students. In the view of some researchers, it is Florida’s policy environment, which requires a common course numbering system and standardized criteria for placement into college preparatory instruction and which supports statewide research efforts, that has allowed its students relatively greater success than students in other systems.

Sunshine State Community College Measures

While classified as “large” (over 2,500 students) by Katsinas (2003, p. 21), Sunshine State Community College falls into the “medium” category by Community College Survey of Student Engagement measures. As with all Florida colleges, its measurement approaches are dictated to some extent by state governance; other approaches are determined by the college itself. For example, the standards for entry-level testing and the cutoff for college prep placement are legislatively mandated. However, while the Council on Instructional Affairs-approved exit exam must indicate successful passage out of prep, the passing score is determined by the College. The following are other measurement approaches used by the college.


Engagement. The College participated in the 2004 administration of the Community College Survey of Student Engagement. Although it was not among the top-performing colleges in 2004, student responses placed it above the national mean (50) in Active and Collaborative Learning and Student Effort. The College placed below the mean in Academic Challenge, Student-Faculty Interaction, and Support for Learners (Member Profiles, CCSSE, 2004). Improving the Student-Faculty Interaction benchmark among college preparatory students should prove especially challenging, as the College, at 63%, had a higher percentage of part-time faculty teaching college prep course sections in Fall 2003 than did the Florida Community College system (Windham, 2005, Number and Percentage, pp. 1-5).

Diversity. Sunshine State Community College’s headcount grew at almost double the rate of the Florida Community College system from 2001-2002 to 2004-2005 (Florida Department of Education, Student headcount 2005, p. 1). Most of the growth was in minority populations, especially Asian and Hispanic ethnicities. By contrast, most of the statewide growth occurred in African American (20%) and Hispanic (30%) ethnicities. The College had lower percentages of African American and Hispanic first-time-in-college students placing into college preparatory instruction than did the Florida Community College system as a whole in Fall 2001 (Florida Department of Education, 2004, p. 1). Prep completion rates (within two years) in reading were comparable to the system rates for African American students, while writing completion rates for African American students were lower. With Hispanic students, just the opposite was true: prep completion rates (within two years) in writing were comparable to the system rates, while reading completion rates were lower.


The College’s greatest college prep completion challenge for all ethnicities was mathematics. Its average completion rates in mathematics were comparable among African American, Hispanic, and White students, but all were well below the Florida Community College average.

Transcript Analysis. The College has used a consultant-written system to track the academic progress of students emerging from college preparatory reading and writing courses with a “C” or better. The program analyzes Student Data Base files and produces follow-up reports by ethnicity and gender, with hours attempted and grade distributions for these cohorts reported either by term or by cumulative terms. The IR office also merges student-identifiable data from CCSSE administrations to obtain a more complete profile of these students than the College would obtain from academic history alone (Key informant, personal communication, May 20, 2004).

Progress on Assessment. As one of the first colleges to undergo re-accreditation under the SACS 2001 Principles of Accreditation, the college has had its initiation by fire. Assessment plans for the outcomes of general education and the Quality Enhancement Plan (Core Requirements 2.7.3 and 2.12) have required Herculean efforts to complete. Recalling the Lopez (2000) study of hundreds of colleges in the North Central accrediting region, a matrix of institutional culture variables indicated progress toward the development of a mature student learning outcomes assessment culture. Having completed its milestone of re-accreditation, the College is now “making progress” in fully implementing its student learning outcomes assessment process. This study provides a picture of that progress from the viewpoint of faculty and assessment professionals involved in the implementation of the Quality Enhancement Plan.
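As a concrete illustration of the transcript-based follow-up reporting described under Transcript Analysis above, the sketch below shows, in Python with pandas rather than the consultant-written SAS system the College actually uses, how a cohort exiting preparatory reading or writing with a “C” or better might be followed into subsequent terms. The course codes and column names (student_id, term, course, grade, credit_hours, ethnicity, gender) are hypothetical placeholders, not fields of the College’s Student Data Base.

# Minimal illustrative sketch of cohort follow-up reporting.
# Assumes a flat transcript file with hypothetical columns:
#   student_id, term, course, grade, credit_hours, ethnicity, gender
import pandas as pd

PREP_COURSES = {"REA0002", "ENC0020"}   # hypothetical prep reading/writing codes
PASSING = {"A", "B", "C"}               # "C" or better exits the prep course

transcripts = pd.read_csv("transcripts.csv")  # hypothetical input file

# 1. Define the cohort: students completing a prep reading or writing course
#    with a "C" or better, keyed by the term in which they exited prep.
prep_passes = transcripts[transcripts["course"].isin(PREP_COURSES)
                          & transcripts["grade"].isin(PASSING)]
exits = (prep_passes.groupby("student_id", as_index=False)["term"]
         .max()
         .rename(columns={"term": "exit_term"}))

# 2. Pull each cohort member's subsequent course work.
followup = transcripts.merge(exits, on="student_id")
followup = followup[followup["term"] > followup["exit_term"]]

# 3. Follow-up report by ethnicity and gender: hours attempted and the
#    grade distribution for the cohort's later course work.
summary = (followup.groupby(["ethnicity", "gender"])
           .agg(students=("student_id", "nunique"),
                hours_attempted=("credit_hours", "sum")))
grade_dist = (followup.groupby(["ethnicity", "gender"])["grade"]
              .value_counts(normalize=True)
              .unstack(fill_value=0))
print(summary.join(grade_dist))

The point of the sketch is only the two-step shape of the work (define the exit cohort, then summarize its subsequent course work by demographic group); the College’s own reporting runs against Student Data Base files rather than a flat file like this one.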


Summary and Synthesis of Literature Review

Higher education policy makers are greatly dissatisfied with student graduation rates and expect colleges to make greater efforts to graduate academically under-prepared students. Current research indicates that family background, high school preparation, student effort, early success, and study attitudes are variables that place a student on a trajectory toward earning a degree. Colleges use a multitude of strategies to help under-prepared students, but the efforts are often uncoordinated and do not scale effectively college-wide. While a centralized model that marshals all student development resources at a single entry point has been effective in some schools, a model in which student development is diffused throughout the curriculum works well in others. To discover the determinants of student success at their particular institutions, faculty members and institutional researchers conduct action research called outcomes assessment. With the pace of change quickening in their macro-environments, colleges must continually find better ways to enhance teaching and learning. The evolution of this partnership is the subject of this case study.

To improve teaching and learning in community colleges, college leaders provide professional development and enhance the conditions in which faculty members and assessment professionals exchange insights and formulate new strategies for learning outcomes, instruction, and assessment. By pooling their measurement expertise and teaching intuition and trying out new solutions to student learning problems, faculty members and researchers may gradually move the college toward graduating greater numbers of students.


The development of an assessment culture, in which measurement is formative and faculty and staff members feel free to share their results (good or bad) without fear of recrimination, is essential to this partnership. This nexus between the roles of faculty and assessment professionals is also a focus of this study.

Within these cross-functional communities of practice, members discuss the results of assessment. This discussion may act as a catalyst, spurring faculty members to re-organize their mental pictures of how reading development takes place. In this process, the new information is rejected, acted upon, or deferred for future action. If deferred, it flows in the streams of a sometimes “anarchical” institution, waiting for a golden opportunity to deploy. College leaders who actively seek novel ways to make leaps in student success keep their ears to the grapevine for promising strategies coming out of assessment plans. These leaders design feedback loops, such as program reviews linked to assessment data, and commission task forces whose job it is to bring a wide variety of stakeholders into the discussion of what works and how it can be scaled up. Leaders also schedule time for faculty and staff to make sense of what they have learned in renewing the college mission, telling new stories and establishing new symbols of institutional pride.

The literature review has identified the institutional roles in which faculty and assessment professionals operate. The common ground between these professions is the source of the foreshadowed questions, both issue (research questions) and topical (background), that shaped the design for the collection of interviews, observations, and documents in this study.


Chapter Three

Methods

This chapter describes the process by which I conducted a case study on the phenomenon of collaboration on student learning outcomes assessment between faculty teaching developmental reading, writing, or study skills courses and assessment professionals at Sunshine State Community College.

Assumptions of Case Study Research

The purpose of a case study is to understand human interaction within a social unit, a single instance bounded by the case worker in the process of designing the research (Stake, 1995). While an intrinsic study may be undertaken to learn about a person or phenomenon that we simply want to know more about, an instrumental case study is developed to promote an understanding of specific issues. This study is an example of an instrumental case study because it is the intersections between faculty and assessment professionals in improving teaching and learning that this researcher wished to understand.

Case study is an interpretive-hermeneutic category of research that falls under the more general umbrella of qualitative methods, hermeneutics being “the art and science of interpretation” (Yeaman, Hlynka, Anderson, Damarin, & Muffoletto, 2001, p. 254).


Stake (1995) cited three major differences between case study and quantitative (i.e., survey) research methods:

1) While the purpose of inquiry is explanation in quantitative research, the purpose of inquiry is understanding in case study research.
2) Although the role of the researcher is impersonal in quantitative research, the role of the researcher is personal in case study research.
3) Knowledge within quantitative research is discovered; however, knowledge within a case study is constructed. (Stake, 1995, p. 37)

Finnish philosopher Georg Henrik von Wright (1971) further elaborated upon the difference between explanation and understanding and the personal role of the researcher, saying that understanding had a humanist emphasis. By seeking to understand (and going beyond explanation), one was able to empathize with the humans under study. Understanding also allowed case workers to consider the aims and purposes of the actors in the course of unfolding events and the symbolic significance of cultural symbols and rites. Finally, in terms of discovering versus constructing knowledge, measuring and then seeing is characteristic of quantitative research and assumes that what one sees in the field can be described with measurements already developed (i.e., surveys). Case study, a qualitative method, on the other hand requires the researcher to see first, and then measure (Stake, 1995). The researcher tries to understand and make an interpretation about what something means. The essence of case study is therefore describing what it is like to be there. To accomplish this, case studies are characterized by rich description and interpretation of circumstances and events.


Sources of research questions studied within a case may be either “etic” issues (those of the researcher) or “emic” issues (those emerging from the actors during data collection) (Stake, 1995). Further, there are two types of questions: issue and topical. Issues are dilemmas needing to be resolved, whereas topical questions relate to information that provides background and context for the case. While quantitative research can be carefully delineated, “emic” issues within a case study often cause the direction of the case to gradually transform (p. 21) while the research is in progress. The analysis of qualitative data is an inductive process that begins with the recognition and coding of broad themes and proceeds through the more specific connections between the data collected and the research questions at issue.

Generalization in case study and evaluation is different than in quantitative research and can take three different forms (Adelman, Jenkins, & Kemmis, 1976). The first is from the instance in the case to others in the same class. For example, a case describing eighth grade student behavior in one public elementary school may generalize to eighth grade student behavior in another. The second is a generalization from the instance in the case to others in different classes. For example, the case describing student behavior in a public elementary school may generalize to students of any age at other institutions. The third is generalization about the case itself. In this type of generalization, the boundaries of the case are permeable, and the instance is not seen as a bounded system. This often occurs in studies that do not associate a case instance with a particular class. Generalizations in case studies are not just stepping stones to theoretical development and empirical research.


They are valuable in their own right as stories that appeal to people who can relate to the tacit knowledge (Polanyi, 1962) demonstrated in these instances. These are what Robert Stake calls “naturalistic generalizations” (1995, p. 86). While generalization outside the instance studied is a major goal of explanatory research, “the real business of case study is particularization, not generalization” (p. 8). Thus, studying the particular enables a case study worker to capture human experience in all of its complexity.

Reliability and validity issues in case studies are referred to as the “trustworthiness” of the data. Justification of warranted assertions (generalization) is necessary within this methodology because each study frames its own view of the world within the body of the work, which must stand or fall on its own merits (Bartlett, 2005). “Trustworthiness” of the researcher’s conclusions relies upon triangulation: multiple sources of information, instruments, and methods.

While the form of qualitative research known as the “case study” has a different set of rules governing appropriate research conduct than do quantitative approaches, it is a tool that faculty and assessment professionals undertaking action research can use to document their experience, interactions, and knowledge to pass on to others. In that sense, it is an essential part of what people and institutions must do to engage in knowledge management.

Case Worker’s Orientation to Community College Assessment

I have been involved with community college institutional research since 1994. I currently serve as District Director of Institutional Effectiveness and Program Development for Edison College, located in Southwest Florida.


I have served in many statewide organizations, including the Research Committee of the Florida Community College System and the Florida Association for Institutional Research (2004 President), and currently serve as Chair of the Florida Association of Community Colleges (FACC) Institutional Effectiveness Commission. Participating in all of these organizations has allowed me to gather accreditation news from a vast network. Some colleges that will not seek re-accreditation for years have not yet fully recognized the need for the changes to organizational processes required by the 2001 SACS Principles of Accreditation. There is now an increasing curiosity about these new requirements, as shown by the vast number of people who attended the SACS Annual Meeting in Atlanta, December 3-6, 2004. It was the first time I had ever seen the auditorium overflow for the opening keynote address. As a long-time Institutional Effectiveness (IE) staff member, I am becoming aware that my IE Commission must change the way it talks about and rewards exemplary practice, or it will fail to interest instructional administrators and faculty, the new partners in institutional effectiveness practice.

I became acquainted with the assessment literature while preparing to conduct this study and have now had one year of direct experience with student learning outcomes assessment. I began to work with faculty on assessment in June 2005, when Edison College received State Board of Education approval of its application to offer a site-based Bachelor of Public Safety Management degree. With subsequent SACS approval of the College’s substantive change application to grant baccalaureate degrees, faculty members have been challenged to lead curriculum development. They completed the process of updating all course outlines to conform to the set of general education outcomes approved in August 2005.


On the heels of this exhausting process came the development of an assessment plan for these outcomes during the Spring and Summer 2006 terms. The Student Learning Outcomes Committee, of which I am a member, completed a pilot assessment of written communication outcomes and authored a complete set of rubrics for the assessment of general education competencies in Summer 2006. A SACS visit to the College is scheduled for February 2007.

Conceptual Framework for Current Study

As discussed in Chapter Two, faculty and assessment professionals work together in communities of practice to bring about effective assessment of student learning outcomes, establishing a common language and mutual understanding of the process. From a recap of that work by faculty and assessment professionals, indicators of group dynamics, communities of practice, and organizational change have surfaced. These themes, listed in Table 1, form the issues and theoretical framework of the case. Measurement issues also arose in the collected data. The interview questions, both individual and group, come from these foreshadowed themes emerging from the review of the literature in Chapter Two. These issues and topics determined not only the questions asked but also the items of interest that were recorded during field observations and selected from documents in this instrumental case study.

Table 1: Foreshadowed Themes

Mutual Understanding. Communities of practice (social learning): development of meaning, practice, community, and identity (Wenger, 1998) through rewards, recognition, and sharing; intrinsic motivation; professional development; communication habits.

Structure. Organizational determinants and targets of successful measurement practice: targets (skills, attitudes, performances); growth and development; engagement (Peterson, Augustine, Einarson, & Vaughan, 1999; Banta, 2002; Lopez, 1999).

Process. Knowledge management within college networks as cycles of sense-making, goal setting, and measurement: awareness (ecological change), attention (enactment), naming (selection), and common vocabulary (retention) in the process of institutional sense-making and the emergence of a new cultural identity through symbols, rituals, and language (Weick, 1979).


Table 1 is an example of a research map (Creswell, 1998, p. 95) in which the larger literature review is summarized into issues central to the study. I have used an interpretive approach to study the community of practice created by faculty and non-faculty assessment professionals at Sunshine State Community College through interviews, focus group discussion, field observations, and documents. The case study method of research was chosen because the research questions in the case require understanding rather than explanation. While limited to these players on the campus of Sunshine State Community College, the case is also bounded by the development cycle for the college’s Quality Enhancement Plan, an essential component of its regional accreditation process, and by the plan’s perceived antecedents in the genesis of the college’s learning enhancement focus.

Research Design

The study I have undertaken may be characterized as a form of qualitative research known as case study. This particular study may also be characterized as phenomenological because of its emphasis upon interpretation of meaning from the perspective of the humans and their interactions under study (Merriam, 2002; Stake, 1995). In the conduct of this study, I refined the interview protocols through review by an expert panel; individually interviewed eight faculty members teaching developmental education courses, one instructional administrator, and two assessment professionals; collected college planning documents and publications; observed an awards assembly; and interviewed two assessment professionals in a focus group session with a semi-structured interview protocol.


Research Questions

Eight research questions are investigated in this case study of Sunshine State Community College:

1. How is the professional preparation and educational background of a developmental education faculty member like that of an assessment professional, and how is it different?
2. How is the assessment role of a developmental education faculty member like that of an assessment professional, and how is it different?
3. Which collaborative strategies serve to create common ground for faculty members and assessment professionals to work together on assessment plans?
4. Which strategies cause estrangement between faculty members and assessment professionals?
5. What role, if any, does an assessment professional play in determining how the results of student learning outcomes assessment will be used for improvement?
6. Have faculty members at the college become more like assessment professionals, and assessment professionals more like faculty members, in terms of their assessment roles since they began collaborating on student learning outcomes assessment?
7. If so, how have they become more alike?
8. From the perspective of respondents, which assessment approaches have shown the most promising results?

Population/Unit of Study/Sampling

In qualitative research, the selection of the unit of study is based upon “purposeful” (Merriam, 2002, p. 12), rather than random, sampling.


In this case, the college referred to as “Sunshine State Community College” has recently had an opportunity for self-examination through the Southern Association of Colleges and Schools re-accreditation process. This experience honed faculty and assessment professionals’ self-reflection skills, thus making them responsive interviewees. While Sunshine State has not been identified by any expert group as exemplary in its approach to assessment, its student success rates relative to institutional characteristics have been exceptional among the 28 Florida community colleges. Bailey, Alfonso, Calcagno, Jenkins, Keinzl, and Leinbach (2004) have advocated that institutions having higher-than-expected completion rates based upon their institutional characteristics should be studied for environments that may favor student success. Sunshine State Community College is one such case, with completion rates among full-time, first-time-in-college students 4.6% higher than expected based upon institutional characteristics (Bailey et al., Florida Community College results as reported by Chancellor J. David Armstrong, p. 1). Further, it is one of the first Florida community colleges to undergo re-accreditation under the 2001 SACS Principles of Accreditation, which have been enforced only since 2004. Thus, the college is a student learning outcomes assessment pioneer in the SACS region. The knowledge attained through the re-accreditation process makes these assessment practitioners valuable informants about the process.

The recruitment and selection of interviewees for this study has likewise been purposeful. While I have used the term “assessment professional” to capture the broadest possible definition of college staff members who support faculty use of assessment, there is disagreement among the various communities that contribute to the research literature on assessment as to what these staff members should be called.


For example, while some researchers are calling upon institutional research to take on a larger role in supporting teaching and learning (Morest, 2005), the term “Institutional Research” itself connotes a strictly administrative function to many faculty members. The “assessment professionals” targeted for interviewing answer to various titles, such as “Director of Institutional Research,” “Assessment Coordinator,” and “QEP Director.” Faculty members (full-time and part-time) recruited for interviews were those who taught developmental reading, writing, or study skills, or who had been actively involved in the data collection, analysis, reporting, and interpretation aspects of the College’s Quality Enhancement Plan, which is focused upon developmental education.

The target number for individual interviews of faculty and reading/writing faculty supervisors was 12. During the data collection phase, I interviewed eight faculty members and an instructional administrator. The target number for “assessment professionals” was a census of all such staff members; I interviewed two assessment professionals individually and a third during a focus group session. A challenging but important part of the sampling process was obtaining a diverse mix of gender, ethnicity, and part-time faculty within the sample. Of the individual interviewees, three were male (27%), three were African American (27%), and two were part-time temporary faculty members (18%). The limited number of interviews with part-time faculty reflected the difficulty of securing interviews with adjuncts. While part-timers taught most of the developmental classes, they typically had full-time jobs outside of the college, which limited their available time on campus.


Also, the unique circumstance of the limited number of assessment professionals interviewed in comparison to faculty was due to the sparse staffing of these individuals on community college campuses. This circumstance is discussed further in the Limitations section of Chapter Five.

Data Collection Procedures/Timetable

Upon Internal Review Board approval, I began to refine the interview protocol and document collection plan for all research questions through expert panel feedback. I sought advice from five colleagues and scholars working in academic affairs, institutional effectiveness, assessment, and measurement, as well as educators associated with the National Association for Developmental Education.

Expert Panel Review Process. The recruitment process for the panel was to talk to individuals about the subject of my study and ask for their participation on the panel. Those who volunteered received a copy of the recruitment letter, interview protocols, and document collection plan. Most of the correspondence was by email. My questions to each member were:

In examining the interview protocols and document collection plan,
· If you were asked these questions, would you be able to answer them?
· If the phrasing is inappropriate, how would you re-word them?
· What am I not asking about successful collaboration on assessment that I should be asking?
· Do you know of other reports or studies (besides the ones listed) that would also inform the research questions?


The data collection period, initially planned for February and March, slid from mid-March to early May because of the delay in getting feedback from the expert review panel:

December 15, 2005: Obtained permission from the Sunshine State Community College executive board to conduct this study.

January 2006: Applied for USF Internal Review Board approval. Defended the proposal successfully on January 19. Obtained written consent from the College President and secured access to appropriate College faculty and assessment/IE staff. Collected requested documents as they became available.

February: Upon IRB approval, solicited advice from an expert panel on the content of the research protocols.

March-April: Completed suggested revisions to the interview script from the expert panel. Provided the final version of the interview script to the IRB and the College. Conducted individual interviews with faculty members (up to one hour each) and their supervisor. Conducted individual interviews with assessment/IR staff members and leaders (up to one hour each). The academic VP distributed copies of my recruitment letter to part-time faculty members (by department mail).

May: Observed an annual college awards assembly. Conducted a focus group session (one hour) with two members of the IE staff. Began returning summaries of interviews to participants for member checks (verification of responses). Contacted interviewees for additional follow-ups for clarification, as needed.

Individual Interviews. Driving to the campus the first time gave me a sense that I was approaching a region with a unique character.


Horses trotted on a frontage road on the left, and stables passed by on the right. Along the highway, a carpet of tiny pink flowers covered the median and shoulder. Through a clearing of trees, I could see the effects of the hurricanes that had passed through the state over the past couple of years. The ground in one area had been cleared for development, leaving only the larger trees. In another area, small, thin trees bowed away from the sun, bent over permanently by the high winds that had whipped them at least twice in recent years.

At the entrance to the college, on a busy road off the highway, stood a tall office building dedicated to community outreach programming. The campus roads featured banners welcoming people to the college and communicating aspects of the college’s philosophy, such as “dignity.” Reflecting upon the impressions I had collected on my way there, I felt ready to learn about the college’s approach to student learning outcomes assessment.

I had designed the individual interview questions to get college faculty and staff members to talk about their orientation to assessment as a tool for improving teaching and learning, whether or not they had sufficient preparation for the task going in, how they had grown professionally (and in what ways), what the process looked like, what role(s) they played in making it work, and whether they felt it had made a difference for students. By examining their responses to these questions, I was able to probe the “assessment” intersection between the roles of faculty and assessment professionals. I was very pleasantly surprised at how well the interviews went. The answers, at times, exceeded my expectations.


I felt that the interview protocol was appropriate for the subject matter and that the changes made to the interview (in both sequence and wording) through the expert panel review had helped to make the questions more understandable to the participants. My impression was that no individual alone seemed to have a complete picture of what “assessment” at the college looked like. Each one seemed to have his or her own little window into the process. I thought about the purpose of doing this case study: to try to come up with what the larger picture looked like and to see how the puzzle pieces fit together. I began to believe that through this study I could enrich the larger view of assessment for the people who will carry the process forward.

During the interviews, I assigned a sequence number to each participant (e.g., “I3” for interview participant 3). I then summarized the interviews and requested verification or corrections the participant might wish to make. Table 2 shows the interview protocol for individual faculty members and assessment professionals. Table 3 provides a cross-reference between the research questions and the individual interview questions.

Table 2: Individual Interview Protocol (Revised according to expert panel feedback)

Thank you for agreeing to talk to me today. Although your responses will be recorded on tape, what you say will remain confidential. The purpose of my study is to identify collaboration strategies and techniques among faculty and non-faculty assessment professionals that have helped you to build a “culture of assessment” in improving developmental communication skills. (Note to interviewer: Has the participant signed the consent form? ___)

Demographic questions:
Do you have faculty status at your College? ___
If so, do you teach full-time or part-time? ___
Do you teach any developmental courses (reading, writing, or study skills) in a typical semester? ___

Comment: The use of assessment results for the improvement of programs and services is one of the dozen SACS Principles of Accreditation that all colleges reviewed in 2004 received recommendations on. It is something that colleges are trying hard to improve.

Substantive questions:
01. a. If student learning outcomes assessment is measuring what students have learned for the purpose of improving instruction, do you participate in assessment activities? b. If so, please describe the process and your role. c. (If participating in SLO assessment) Since your most recent assessment cycle, what improvements to the process have been made going into the next cycle?
02. How do you define student success? a. Are any state-mandated tests used in assessing communication skills? Commercially or locally developed assessments? Institutional research reports? If so, please describe them. b. After reviewing assessment results, how do you determine what other information might be needed?
03. a. Have you learned anything about teaching students in developmental courses that you didn't know a year ago? b. If so, what is that? c. How did you discover this?
04. a. Have you learned anything about planning for the assessment of student learning outcomes that you didn't know a year ago? b. If so, what is that? c. How did you discover this?
05. a. Have any areas of your professional preparation or experience served you especially well in conducting outcomes assessment? b. If yes, which ones? c. Are there any areas that you wish you'd had more of? d. Are there any areas that you wish you'd had less of?
06. Do you have an assessment group? What are the various functions of its members? If people in your assessment group have different roles and responsibilities in meetings (such as timekeeper, organizer, advocate, peacemaker, data analyzer, or advisor), what role(s) do you play? Can you give me a recent example?
07. Have there been circumstances in which you've felt uncomfortable discussing the results of an assessment outcome with assessment/IE staff members? With faculty members? If so, how would you improve the process of information sharing?
08. If differences between members of the group arise, how are they resolved?
09. Do part-time faculty members participate in assessment activities? If so, in what capacity?
10. What, if anything, do you think developmental educators (i.e., faculty members, assessment staff) could do to use the results of student learning outcomes assessment more effectively for learning improvement?
11. a. Do you feel that the College has had success so far in getting students through college preparatory courses? b. If so, could you please describe for me one successful strategy that you think has been particularly effective? In your answer, please think about the steps you (or your department) took to make that success occur and describe those.
12. Is there anything else you'd like to add to complete the picture of how student learning outcomes assessment works at your college?


Table 3: Relationship of Research Questions to Interview Questions

Each research question (issue) is listed with the corresponding interview questions in parentheses.

1. How is the professional preparation and educational background of a faculty member who teaches developmental education courses like that of an assessment professional, and how is it different? (Interview question 5)
2. How is the assessment role of a faculty member who teaches developmental education courses like that of an assessment professional, and how is it different? (Interview questions 1, 2, 3, 4, 9; focus group)
3. Which collaborative strategies serve to create common ground for faculty members and assessment professionals to work together on assessment plans? (Interview questions 1, 6, 9, 10; focus group)
4. Which strategies cause estrangement between faculty members and assessment professionals? (Interview questions 1, 7, 8, 9)
5. What role, if any, does an assessment professional play in determining how the results of student learning outcomes assessment will be used for improvement? (Interview questions 1, 6, 10, 11; focus group)
6. Have faculty members at the college become more like assessment professionals, and assessment professionals more like faculty members, in terms of their assessment roles since they began collaborating on student learning outcomes assessment? (Interview questions 1, 3, 4, 5, 9)
7. If so, how have they become more alike? (Interview questions 1, 3, 4, 5, 9)
8. From the perspective of respondents, which assessment approaches have shown the most promising results? (Interview questions 10, 11; focus group)

I recorded both individual and focus group interviews with faculty and assessment professionals on an Olympus DS-2 digital voice recorder and an audio tape recorder. When most of the interviews had been completed, I began to transcribe the voice recordings using the DSS Player software included with the recorder. A convenient feature for transcription, the software has a control panel that allows playback as slow as 50% of normal speed without pitch distortion. Even so, each hour of interview took up to six hours to transcribe manually. A summary of each interview took another two to three hours to complete, depending upon the amount of relevant information the participant had provided. Participants received summaries of the interviews with a request for verification of the information in the summary.


All but one of the participants provided such verification. The additions and changes participants returned supplemented the interview transcriptions.

Field Observations. Impressions of the campus, the people, and the community were recorded as field observations beginning in March and concluding in May on the day of the college’s awards ceremony in the college auditorium. The morning of the awards ceremony, I ran into some faculty members I knew. They seemed torn between competing desires to check messages, locate colleagues to sit next to at the awards, and still maintain a friendly disposition toward me. Life was happening for some, but unraveling for others. It was hard to remain detached. After the awards, in the sunlit courtyard next to the student services building, I spoke briefly with the president, thanking him for endorsing my study to the faculty. We briefly discussed my progress, and he seemed genuinely surprised that I had managed to conduct as many as 11 individual interviews that term (Journal, 5/5/2006).

Focus Group. Chen has advocated focus group interviews because the social environment in which they take place helps researchers to “capture real life drama. In such a group setting, the participants let the researcher know not only what they think, but also why and how they think this way. Sometimes, their opinions even change or evolve during the meeting as a result of the exchanges and stimulations shared by group participants” (2004, p. 4). As this was an ideal way to capture dynamics within this community of practice, a further set of data collected was a focus group session with IR staff members, following a field observation of the College awards assembly.


Through this process, I planned to determine whether, from the perspective of the participants, they had become part of a “whole” that was greater than the sum of its individual participants.

The focus group interview was an instance in which “emic” issues within the case study can cause the direction of the case to transform (Stake, 1995, p. 21) while the research is in progress. While the original concept for the focus group interview was based upon the five disciplines of learning organizations noted by Senge, Kleiner, Roberts, Ross, and Smith (1994), it became clear to me after completing several individual interviews on campus that a more appropriate theoretical model was “communities of practice,” described by Wenger, Snyder, and McDermott (2002). A revised interview protocol was subsequently approved by the Internal Review Board before the May 5 interview. The semi-structured interview protocol for the focus group is shown in Table 4.

Table 4: Focus Group Semi-Structured Protocol

Thank you for agreeing to talk to me today. Although your responses will be recorded on tape, what you say will remain confidential within the context of my study. Please remember, however, that others in the room will hear what you say. The purpose of my study is to identify collaboration strategies and techniques among faculty and non-faculty assessment professionals that have helped you to build a “culture of assessment” in improving developmental reading, writing, and study skills courses. (Note to interviewer: Have all participants signed the consent form?)

Framework for substantive questions (copy provided to participant): “Communities of practice” are “groups of people who share a concern, a set of problems, or a passion about a topic, and who deepen their knowledge and expertise in this area by interacting on an ongoing basis” (Wenger, Snyder, & McDermott, 2002, p. 1). The structure of these communities includes:
1. A well-defined domain of knowledge that helps to create a common identity and affirms the value of that knowledge to community members.
2. A community that provides the “social fabric of learning,” relationships built upon mutual respect and trust.
3. A practice that includes the frameworks, tools, and ideas that members share.

Which characteristics of a “community of practice” do you recognize in your group? Please tell me more about these.


Documents. In addition to the collection of voice responses from interviews, I collected documents such as the college strategic plan, assessment plans and use of results, student learning outcomes reports, and position descriptions. These documents provided additional information on the college's assessment planning, implementing, and sustaining efforts (Banta, 2004). This third source of data permitted triangulation, helping to ensure the trustworthiness of the research. Table 5 lists documents collected to supplement data from individual and focus group interviews.

Table 5: Documents

Data Source | Dissemination Method | Role: What does it do for consumers of the information?
Strategic Plan | Intranet | Provides direction for college faculty and staff activity
Planning Documents Diagram (flowchart) | Intranet | Describes the relationship between various College planning and reporting processes
QEP Committee Roster | Intranet | Assigns responsibility for QEP membership
Quality Enhancement Plan (Developmental Education) | Intranet | Describes strategies and five-year timeline for improving the success of college prep students
Response to the Report of the Reaffirmation Committee | Intranet | Responds to suggestions for improving specific plan elements
Learning Outcomes Assessment Task Force Report (General Education) | Intranet | Evaluates progress on measuring and improving college learning outcomes
The Institutional Learning Outcomes Process | Intranet | Recounts the timeline of events leading to the current general education outcomes assessment process
What We Know About Student Learning | Intranet | Aids budget development
Position Descriptions | Internet | Assigns tasks
Faculty Handbook | Intranet | Explains faculty roles and responsibilities; communicates policy and procedure
College-wide and Faculty Newsletters | Internet | Informs college (since 1985); informs faculty (since 2000)
Student Newspaper (published monthly) | Hardcopy tabloid | Informs students; has features and advertising of interest to the wider community
Longitudinal Tracking System | Hardcopy | Informs external constituents of an institutional research practice initiated in 2002


Table 5 (Continued)

Data Source | Dissemination Method | Role: What does it do for consumers of the information?
Institutional Learning Outcomes and Gen Ed: A Model for Local Assessment | Hardcopy | Informs external constituents of a college practice in institutional effectiveness piloted in 2005

Data Analysis Process

Creswell (1998) describes the inductive analysis of qualitative data as a "data analysis spiral" (p. 143). The process begins with the collection of text and images through interviews, observations, and documents. Creswell advocates getting a sense of the entire database holistically by reading and re-reading the transcripts. At this time, the case worker may write memos in the margins of transcripts or field notes that contain key concepts or phrases. Sometimes it helps to disregard the actual question that precipitated a response and instead focus upon themes in the response. In the next level of the spiral, the responses are sorted into categories, initiating the "describing, classifying, and interpreting loop" (p. 144). Classifying data requires the researcher to look for general themes or dimensions which serve as parents of children and grandchildren (sub-themes). Interpretation may then allow the researcher to connect themes to each other or to constructs elaborated in a research map (such as the list of Emergent Themes in Table 1).

According to Boyatzis (1998), thematic analysis can be used for at least five different analytical purposes. It can allow the researcher:
1. A way of seeing
2. A way of making sense out of seemingly unrelated material
3. A way of analyzing qualitative information


4. A way of systematically observing a person, an interaction, a group, a situation, an organization, or a culture
5. A way of converting qualitative information into quantitative data (p. 5)

There are four learning stages in doing thematic analysis in a particular case:
1. Sensing themes – that is, recognizing the codable moment
2. Doing it reliably – that is, recognizing the codable moment and coding it consistently
3. Developing codes
4. Interpreting the information and themes in the context of a theory or conceptual framework – that is, contributing to the development of knowledge (p. 11)

A thematic code must capture the essence of what it describes. For that reason, good codes have structure:
1. A label (i.e., a name)
2. A definition of what the theme concerns (i.e., the characteristic or issue constituting the theme)
3. A description of how to know when the theme occurs (i.e., indicators on how to "flag" the theme)
4. A description of any qualifications or exclusions to the identification of the theme
5. Examples, both positive and negative, to eliminate possible confusion when looking for the theme (p. 31)
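To make this code structure concrete, the following is a minimal illustrative sketch, not part of the study's actual procedures, of how a thematic code containing Boyatzis's five elements might be represented in software. The class, field, and function names are hypothetical, and the sample code is only loosely modeled on the Collaboration category described later in this chapter.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ThematicCode:
    """One thematic code carrying the five structural elements Boyatzis (1998) describes."""
    label: str                    # 1. a name
    definition: str               # 2. what the theme concerns
    indicators: List[str]         # 3. how to "flag" the theme in the data
    exclusions: List[str] = field(default_factory=list)  # 4. qualifications or exclusions
    examples: List[str] = field(default_factory=list)    # 5. positive and negative examples

    def flags(self, excerpt: str) -> bool:
        """Return True if any indicator phrase appears in the excerpt (a crude 'codable moment' check)."""
        text = excerpt.lower()
        return any(indicator.lower() in text for indicator in self.indicators)

# Hypothetical code loosely echoing the study's Collaboration category
collaboration = ThematicCode(
    label="B. Collaboration",
    definition="Nature of collaboration with others on assessment activities",
    indicators=["worked with", "committee", "institutional research", "shared"],
    exclusions=["solitary classroom assessment involving no other participants"],
    examples=["Positive: faculty member planning assessment with IE staff",
              "Negative: faculty member grading alone"],
)

print(collaboration.flags("We worked with the IR office to design the rubric."))  # True

A structure of this kind simply makes the label, definition, indicators, exclusions, and examples explicit; the analytical judgment of recognizing a codable moment remains with the researcher.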


In the presentation of the results of qualitative analysis, Anfara, Brown, and Mangione (2002) advocated the use of tabular formats for making the data analysis transparent to the reader. A presentation that includes an illustration of the inductive processes of code mapping should demonstrate that qualitative analysis techniques are sufficiently rigorous to ensure trustworthiness of the data. This approach permits the triangulation of themes that emerge from interview, observation, and documentary data.

Analysis Procedures

I kept field notes while on campus for interviews, and I again journaled my thoughts as I began to interpret the data printed on reams of paper. Using the transcribed interviews with supplements, I initially used SPSS Text Analysis to categorize responses to questions thematically. However, I came to believe that this particular software was intended mainly for sorting major themes into categories when a researcher has a large volume of text-based data and is looking for simple patterns and frequencies. My conclusion from the limited results generated was that SPSS Text Analysis was not well suited to discerning complex concepts in a relatively small data set (Analysis Journal, 7/3/2006). (A brief illustration of this kind of frequency counting follows Table 6 below.)

My next attempt to make sense of the data was to analyze responses to each question and to then apply that analysis to the research questions. However, some of the themes I wanted to use in analysis were contained in responses to a number of questions. Also, an interview participant would sometimes offer the best answer to an early question toward the end of the interview. After discussing it with a research and measurement consultant, I decided to analyze the research questions thematically, and then return to the


specific interview questions for answers to the more topical questions arising within the case. I then examined the intent of each interview question more carefully, yielding Table 6 below:

Table 6: Thematic Representation of Interview Questions

Interview Question | Theme | Topical Issue
01 | Participation |
02 | Learning Goals | Are there goals for developmental education other than cognitive skills dictated by Florida state-mandated testing?
03 | Learning about Teaching |
04 | Learning about Assessment |
05 | Professional Development |
06 | Assessment Role |
07 | Information Sharing |
08 | Developing Consensus |
09 | Engagement of Temporary Faculty | Do part-time/temporary faculty participate in outcomes assessment?
10 | Use of Results | How might developmental educators improve their response to this accreditation requirement?
11 | Goal Achievement |
12 | Other |

Something unexpected happened when I sat down the next day to work on the research. When I looked at the research questions and compared them to the information collected, I began to create categories as if I instinctively knew I needed each of those separate pieces of information to adequately answer the research questions. Each of these categories had specific qualifiers that distinguished one type of participation from another. As I began to code the interviews with these categories, I added others. When I finished coding all interviews, I realized that I had further fleshed out the categories for the last interviews coded and that I needed to go through the first ones again and recode.
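For contrast with the hand-coding just described, the following minimal, hypothetical sketch illustrates the kind of simple pattern-and-frequency counting that text analysis software of the sort mentioned earlier is suited to. It is included only as an illustration and is not part of the study's procedures; the theme names and keyword lists are invented.

from collections import Counter

# Hypothetical theme keywords; in practice these would come from a codebook.
THEME_KEYWORDS = {
    "Collaboration": ["committee", "worked with", "share", "together"],
    "Assessment": ["rubric", "measure", "outcome", "test"],
    "Professional Development": ["workshop", "conference", "training"],
}

def theme_frequencies(excerpts):
    """Count how many excerpts mention at least one keyword for each theme."""
    counts = Counter()
    for text in excerpts:
        lowered = text.lower()
        for theme, keywords in THEME_KEYWORDS.items():
            if any(keyword in lowered for keyword in keywords):
                counts[theme] += 1
    return counts

sample_excerpts = [
    "We worked with the IR office on a rubric for the writing outcome.",
    "I attended a workshop on classroom assessment techniques.",
    "The committee met to share results with the department.",
]

print(theme_frequencies(sample_excerpts))
# e.g., Counter({'Collaboration': 2, 'Assessment': 1, 'Professional Development': 1})

Counts of this kind can flag how often theme keywords appear across a large corpus, but they cannot capture the qualifiers, context, and relationships that the hand-developed categories shown next in Table 7 were designed to record.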


When all interviews were coded, the completed code set looked like that shown in Table 7.

Table 7: Analytical Categories with Qualifiers
(Qualifiers are assumed positive unless otherwise noted.)

A. Participation: Focus/intensity of participation in student learning outcomes assessment activities
   Qualifiers: Focus (name of course, department, program); Extent/level of involvement (low, moderate, high); Developmental/Nondevelopmental; Context (example from interview)
   Coding examples: "A-gen ed-low-nondev" might describe a faculty member's participation in gen ed outcomes assessment (for example, a volunteer for the annual assessment week activities); "A-English-high-dev" might describe the participation of an English department faculty member who has assumed leadership in department-level assessment of developmental outcomes; "A-QEP-moderate-dev" might describe a QEP steering or subcommittee member

B. Collaboration: Nature of collaboration with others on assessment activities
   Qualifiers: Member type (faculty, faculty mentor, IE staff, other staff, administrators); Developmental/Nondevelopmental; Context (example from interview)
   Coding examples: "B-IE-Dev" might describe a QEP committee member working with one or more IE staff members; "B-Faculty-Nondev" might describe a faculty member participating in a colloquium

C. Instrumentation: Role in creating, refining, and using a specific measurement tool
   Qualifiers: Creates, refines, or uses; Developmental/Nondevelopmental; Context (example from interview)
   Coding examples: "C-Creates-Nondev" might describe an outcome chair for the (general education) Learning Outcomes Assessment effort; "C-Uses-Dev-Neg" might describe a faculty member teaching a developmental course who administers assessment tools created by others and has had a negative experience with the process


Table 7 (Continued)

D. Analytical Response: Role in interpreting, evaluating, and applying the results of outcomes assessment
   Qualifiers: Level of decision-making role ("low" simply acknowledges the results of outcomes assessments, "moderate" recommends changes to curriculum or instruction, and "high" implements and/or monitors such changes); Developmental/Nondevelopmental; Context (example from interview)
   Coding example: "D-High-Dev" might describe the chair of the English department who collaborated with faculty to determine why students were not succeeding, what could be done to improve instruction, and how to implement and monitor the changes

E. Communication: Role in communication with regard to student learning outcomes assessment
   Qualifiers: Receives/initiates; Focus level (faculty, faculty mentor, department, committee, program, institution); Context (example from interview)
   Coding example: "E-Receives-Faculty" might describe a faculty member who receives communication about outcomes assessment from another faculty member

F. Reflexivity: Perceived importance of reflective practice
   Qualifiers: Importance (low, moderate, high); Context (example from interview)
   Coding example: "F-Low" might describe a participant who has not mentioned or alluded to reflection as an element of a continuous improvement process

G. Development: Focus of professional development
   Qualifiers: Time frame (past, present, future); Leader/Participant; Developmental/Nondevelopmental; Context (example from interview)
   Coding examples: "G-Past-Participant-Dev" might refer to professional development (to enhance students' developmental outcomes) undertaken prior to the QEP initiative; "G-Future-Participant-Nondev" might refer to professional development the participant wishes to undertake in the future

H. Education: Focus of educational background
   Qualifiers: Major (e.g., Counseling, Measurement, English, Reading); Context (example from interview)
   Coding example: "H-Counseling" might refer to a faculty member with a Master's degree in Counseling


Table 7 (Continued)

I. Experience: Teaching experience
   Qualifiers: Years; Developmental/Nondevelopmental; Context (example from interview)
   Coding example: "I-5-dev" might refer to a faculty member who has taught developmental courses for 5 years

J. Barriers/Gateways: Perceived barriers stand in the way of collaboration or student success; perceived gateways facilitate these processes
   Qualifiers: Barrier/Gateway; Type (collaboration or student success); Description of barrier or gateway; Developmental/Nondevelopmental
   Coding examples: "J-Barrier-Collaboration-Language" might refer to a perceived language barrier needing to be overcome in an effort to collaborate effectively; "J-Gateway-Collaboration-Colloquium" might refer to an opportunity for information sharing embedded in organizational culture

K. Integration: Integration with organizational resources
   Qualifiers: Horizontal/Vertical; Time frame (past, present, future); Type of resource; Context
   Coding example: "K-Vert-Past-Financial-Absent" might indicate that college financial resources were perceived to have been absent in the past

L. Best Practice: Perceived best practice in improving student learning
   Qualifiers: Description
   Coding example: "L-The QEP is the College's plan for college preparatory success: 'This seems to be a very solid response to SACS, and I'm proud of it.'"

M. Assessment Type: Type of assessment
   Qualifiers: ELT, longitudinal, cognitive, affective; Context
   Coding example: "M-Longitudinal-Key variables in tracking student success were student persistence, degrees awarded, ethnicity, and age."

N. Authoritative Models: Authorities in teaching and learning
   Qualifiers: Name
   Coding example: "N-John Roueche"
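Purely as an illustration, and not as a description of the study's actual procedures, the code strings in Table 7 follow a regular enough pattern that they could be represented and tallied programmatically. In the hypothetical sketch below, the category letters follow Table 7, while the class, function, and parsing conventions are invented for the example.

from dataclasses import dataclass
from typing import List

CATEGORY_NAMES = {
    "A": "Participation", "B": "Collaboration", "C": "Instrumentation",
    "D": "Analytical Response", "E": "Communication", "F": "Reflexivity",
    "G": "Development", "H": "Education", "I": "Experience",
    "J": "Barriers/Gateways", "K": "Integration", "L": "Best Practice",
    "M": "Assessment Type", "N": "Authoritative Models",
}

@dataclass
class CodedSegment:
    category: str          # letter from Table 7
    qualifiers: List[str]  # remaining hyphen-separated qualifiers
    negative: bool         # qualifiers are assumed positive unless "Neg" appears

def parse_code(code: str) -> CodedSegment:
    """Split a code string such as 'A-English-high-dev' into category and qualifiers."""
    parts = code.split("-")
    letter = parts[0].upper()
    if letter not in CATEGORY_NAMES:
        raise ValueError(f"Unknown category letter: {letter}")
    rest = parts[1:]
    negative = any(part.lower() == "neg" for part in rest)
    qualifiers = [part for part in rest if part.lower() != "neg"]
    return CodedSegment(letter, qualifiers, negative)

segment = parse_code("C-Uses-Dev-Neg")
print(CATEGORY_NAMES[segment.category], segment.qualifiers, segment.negative)
# Instrumentation ['Uses', 'Dev'] True

A representation like this would only support sorting and counting coded segments; assigning the codes in the first place remained an interpretive, manual task in this study.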


Journal entries created during the data analysis process yielded insights into site-specific terms used at Sunshine State Community College to describe aspects of student learning outcomes assessment. These are documented in Table 8: Local Definitions of Planning and Assessment Processes and Documents in Chapter Four.

Ethics

A case study worker has an obligation to create a contract with the institution and the actors under study. Participants must receive information about the possible harm or inconvenience to them for their participation. Further, they have the right to review and comment upon the data collected (but not the right to change the conclusions of the research study). For example, in this study, each participant received a summary of the transcribed interview and was asked to verify or correct its contents. All but one of 12 interview participants responded to this request. Appendices A, B, and C contain copies of the individual interview and focus group prospectus/consent and a brochure on the study used to generate interest in participation. These documents contain the ethically required elements of a human subjects research protocol.

Because of the highly political nature of institutions under study, such institutions have the right to anonymity. Pseudonyms and additional cover for the actors through the use of composite characters (Creswell, 1998, p. 133) have been used in this case study to shield the identity of individuals. Also, position titles have been changed to maintain the functional areas they represent but prevent readers from identifying individuals. Similarly, "Sunshine State Community College" is a pseudonym for the real college.


Reliability and Validity: Ensuring Trustworthiness of the Data

Verification and validation are about ensuring the truthfulness of the research. In that sense, validation does more than ensure quality; it binds the researcher into an ethical contract with those in the field. Whereas in quantitative research designs (e.g., surveys) there is a wall between ideas and action, in case study research, the wall between ideas and action is porous. Thus, in this very personal form of study, the researcher reveals sources of personal bias so that the reader can form his or her own conclusions about the results (Bartlett, 2005).

For example, I worked as an adjunct instructor for two years prior to my current position. Through teaching, I gained a perspective on instruction within a business and technology classroom. While my 10 years of more recent experience as an institutional researcher may bias my views toward the IR perspective, I have been studying student learning outcomes assessment as a component of my current doctoral studies in Curriculum and Instruction since 2001. Although I have a broad perspective on this topic, to mitigate personal bias in analyzing and interpreting the data from this case, I maintained a daily journal of reflections upon the research in progress while in the field and then again while undertaking the data analysis. This journal has become an artifact of the study. I have also verified interview data by asking participants to confirm the truthfulness of my impressions.

In drawing conclusions about the research questions, I have examined evidence from multiple sources: surveys, reports, documents, observations, interviews, and personal reflection. The boundaries of this particular case and the authenticity of experience may or may not permit the reader to make "naturalistic generalizations"


(Stake, 1995, p. 86) concerning the applicability of aspects of the case to his or her own college.

Summary

In short, this study has used qualitative methods within a case study model to examine the institutional context, processes, and change strategies employed by faculty and assessment professionals in assessing student learning outcomes in developmental reading and writing. Multiple sources of data collected through an ethically sound process have been used to examine the context in which these professionals work to improve student success.

In the next two chapters, I will present findings (Chapter Four) and interpret these findings (Chapter Five). In particular, in Chapter Five, I will present the conclusions of this research and discuss the implications for the theories underpinning community college learning outcomes assessment, implications for the practice of assessment in a community college developmental education program, and implications for further research.


Chapter Four

Results

This chapter presents the findings of the study. Chronological and thematic analyses of the case data have provided insights into the research questions, topical issues, and the development of the College's Quality Enhancement Plan (QEP), an essential component of the College's regional accreditation process. The six sections described below comprise the contents of this chapter.

I. Synopsis of Findings contains a brief digest of findings, both research and topical.

II. An Introduction to the Actors briefly describes faculty and assessment professionals (identified by pseudonym) who shared their experiences in personal interviews with this researcher and identifies their orientation toward assessment.

III. A Brief Timeline for the Development of the College's Learning Improvement Focus summarizes the sequence of events leading up to the approval of the QEP. The purpose of presenting this timeline is to help the reader put the QEP development process into a larger framework. In this framework, developmental education provides a pathway to the achievement of student learning outcomes expected of all graduating students (general education outcomes).

IV. Findings on Research Questions provides detailed findings on the main issues (research questions) in the study. These findings include data from thematic


analyses of interviews and quotations from faculty and staff members that support these findings.

V. Findings on Topical Issues provides background information from the analysis of three specific interview questions that help the reader to understand the interplay of research issues.

VI. Chapter Four Summary is a point-by-point summary of the eight research and three topical questions answered through this study.

I. Synopsis of Findings

Findings on the research questions, the instrumental issues within the case, describe the community in which full-time faculty connected with others, including part-time faculty, instructional administrators, student services staff, and assessment professionals. Supplementing the research findings, topical findings provide background information that helps in understanding the interplay of research issues. Topical findings include issues such as faculty and staff goals for developmental education, the participation of part-time faculty in assessment, and how the College could more effectively use the results of assessment for improvement. While this section presents only a brief synopsis of the research findings, details (including comments from faculty and assessment professionals) may be found in the fourth section of this chapter, "IV. Findings on Research Questions." Likewise, detailed findings on the topical issues may be found in the fifth section of this chapter, "V. Findings on Topical Issues."

Synopsis of Research Findings. Dimensions of professional preparation and experience included professional development, experience, and subject and level of


educational degree. Most assessment professionals had teaching experience and a variety of educational backgrounds. Their level of education tended to be higher than that of faculty. Faculty members, likewise, brought a variety of experiences to teaching, including coaching and counseling. Faculty members were actively engaged in developing their measurement and teaching and learning expertise, as evidenced by their past, present, and future references to professional development in interviews. Assessment professionals, on the other hand, tended to speak of professional development in the past tense, as if most of their learning had already occurred.

Aspects of the assessment role were communication, participation, and instrumentation. While faculty members referred to occasions when they received assessment communication, assessment professionals assumed a much more proactive role, both receiving and initiating assessment communication. Further, the participation of an assessment professional was qualitatively different from that of a faculty member. Faculty members focused the discussion upon the outcome they would like to achieve for the student. Assessment professionals, on the other hand, reframed the outcome in a way that was credible to faculty and measurable in terms of student response. Assessment professionals thus combined their expertise in research design with faculty curriculum expertise to customize a measure (or measures) for a particular outcome.

Continuous transformation of the organizational structure to accommodate tasks to be accomplished, activities involving individual and group reflection, and occasions for the celebration of college successes created the common ground for faculty and assessment professionals to work together on assessment plans toward the improvement of developmental education. The multiple structures, both vertical (steering) and


horizontal (coordinating), made widespread inclusion and participation possible. The QEP Committee was designed with a porous boundary. This enabled official members of the Committee to bring other faculty and staff into dialog when appropriate.

Themes that explained estrangement between faculty members and assessment professionals included academic structure, gateways and barriers to collaboration, and collaboration partners. While organizational structure served as a barrier to participation and communication in some cases, it became a gateway in others. The formal structure of the academic leadership team, for example, served as a gateway for communication within academic affairs, but served as a barrier to direct communication with assessment professionals for most faculty members. In developing the QEP, however, the College tailored its structure to the tasks at hand, facilitating collaboration in three distinct phases of development. The College also used structure to form a gateway by creating common ground for collaboration, thus becoming a "community of practice" (Wenger et al., 2002, p. 1). This occurred by structuring deliberate activities over a number of years (i.e., an annual learning theme) that made possible both thoughtful reflection about the college's vision statement and the celebration of that vision. With faculty and staff members thus joining in college-wide learning activities, the habit of collaborating in smaller groups for learning assessment could eventually become a more routine practice through structured interactions between faculty and institutional research staff.

A concept that developed in the analysis of this study was "analytical response." The concept was defined as the role one plays in interpreting, evaluating, and using the results of outcomes assessment. A low analytical response would be to acknowledge the results of outcomes assessment. Faculty and staff members discussing the results of


outcomes assessment at a college-wide meeting would provide an example of low analytical response. A moderate level of analytical response would be to recommend changes to curriculum and instruction based upon the results. An example of this would be an instructional administrator making plans to change instructional strategies. However, a high level of analytical response would be to implement and monitor such changes. An example of this would be the decision of the QEP Committee to implement a learning community model as one strategy to improve student success.

While assessment professionals acknowledged or recommended changes to instruction based upon the results of outcomes assessment, academic leadership and faculty members implemented and monitored the indicated changes with the resources (e.g., time, money, leadership) provided by College administrators. Thus, faculty and assessment professionals parted company in the decision-making role within the student learning outcomes assessment process. The QEP structure, in which the IR officer was defined as a staff resource rather than a member, exemplified this division of labor. This meant that the measurement expertise of assessment professionals was often not heard in conversations about implementing changes to instruction at the course level.

While all interview participants exhibited characteristics of reflexivity and a desire to learn about both teaching and learning and measurement, there was little evidence that the roles of faculty members and assessment professionals were merging. For example, the decision-making and implementation role of academic affairs in outcomes assessment was to refine criteria for determining a student's level of proficiency in a faculty-defined competency. However, without good instruments for authentic assessment, the outcomes assessment process would be limited to measuring


proxies for learning like grades. Assessment professionals helped faculty by framing the research questions in a measurable way. The process often involved a certain amount of struggle to communicate with faculty on outcomes. Assessment professionals facilitated and coordinated the logistics of the assessment process and provided an interpretation of the results (Learning Outcomes Assessment Task Force Report).

Synopsis of Topical Issues. Topical issues were secondary issues that provided background information to help in understanding the interplay of instrumental issues (research questions). This section contains a summary of responses to individual interview questions 2, 9, and 10. The "Findings on Topical Issues" section of this chapter is devoted to a description of these background issues in the case. Issues included:
1. specific goals of developmental education (individual interview question 2),
2. the participation of part-time faculty in student learning outcomes assessment (individual interview question 9), and
3. methods of using the results of learning outcomes assessment toward improving developmental education (individual interview question 10).

The first issue, developmental education goals above and beyond Florida-mandated exit testing, included the general education outcome "self-direction," student affective development (i.e., motivation), and success at the next level. The second issue, participation in student learning outcomes assessment among part-time faculty, was limited to only a handful, typically those with an interest in curriculum governance and time to spend on campus during the day. The limited participation of part-time faculty has been problematic for curriculum development, especially in


curricular pockets where part-timers have predominated. The third issue, obtaining more useful results from student learning outcomes assessment, means that developmental educators should start with research-proven instructional strategies for specific populations of students, take more time to focus and reflect upon assessment results to achieve discipline-specific application to curriculum, and develop measurement habits within their communities of practice.

Following these brief glimpses of research and topical findings, the next section of this chapter provides a profile of the participants in this study.

II. An Introduction to the Actors

This section of Chapter Four introduces eight faculty members, an instructional administrator, and three assessment professionals at Sunshine State Community College who were interviewed for this study. It also classifies them according to the strength of the evidence each provided in support of assessment collaboration.

Faculty members brought an assortment of perspectives to the process of collaborating on outcomes. The least involved in collaboration were the two part-time temporary faculty members, Philip and Maida. Other faculty members, such as Terri and Mary, were full-time but had only superficial involvement with collaborative activities. The most involved faculty members were Beth, Dina, Geri, and Fred. All four had many years of experience teaching at Sunshine State Community College and spoke on matters of outcomes assessment from an insider's perspective. It is these four faculty members whose voices speak most forcefully within the findings documented in this chapter.


As part-time faculty members, both Philip and Maida were hired to teach classes on a one-semester contract. Philip was eager to put his experience to work in his classroom at Sunshine State Community College. He talked about ways to motivate students to read and to gauge their progress through assessment. However, Philip didn't have a way to meet other faculty or staff members with whom he could share his ideas. Maida was in a similar situation. Although a part-time temporary faculty member, she appeared to be invested in her work teaching reading and writing courses and was eager to talk about her experiences with classroom assessment. She had volunteered to be interviewed for this research in anticipation of her interview summary, which she enthusiastically endorsed. Alas, she could provide no information on collaboration, either with faculty or with assessment professionals, as the following passage reveals:

I, as a reading specialist have recommended unofficially that they should carry out what I call a needs assessment. A needs assessment being that they have to first of all figure out what the people we are bringing in, what are their scores, what do they look like, how well are they performing in reading and writing based on their FCAT scores, SAT, or whatever scores they have coming in. I would have driven their decision making process to know what exactly they are planning on, what skills are needed to be used on these students and who are the teachers they have to hire. But beyond that departmental level, since I'm not really a member of any of these committees that determine what happens, I don't know what they are doing. (Interview with Maida)


Both Maida and Philip were trying to use their experience and knowledge to help students during the Spring term, but knew nothing about the large-scale QEP preparations to improve developmental education beginning in the Fall.

Unlike Philip and Maida, both Terri and Mary worked for the College full-time. Mary was another passionate professor, straddling a load of both prep and non-prep writing courses. She was eager to talk about her experiences implementing classroom and general education assessment, but did not say much about collaboration, either with other faculty members or with assessment professionals. The following passage reveals Mary's orientation to collaboration on assessment:

Things have been done a certain way here in the English department for a long time. There is one way that we share information that I didn't mention to you. We on the Learning Outcomes Committee come back to the English department and share what we're doing in the Committee. So that is one way we do communicate. And everybody's been pretty upbeat about doing that, but you know, we really don't talk a lot about assessment. (Interview with Mary)

Terri, on the other hand, had taught study skills courses for students moving through a federally funded set of services for students who were first-generation in college, disabled, or low-income. Terri had assisted the Quality Enhancement Plan (QEP) Director in the task of writing the final document, but was not a participant during the QEP implementation. When asked if she participated in outcomes assessment activities, Terri replied, "Honestly no, and I think that's one of the issues in our SACS review…" Both Terri and Mary, however, provided some limited insights into outcomes assessment.


Beth, Dina, Geri, and Fred all had solid experience in collaborating with other faculty and with assessment professionals. At the time of our interview, Beth had been with the college for over 10 years. She was involved in every aspect of quality enhancement planning and general education assessment, from chairing committees to writing documents. Teaching was her great passion, and she talked at length about the frustrations and successes of collaborating on, planning, and implementing assessment plans. Dina, on the other hand, was a well-respected faculty leader who had taught prep courses for nearly 30 years. Although she had not been able to participate in the QEP research and strategy formulation stages, she was fully invested in the implementation of the learning communities strategy for the improvement of college preparatory success.

Providing an outsider's perspective on developmental education assessment, Geri was a very plain-spoken woman with a lot to say about the relevance of her experiences with coaching and counseling. She had been involved with faculty efforts to define general education outcomes and link them to specific courses. Also, while she had not been involved in the research and strategy formulation stages of the QEP, she was very involved with the development and implementation of a common course syllabus for the study skills course. Although she had little professional preparation in outcomes assessment, it seemed as if Geri's experience had given her an intuitive feel for the authentic assessment of study skills competencies. Her job within the QEP was coordinating the group of faculty and student services staff responsible for the effort to get more prep students into study skills courses.

Fred was the most qualified of any of the faculty interviewed to discuss the evolution of collaborative assessment within developmental education at the College.


Fred was a veteran of the prep writing classroom. He had initiated the collaboration between institutional research and developmental education by requesting student success data from a longitudinal tracking system. As he related this lifetime of experiences, the hour was sometimes filled with angst from past experiences without sufficient resources to help the vast herds of students moving through prep reading and writing courses. As a state-wide leader of developmental education, he had been well aware of successful strategies other colleges were using. At other times during the interview, he expressed great satisfaction with the fruits of the QEP collaborations, as if the trajectory of his career had destined him to be there just at the right moment to help the group forge a winning plan:

The challenge is to get everyone on the same base and then have either direct resources or shared resources to be able to get it done. Any point you take away any of the shared resources, the program will not be as successful. And what I really enjoyed with the QEP process here, was the give and take of the administration. They recognized that they were actually ready to make some changes. (Interview with Fred)

Michelle was the Dean of the Associate in Arts program. She was very involved in shaping discussions about general education outcomes. She also led faculty members toward a discourse on the appropriate responsibilities and roles of the faculty, including "teaching, professional development, college service, service to students and public service" (Faculty Position Description). Faculty consensus on responsibilities and roles ultimately became the core of a single position description for all faculty members at this non-union College. Michelle was also a member of the QEP and had previously taught


prep math and study skills courses. She was touched that the faculty as a whole recognized developmental education as the learning area they would most like to focus upon in the QEP:

I've been around for a while and been in the field, there are so many new people that have not, and seeing everyone across the campus come together and saying, you know what? This is a population of students that we really need to spend some time and focus on. I think that was the best thing that came out of the whole process. (Interview with Michelle)

The final group of interviewees was the assessment professionals. All three (Carolyn, Jeff, and Joe) talked about their wealth of experience collaborating on developmental outcomes assessment. Carolyn's particular area of expertise was adult education. While she had previously taught prep reading courses, Carolyn's role as an assessment professional was to direct the QEP through its research and strategy formulation phases. She shared the group's exploration of the best practices literature in developmental education and facilitated their often contentious strategy negotiations with College administrators. The culmination of these efforts was a plan that was approved by SACS.

Jeff had just completed some consulting work for the college when Academic Affairs needed a facilitator for a pilot of the general education outcomes assessment plan developed by faculty. Jeff was well liked by faculty, and the role turned into a job as Assessment Coordinator, reporting to the Director of Institutional Research. Jeff was also


responsible for carrying out QEP assessments. When asked if he had learned anything new in the last year about planning for learning outcomes assessment, he replied:

Yes, definitely. The QEP evaluators who came on-site were informative and shared some good insight. We could sit here and read books and attend conferences, but not until you have people who are practitioners who have had success in their college programs do you really have some input that you can also use in that perspective. (Interview with Jeff)

Joe had taught business and computer science courses before becoming an institutional researcher. Ultimately, he became Director of Institutional Research at Sunshine State Community College, serving as SACS liaison for the College's recent reaffirmation of accreditation. After a number of years as an assessment professional, he was recognized by a Florida-wide research group for his innovative use of longitudinal tracking, having collaborated with Fred and other faculty to enhance college prep students' success.

During discussions about assessment, this researcher became aware of some unique definitions of assessment-related terms used locally by faculty and staff. These terms and their definitions are shown in Table 8.

Table 8: Local Definitions of Planning and Assessment Processes and Documents

Term | Definition
Strategic Imperatives | Goals that span more than one year
Colloquium | A full faculty meeting that takes place during Professional Development (planning) days, usually in the afternoon following a college-wide assembly
Learning Outcomes | Work of the faculty and Institutional Effectiveness staff toward developing learning outcomes, rubrics, and measures for general education effectiveness (also referred to as the Learning Outcomes Assessment Task Force (LOATF) by administrators)


Table 8 (Continued)

Term | Definition
Matrix | A cross-reference list of credit-bearing courses and the general education outcomes to which they contribute

The collective voices of eight faculty members, their academic leader, and three assessment professionals have provided evidence in support of a chronology of events beginning with the college's exploration of learning outcomes and assessment and ending with the approval of the Quality Enhancement Plan.

III. A Brief Timeline for the Genesis of the College's Learning Improvement Focus

In 1996, the President began college-wide discussions on mission and vision. A year later, the College adopted the current vision statement emphasizing shared values, openness, and inclusion. In 1998, faculty focus groups at an annual meeting agreed that the college did not have clearly enunciated outcomes for students (The Institutional Learning Outcomes Process). A small group of faculty members then went to work on this newly defined task. They began by formulating an initial set of general education outcomes and distributed readings to college-wide faculty about the learning outcomes assessment process. Faculty members were then asked to complete the Angelo & Cross (1993) Teaching Goals Inventory. The ensuing faculty-wide discussions about assessment broadened to other constituents, such as students. The goal "self-direction," suggested by students, has since become a competency highly valued by faculty and students alike.

By 2001, faculty members received the set of six approved general education outcomes and were asked to associate them with specific courses. The resulting document became known as the "matrix." Meanwhile, discussions about teaching goals had resulted in a single college-wide position description for faculty. It described the five


major faculty roles and responsibilities: teaching, professional development, college service, service to students, and public service (Faculty Handbook).

In Fall 2002, the college began to study themes embedded in the College vision statement, one each year. Each theme was tied to the College's vision of how it would serve society at large and provided food for thought for faculty, staff, and students through classroom-based study, guest speakers, and college-wide activities. About this time, faculty members began to take more of an interest in reviewing institutional research reports on student success measures. Joe, the Director of Institutional Research, started to work with Fred, a faculty member, to examine data from a consultant-written longitudinal tracking system. The system tracked the progress of students who had started in college preparatory reading and writing for eight semesters to determine their enrollment, grades, and degree completion success by various demographics (Longitudinal Tracking System). This cohesive group of faculty members was taken aback by the failure of so many students to pass through prep and into credit-level courses successfully. They designed and implemented a number of instructional strategies in 2002 and 2003. Fred, the faculty member who worked with Joe (Director of Institutional Research) on longitudinal tracking measures, had this to say about early efforts to improve developmental writing:

I contacted [Joe] and said we need some measures, I prefer longitudinal measures for developmental students. That's where it began. The lady who had already supplied a program to a couple of colleges [provided us with a consultant-written system to do longitudinal tracking]. And we used that to begin tracking basically persisters/non-persisters. At


that point I went to the department and said we are graduating under the state average. We have to put together a program [to improve student success].

After implementing the Longitudinal Tracking System, Fred and Joe collaborated on the interpretation of results from the system. Then Fred went to the faculty in the department to formulate instructional improvement strategies:

The initial assessment of six semesters for one cohort produced an awareness in the department that we were not retaining these students. Since our developmental program led into ENC1101, one of our measures of persister/non-persister success was success in ENC1101 (English Composition I). Faculty learned that students weren't succeeding – what could we do? At that point we began to put in place some collaborative learning. I was one of the first to go. We then put in place, through departmental suggestions in minutes, a minimum number of activities. It was very basic to begin with. It started with a unified syllabus for this particular course so that everyone is on the same page. It led to a small workshop in which we called in not only the permanent teachers but the adjuncts. We said, "Can we all get together on the syllabus, on the nature of the non-punitive grading system?" Now we always had the A, B, C, N grades. The definition of the "N" grade and how it was actually calibrated had to be agreed among the teachers. Then within the courses themselves, we went to test-retesting for competency purposes while retaining the CLAST-style essay. We also adopted a paragraph/CLAST-style final for


the first level/second with blind grading and training for the faculty in the six points – standard stuff. I watered it down, but we did simplify it for college prep level II and then really simplified it for level I, paragraph.

After developmental writing faculty had already implemented some changes to instruction, Joe provided Fred more specific information on the success of students they had been tracking. They found that they lost about half of their students the term after completing college-level writing. While strong departmental leadership allowed faculty to put a number of promising changes into effect, they found that the limited time they had to devote to maintaining these interventions was ultimately not enough to impact student success in the long run.

The Institutional Research Office kicked off 2004 with a publication that compared the college's performance to Florida-wide student performance on accountability measures and grades in specific courses (What We Know About Student Learning). Several of these measures focused upon college prep success. Later that year, faculty identified college preparatory education as the focus of the college's Quality Enhancement Plan (QEP), and other constituent groups affirmed this choice. The QEP steering committee began to study best practices in developmental education. Later in the Fall, information from faculty focus groups was triangulated with assessments of program deficiencies and developmental education best practices to determine which practices needed to be enhanced.

In Spring 2005, college-wide communications such as the campus newsletter discussed what had been learned during the research phase. The Strategies and Initiatives team in the second phase then collaborated on strategies to improve college preparatory


education over a period of five years. Final strategies, including a new organization and associate dean for college prep, were negotiated with college administrators. The QEP team adopted longitudinal tracking of students as one of the college's summative assessment measures. In 2006, after approval of the plan by the Southern Association of Colleges and Schools (SACS), teams from the College Prep Coordinating Committee began to iron out the details of intervention strategies, scheduled to begin in the Fall.

The chronology of events from the college's exploration of learning outcomes and assessment to the approval of the Quality Enhancement Plan showed the importance of the college's foundational work in developing student learning outcomes. These findings have been combined and triangulated with documents, where available.

IV. Findings on Research Questions

Thematic analyses of interview data and documents have provided evidence to address each of the eight research questions, the instrumental issues of this case study. This researcher designed the individual interview questions to get college faculty and staff members to talk about their orientation to assessment as a tool for improving teaching and learning, to judge whether or not they had sufficient preparation for the task going in, how they had grown professionally (and in what ways), what the process looked like, what role(s) they played in making it work, and whether they felt as though it had made a difference for students.

By examining their responses to these questions, this researcher was able to probe the "assessment" intersection between the roles of faculty and assessment professionals. None of these individuals alone seemed to have a complete picture of what "assessment"


at the college looked like. Each one seemed to have a unique window into the process. The purpose of conducting this case study was then to develop the larger picture by fitting these individual views together.

Research Question 1

How is the professional preparation and educational background of a developmental education faculty member like that of an assessment professional, and how is it different?

Dimensions of this issue included experience, professional development, and educational degree and level. These dimensions are discussed in detail in the paragraphs and tables that follow this introduction. Most of the assessment professionals had teaching experience and a variety of educational backgrounds. Their level of education tended to be higher than that of faculty. For example, all assessment professionals interviewed had doctorates. Faculty members, likewise, brought a variety of experiences to teaching, including coaching and counseling. They were actively engaged in developing their measurement and teaching and learning expertise, as evidenced by their past, present, and future references to development in interviews. Assessment professionals, on the other hand, tended to speak of professional development in the past tense.

Experience. Faculty who participated in interviews for this case study brought a wide variety of experiences with them. These included K-12 education, counseling, developmental and non-developmental teaching, and coaching. Although one faculty


member had little formal training in assessment, she seemed to have an instinct for authentic assessment from her previous experience with students:

Every coach will tell you that they look for players that have whatever that coach calls it – the X-factor – that have self-discipline, that have self-motivation, that have organizational skills, that have leadership skills, all of the kind of – when you get down to it, nebulous areas. If those are important qualities to you, you have to find an objective way to evaluate them. You also have to find a way to teach them those skills. (Interview with Geri)

Counseling experience, according to Beth, allowed her to look at students holistically. This view of the student helped her to look for factors contributing to a student's failure or success in order to provide timely intervention, where needed. When asked if she had goals for developmental students other than test scores, she replied:

It's difficult for me to narrow it down just to academic terms. I look at the holistic picture, and look at changes not only in academic levels or changes in reading comprehension or vocabulary, but also looking at changes in attitudes, perceptions and behaviors. (Interview with Beth)

All but one of the three assessment professionals (Carolyn, Jeff, and Joe) interviewed had prior teaching experience, and Joe was looking forward to doing more teaching.

Professional Development. Faculty and assessment professionals showed differences in their time orientation to development. While assessment professionals spoke almost exclusively of professional development as something that happened in the


past, faculty members had a variety of time orientations, including past, present, and future, as exemplified in Beth's discussion of professional development:

I think (in the future) maybe a crash course or review (not so much for me) in different types of assessment – qualitative vs. quantitative – just a bare-bones review of those kinds of basic principles and concepts of assessments. Statistics is very helpful. (Interview with Beth)

One assessment professional (Jeff) explained that as the college had limited resources for professional development, the approach taken was to ensure that any expenditure enhanced valued skills and knowledge. For example, a lot of resources (especially time) had been invested in developing local instruments for general education outcomes assessment. This came from the desire to measure what students at the College received from their experience that could not be measured on nationally normed instruments. Faculty and staff members were thus actively seeking measurement skills through professional development opportunities to improve their approaches to assessment.

Education. While not all participants attributed their assessment expertise to educational background, some assessment professionals and faculty reported degrees in education. Likewise, both assessment professionals and faculty had master's or higher degrees in non-education fields like Business Administration or English. The main difference in educational background was level of education: faculty teaching college preparatory courses tended to hold master's degrees; assessment professionals tended to hold doctorates. However, at least two faculty members indicated that they were working on doctorates.


While Research Question 1 was designed to determine similarities and differences in the professional preparation and educational background of faculty and assessment professionals, Research Question 2 was designed to elicit discussion of the roles played by each within the structure of their organization.

Research Question 2

How is the assessment role of a developmental education faculty member like that of an assessment professional, and how is it different?

Aspects of the assessment role were communication, participation, and instrumentation. These dimensions are discussed in detail in the paragraphs and tables that follow this introduction. While faculty members referred to occasions when they received assessment communication, assessment professionals assumed a much more proactive role, both receiving and initiating assessment communication. Further, the participation of an assessment professional was qualitatively different from that of a faculty member. Faculty members focused the discussion upon the outcome they would like to achieve for the student. Assessment professionals, on the other hand, reframed the outcome in a way that was credible to faculty and measurable in terms of student response. Assessment professionals thus combined their expertise in research design with faculty curriculum expertise to customize a measure (or measures) for a particular outcome.

Communication. The communication role among assessment professionals was generally more proactive than that of faculty members regarding assessment. All assessment professionals interviewed indicated that they had initiated communication


within a learning outcomes assessment context. Five of eight faculty members (63%) did so. While the other faculty members were enthusiastic users of various classroom assessment techniques, they were either not presently teaching or were part-time faculty. However, all eight faculty members indicated a "receiving" role in communication about assessment. An example of faculty-initiated communication on assessment follows:

There are four or five of us who decided that if we were going to plan the learning communities model for this school, then we needed more information. We needed more guidance. So a couple of weeks ago, we had [a consultant] come in from a college in North Carolina. For two days we were prepared to explore the ideas of learning communities, to talk about in general what learning communities are, to talk about some models that are used at other institutions. We wanted to emerge from that meeting with our own model, which we have done. So that's been my role. (Interview with Dina)

Dina was highly involved not only in bringing faculty and staff together from different parts of the college, but in conducting research and facilitating a consensus on a model for implementing a learning community at Sunshine State Community College.

Participation. Assessment professionals played a different participatory role than did faculty members. While it was the faculty role to determine which student outcomes constituted evidence of student success, it was the assessment professional's role to reframe the outcome in a measurable way. Toward that endeavor, the assessment professional provided a service for faculty, much like that of an attorney to a client. It

PAGE 134

was the job of this professional to climb into the faculty mindset in order to tailor a measurement that fit both faculty and student comfortably.

Some faculty members, especially part-timers, however, did not participate at all in student learning outcomes assessment:

Groupings that pull everyone together to look at what has been working, that would be perfect. Because right now, they have done a wonderful job giving us opportunities for professional development, preparing us for the fact that we have to take responsibility for how these kids are learning, knowing what's available out there, knowing what works and what doesn't work. The instructor has to take the initiative, but at the same time there should be a collaborative effort, there should be a process whereby after they have given us the workshops, and what needs to be done. We should go beyond that level and we should come together and say what has been working best here. What strategies have you been employing? I should know what [another faculty member] is using when she's teaching inferencing. I should know what strategy is working or how many students are responding to that. Right now I don't have that information. (Interview with Maida)

While some faculty did not participate in assessment, other faculty played a role that was more akin to that of an assessment professional in terms of participation, communication, and involvement. For example, Geri talked about the roles of the members of her QEP subcommittee, composed mostly of student affairs staff members, in the following passage:

The [Student Learning Skills] subcommittee is a fairly small, cohesive subgroup. Our purpose was to enhance the enrollment. Of course our QEP is for the success of our prep students and this subcommittee is supposed to address the role of the course in that general purpose – to enhance what it does for students. Within that group it's a mixture that is overwhelmingly student affairs folks. The chair of the committee is a counselor, the director of testing and assessment is on the committee, there's an advisor on the committee, there's the VP for student affairs, and the director of enrollment services. Then there's myself, and the person under whom the preps and study skills fall. I think that if I could have changed the members of the committee, I would have put at least one more faculty member on there. I'm in an odd role, and I fall into an odd niche anyway. We don't have that many people that have been both full-time faculty and full-time student affairs. (Interview with Geri)

Geri played the role of facilitator both within the committee among student affairs professionals and outside of it as she worked to bring other faculty together on the issue of a common course syllabus for study skills.

Instrumentation. While assessment professionals and faculty members both had a stake in putting the right measurements in place, the impact upon students was felt more keenly by faculty members, who were closer to the "firing line" (Learning Outcomes Assessment Task Force Report). Geri indicated that what she had learned about planning for student learning outcomes assessment was that many of the college's general
education outcomes defined by faculty were difficult to measure. To explain this, she drew an analogy to students who must choose a research topic for a class assignment:

I think I probably have a clearer idea of how those outcomes play a role with what I'm doing in the classroom. I think I've probably also learned that many of the outcomes are really difficult to measure. It's so funny because we are struggling through our critical thinking unit in my study skills class. We went through this whole process of choosing topics and I kept asking them questions that I was hoping would lead them to an evaluation of the topics and a rejection of some that they'd picked. So they worked on them a little bit now and they're hitting those roadblocks that are clearly out there. The other day I said well, you know, that's really what I wanted you to learn: The secret to a lot of what you're going to do in your classes to be successful is choosing the right topic. It's not something necessarily you're passionate about. It's something that you can find good evidence for, you can research fairly easily, and has more than one point of view. Well, I think that I kind of relate that then to the learning outcomes. We've chosen all these learning outcomes. While we were doing it, were we really thinking about how are we going to measure these within the framework of what we do in the classroom? Some of them are very concrete and easy to measure and others are not. So that's one thing I think I have learned about the assessment. So I've done like my students. I've chosen some that weren't easy to assess. They are definitely important, but how do you measure them? (Interview with Geri)

Criteria for a good topic may include accessibility of research, a body of evidence, and more than one point of view. However, a student often chooses a favorite topic without thinking about how difficult it will be to complete an assignment on it. Similarly, the faculty may not have thought about how they would measure the outcomes they picked in the context of what they did in the classroom. Thus, an important way in which assessment professionals influenced the assessment process was by framing questions about instructional effectiveness in a measurable way, applying their expertise in research design.

While Research Question 2 discovered the differences and similarities in the roles played by faculty and assessment professionals, Research Question 3 sought to determine which strategies brought them together to work on assessment projects.

Research Question 3

Which collaborative strategies serve to create common ground for faculty members and assessment professionals to work together on assessment plans?

Continuous transformation of the organizational structure to accommodate the tasks to be accomplished, gateways to collaboration, activities involving individual and group reflection, and occasions for the celebration of college successes created the common ground for faculty and assessment professionals to work together on assessment plans toward the improvement of developmental education. The multiple structures, both vertical (steering) and horizontal (coordinating), made widespread inclusion and participation possible. In Figures 2, 3, and 4 below, the QEP Committee is depicted as having a porous boundary. This enabled official members of the Committee to bring
other faculty and staff into dialog when appropriate. The structure promoted cross-functional dialog on issues affecting student success, often inviting the direct participation of institutional researchers.

Structural Transformations in Three Phases of the QEP. Phase I was characterized by a loose confederation of research groups with diverse memberships. The "main" committee, which performed a steering function for the activities of subcommittees, had representation from academic affairs (academic deans, learning resources, liberal arts & sciences, vocational programs, and academic support) and student services (enrollment, student support, testing and assessment, counseling & advising). Institutional Effectiveness staff and the VP for academic affairs were named resources and liaisons to the committee (QEP Committee Roster).

Figure 2: QEP Phase I – Research (Year 1: March-December). [Figure: QEP Committee; Policies and Practices; Data and Research; Literature Review; Best Practices; Faculty Focus Groups; Student Focus Groups.]

Phase II, which continued the Committee's work into the second year, was characterized by increasingly specialized subcommittees, whose tasks were to identify funding sources (Resources), communicate news and events related to the QEP (Communications & Technology), develop formative and summative evaluation strategies (Assessment and Evaluation), and identify instructional strategies and interventions that would serve to improve college prep success (Strategies and Initiatives).

Figure 3: QEP Phase II – Develop Strategies (Year 2: January-October). [Figure: QEP Committee; Resources; Assessment and Evaluation; Communication and Technology; Strategies and Initiatives; English/Reading; Student Support; Math.]

An emphasis upon horizontal integration in Phase III was exemplified both by the creation of the College Prep Coordinating Committee to oversee implementation and by the expected increase in participation of part-time faculty members. According to one
faculty member, while adjunct faculty members had had only minor involvement in general education learning outcomes assessment up to this point, they would most likely be involved with the implementation of QEP intervention strategies. This was because the study skills and math prep courses, major targets of implementation strategies, were largely taught by adjunct instructors. The Coordinating Committee spanned prep disciplines and student services, keeping a cross-section of the college continually talking about college prep issues. Vertical integration in the implementation phase of the QEP was accomplished mainly through the creation of the new position of associate dean for the college prep program. This element of QEP strategy, rooted firmly in the developmental education literature (Boylan, 2002), focused attention and College resources on improving college prep. The faculty won a hard-fought battle to hire an academic officer to lead college preparatory education, thus marshaling a champion to their cause. The structural changes depicted in Figures 2, 3, and 4 were synthesized from interviews and the QEP document and have been verified by the original QEP director.

Figure 4: Phase III – Implement (Year 3). [Figure: College Prep Coordinating Committee; QEP Committee (Associate Dean of college preparatory education, Chair); Planning Group for Implementation Strategy (Learning Communities); Planning Group for Implementation Strategy (Student Life Skills); Reading Coordinator; English Coordinator; Math Coordinator; Study Skills Coordinator; Individual Course Sections.]

Gateways to Collaboration. "Gateways" in this study are defined as organizational structures and processes that facilitate collaboration. While faculty and assessment professionals were just as likely to discuss collaboration, 21 faculty references to collaboration were about barriers (57%) and 16 were about gateways (43%). An example of a gateway to communication about assessment often cited by faculty was
the colloquium, a faculty meeting in which important topics impacting teaching and learning were discussed:

Every semester we have the faculty colloquium where all full-time faculty get together for meetings. And we've suggested that [at] every single faculty colloquium, we need to have a learning outcomes assessment update and we need to have a QEP update so it's always visible and always there in front of faculty. We still have a lot of faculty that have no clue what the QEP is, they've heard of it, but they are really not sure what it is – "Oh, it's just those prep students – it doesn't involve us," when it really is a college-wide initiative. (Interview with Beth)

The proportion of assessment professionals referencing barriers was just the reverse of faculty. There were only two references to barriers (33%), while there were four references to gateways to collaboration (67%) among assessment professionals. Thus, faculty tended to cite the difficulties inherent in assessment collaboration, while assessment professionals more often referred to the ways in which it worked well.

Community Learning Activities. Another way in which the College created common ground for collaboration was through its efforts to become a "community of practice" (Wenger et al., 2002, p. 1). This occurred by structuring deliberate activities over a number of years that made possible both thoughtful reflection about the college's vision statement and the celebration of that vision.

A college-wide activity addressed one particular "theme" each year, for which there were common readings. The idea was for faculty to integrate that theme into as many classes as possible. A new theme was introduced to the college each Fall. Last
year's theme was "Dignity," and it was displayed prominently on vertically hanging banners along the winding campus road. Accompanying "Dignity" were the learning themes from the three previous years. The college has plans to continue reading and exploring new topics with students, faculty, and staff members. According to Joe:

Certainly if there's a learning theme and all faculty and students have an opportunity to participate in (and many do), that helps build a community. (Interview with Joe)

Celebration. After an annual college-wide peer nomination process, a committee of peers chose the "best" nominee in each category to receive an award for reinforcing a set of common values among peers. There were awards for teams, as well as for individually achieving faculty and staff members who had accomplished a goal of great strategic importance to the college that year. During the staging of these awards at the end of Spring term, each honoree received a small grant through a college endowment. An awards committee staged these prestigious awards as the culmination of a ceremony celebrating the individual and team contributions of faculty and staff members, including length-of-service awards and retiree recognitions. The staging for the peer awards was dramatic, heightening the anticipation of the announcement by darkening the room and playing spotlights on the audience in "search" of the winner. The winner's family also enjoyed a place in the spotlight as photographers snapped pictures of the winners and their families. Commented Joe, "This is a recognition that people have put a lot of effort into things; their family contributes to that."

The success of recognition programs like these counters the argument that strategy must always be rational and top-down. For example, Mintzberg (1989) proposed
his grassroots model of strategic planning to counter the philosophy of leadership that strategies must be carefully and deliberately cultivated by a careful leader. The grassroots model has elements that mirror Birnbaum's (1988) organizational anarchy and Senge et al.'s (1994) learning organization. Richard Voorhees, a past president of the Association for Institutional Research, may have been thinking of the grassroots model when he said that an alternative job of IR was to feed networks (2003), which then grow in unanticipated directions. Applicable principles of this model are:

Strategies initially grow like weeds in a garden, they are not cultivated like tomatoes in a hothouse. These strategies can take root in all kinds of places, virtually anywhere people have the capacity to learn and the resources to support that capacity. Such strategies become organizational when they become collective, that is, when the patterns proliferate to pervade the behavior of the organization at large. (Mintzberg, 1989, pp. 214-216)

This recognition program thus helped to solidify the message to faculty and staff members college-wide that everyone had a part to play in the college's successes and that innovation and collaborative effort would be rewarded.

While Research Question 3 sought to determine organizational processes and structures that brought faculty and assessment professionals together in collaboration on assessment projects, Research Question 4 instead explored processes and structures that kept faculty and assessment professionals from successfully collaborating.

Research Question 4

Which strategies cause estrangement between faculty members and assessment professionals?

Themes that explained estrangement between faculty members and assessment professionals included academic structure, barriers to collaboration, and collaboration partners. These dimensions are discussed in detail in the paragraphs and tables that follow this introduction.

Academic Structure. Although the flexible structure of the QEP brought faculty and assessment professionals together into direct dialog, the hierarchy of management teams within academic affairs served as a filter for information flowing from institutional research. Without further examination, the horizontal and vertical lines of communication within the academic structure could be seen as a formula for engaging the academic community in dialog about student success.

Information on the structure of communications channels within academic affairs came mainly from the single interview with an instructional administrator, Michelle. According to her, faculty received data on student success in a number of ways. First, administrators shared important findings with faculty directly through a colloquium each semester. Second, administrators shared data with the small group of direct reports in the division. In these meetings, information was used as a springboard for discussion about what could be done to improve areas that were troublesome. Third, administrators discussed data in weekly meetings of the Learning Management Team, "four deans, directors, kind of top-level management in instruction" who reported to the academic VP, said the Dean. Fourth and finally, administrators discussed data within the Learning Response Team. "We meet weekly with our VP of
[Academic Affairs], but quarterly, we have what we call our learning response team and that includes all of our department chairs (we call them program facilitators), and that's vocational programs, our workforce programs, continuing ed, so it's about 30 or so folks that sit around a table. The management team helps to decide what things we want to bring to that group for discussion. Then information sharing, as well, but a lot of decisions are made through that group."

The Dean described the process of information sharing as one in which Institutional Research communicated information to her directly. The information was then disseminated through the above-described information channels:

For the past three years we've done a really good job of sharing that information with everyone, but especially our learning management team. I get reports all the time that show how well our students are doing within our prep classes, what percentage of students are retained or successful or passing at that highest level in our prep classes, so I get to review that. Our [IR] office sends me an electronic copy and then I'll also have a hard copy later. Other types of surveys on students, CCSSE, just all kinds of surveys that we do institutionally with students in different programs, that information's always shared back with the learning management team. And then what I try to do is when I have my small group together, my division together, is making sure that they're aware of it – seeing what's going on, what the numbers look like. That generates discussion: "Ok, this is an area we're not doing so well in, so what do we need to do? This is an area that we're doing well, so how can we do even
better?" So I think just recently – well, three years, we've done a much better job of getting that information out. We have faculty colloquiums each term and that information's shared there, which is very positive because I think sometimes in some institutions that can be a disconnect. (Interview with Michelle)

In this manner, the Learning Management Team decided what information to share and which decisions needed to be made within the quarterly meetings of the Learning Response Team. Academic administrators thus filtered much of the information seen by the faculty through the organizational hierarchy. This lack of standing within academic affairs, and the inability to negotiate an agenda on standardizing course outcomes, also hampered Institutional Research efforts to complete an effective assessment plan for general education outcomes, as Joe commented:

I would feel more comfortable if we also had more embedded assessment, that we saw that some of these outcomes identified in syllabi, that they identified how they were assessing them at the course level, and that there'd be some means of collecting [samples of student work]. Whether they were used for grading, that's not important. It's that we need that information in order to improve instruction. (Interview with Joe)

The restriction on information flow within the academic structure, however, did not occur in the ad hoc, ever-evolving QEP, in which the structure was created specifically to promote cross-functional dialog on issues affecting student success and which often invited the direct participation of institutional researchers. The elaborate structure of communications channels within academic affairs thus facilitated internal
decision-making and communication, but excluded institutional researchers from direct dialogs with faculty on teaching and learning issues. The exceptions to this were structures created for specific tasks, such as general education assessment (simply called "learning outcomes") or the QEP, a plan to improve developmental education. In these structures, assessment professionals worked with a limited group of faculty. Beth, a veteran faculty member who had worked with institutional research staff on assessment, had positive experiences working with these staff members:

With learning outcomes, when things got a little rocky or whatever, our [institutional research] staff have been very helpful. They've been very supportive and very open to helping with problem solving and working with us to help us to be as effective as we can. (Interview with Beth)

While working relationships between faculty and assessment professionals were positive, their daily work life kept them in separate environments with limited two-way communication opportunities.

The QEP structure was linked to academic communications channels through membership on the QEP steering committee (26 individuals) by some of the key people in academic affairs (Dean of AA Programs, Dean of Library Services, program facilitators, and faculty). Although fluid in its developmental stages, the QEP eventually became anchored in the College organizational structure during the implementation stage through the newly created position of Associate Dean of College Preparatory Programs (reporting to the Dean of the Associate in Arts Program). The Office of Institutional Research bore responsibility for conducting QEP assessments, thus periodically
measuring and reviewing the effectiveness of planned intervention strategies (such as learning communities).

Barriers to Collaboration. "Barriers" in this study are defined as those structures and processes within the organization that obstruct collaboration. Faculty often cited "time" as a barrier to information sharing about the results of assessment. For example:

I didn't realize it until you asked these questions, and I realized how little we share. It's such a great question. It's going to set into play some better ways that we can effectively communicate. We could do plenty of things. We could have informal meetings where we talk about this. We could have formal sessions where we hash out ideas, brainstorm ways to improve our assessment techniques. We really don't do that, we really don't. I think at the community college, you have to remember, we teach five. At the university level, you teach two. We teach five. And I think because of that, we don't have as much time to share and to think. And I miss that. I don't think at the community college level it's as important, at least it doesn't seem to be. (Interview with Mary)

Collaboration Partners. Because of the limited number of assessment professionals in proportion to faculty members, assessment professionals collaborated
with a greater mix of faculty, administrators, and staff than did faculty members. For example, while all three assessment professionals collaborated in mixed groups, only four of the eight faculty members interviewed (50%) collaborated in this manner. Faculty members more often mentioned collaborations with other faculty, if they mentioned collaborating on assessment at all.

While Research Question 4 investigated the sources of estrangement that kept faculty and assessment professionals from collaborating on assessment, Research Question 5 sought to determine the role of assessment professionals in using assessment results for improvement.

Research Question 5

What role, if any, does an assessment professional play in determining how the results of student learning outcomes assessment will be used for improvement?

A concept that developed in the analysis of this study is "analytical response." The concept is defined as the role one plays in interpreting, evaluating, and using the results of outcomes assessment. A low analytical response would be to acknowledge the results of outcomes assessment. An example of this would be faculty and staff members discussing the results of outcomes assessment at a college-wide meeting. A moderate level of analytical response would be to recommend changes to curriculum and instruction based upon the results of outcomes assessment. An example of this would be an instructional administrator making plans to change instructional strategies. However, a high level of analytical response would be to implement and monitor such changes. An example of this would be the decision of the QEP Committee to implement a learning
community model as one strategy to improve student success. Fred, the faculty member who worked with Joe to refine uses of the longitudinal tracking system, provided an excellent example of both the limited role of assessment professionals in effecting change in developmental education and the vexation of faculty members who had the authority, but not the resources, to do so prior to the development of the QEP:

I contacted [Joe] and said we need some measures, I prefer longitudinal measures, for developmental students. We're not succeeding – what can we do? At that point we began to put in place some collaborative learning. We then put in place a minimum number of activities. It started with a unified syllabus for this particular course so that everyone is on the same page. It led to a small workshop in which we called in not only the permanent teachers but the adjuncts. We said "Can we all get together on the syllabus, on the nature of the non-punitive grading system?" We initially did achieve a higher retention rate with the next cohort, but since everyone was being asked, just to be honest, too many hats to wear, that faded in the English department. We did not have any unified developmental education program. (Interview with Fred)

While Joe provided the data, it was up to Fred to formulate and implement instructional strategies to the limit of the resources available to him. Thus, while assessment professionals acknowledged or recommended changes to instruction based upon the results of outcomes assessment, academic leaders and faculty members implemented and monitored the indicated changes with the resources (e.g., time, money, leadership) provided by College administrators. Faculty and
assessment professionals thus parted company in the decision-making role within the student learning outcomes assessment process. The QEP structure, in which the IR officer was defined as a staff resource rather than a member, exemplified this division of labor.

While Research Question 5 sought to determine the role of an assessment professional in using the results of assessment for improvement, Research Question 6 investigated whether faculty and assessment professionals had become more alike in terms of their assessment roles since they began collaborating on assessment.

Research Question 6

Have faculty members at the college become more like assessment professionals, and assessment professionals more like faculty members, in terms of their assessment roles since they began collaborating on student learning outcomes assessment?

Faculty members and assessment professionals alike greatly valued measurement expertise and reflexivity, the ability to reflect upon one's practice. Evidence of the capacity for self-reflection could be found in the generous responses to the thought-provoking questions posed in individual and focus group interviews. Participants really reached down to find answers to the questions posed by this researcher in these interviews. Often a more complete answer to a previous question would emerge in later conversation, after the participant had time to think about it.

Faculty talked about the desire to pursue professional development toward increased expertise on measurement topics or emphasized how valuable their courses in measurement were in developing assessment instruments. On the other hand, assessment
professionals, most of whom had significant teaching experience, talked about their studies of best practices in teaching and learning. Also, both assessment professionals and faculty members indicated a high degree of reflexivity, the tendency to reflect upon one's actions for the purpose of learning from them. Virtually all indicated some amount of this characteristic in interviews.

For example, in the following passage, Fred assessed the need to change instructional strategies within the English department and examined the reasons for the slow progress in making that change prior to the QEP:

The place where as a subcommittee I believe we were the weakest going in (again including myself) was in the level of assessment. I learned during the QEP process that what I had been using was OK, but that in fact, [we needed] to go into particular programs, particular faculty, and say you need to change your teaching strategies and we need to be on the same page rather than trumpeting academic freedom. The literature reports are saying this over and over and over. It was during the QEP, we were talking back into our departments, and for the first time we had 85-90% cooperation from the faculty, and what they wanted to know, of course, on a political basis up front was "Is this the real thing?" "Is this serious?" But even that part of it came from their professionalism; part of it came from the fact that they didn't want to be involved in a developmental program that was not successful. They just didn't, and that came from the ongoing data that said you could be doing better than this. So they learned and I learned that we
were going to have to be even more cooperative within [our] teaching strategies. (Interview with Fred)

Joe, on the other hand, took a more institutional focus in his reflections, thinking back about the reasons why the faculty may have selected college preparatory education as the focus of the QEP:

The origin of that came about from a faculty colloquium when there was a general discussion, and I don't know that it was so much the longitudinal tracking system that we did with [Fred], as it was a general awareness that we had a number of students at the college who were not ready to do college level work. Part of that is obvious from the number of students who are required to take college prep courses, but in addition to that was the feeling among some of the faculty that students who entered their courses were not ready for college level learning, whether it was the previous work with the learning outcomes "matrix" or the work of the communications faculty with the longitudinal tracking of students who successfully completed the college prep to see how they did. It wasn't just the communications faculty or the math or those involved with college prep courses, but a general feeling that students needed more preparation to do college level learning. (Interview with Joe)

Although both faculty and assessment professionals shared a desire to improve their knowledge in teaching and learning and exhibited the ability to reflect upon the practice of assessment, there was little evidence that their roles were merging. For example, the decision-making and implementation role of academic affairs in outcomes
assessment was to refine criteria for determining a student's level of proficiency in a faculty-defined competency. However, without good instruments for authentic assessment, the outcomes assessment process would be limited to measuring proxies for learning like grades. Assessment professionals helped faculty by framing the research questions in a measurable way. This process, however, was often consumed by the struggle to find common language to communicate about learning outcomes:

From a non-assessment side, it's almost always a frustration (not a discomfort), but a frustration (in talking with assessment staff) because we think we're measuring one thing, and they think they're measuring another, and we don't always communicate well. So even the development of the cohort for the QEP was PAINFUL, because we were hearing "Well, we can't measure that" and we were saying "How can we not measure that – that's what we have to measure." And so frequently assessment and teaching don't talk the same language, and so there's lots of misunderstanding with that. It's a lack of vocabulary. When I'm talking with other faculty members we're usually talking the same vocabulary. It's with measurement and assessment people that we're usually talking a different language than they are. [Differences] had to be negotiated. We had to make ourselves clearer, because assessment people come to us thinking we're talking emotion and theory and they're talking specific data. So we just have to keep defining for one another what we're talking about. (Interview with Terri)

Other evidence of the different language used by faculty and assessment professionals involved in general education outcomes assessment was the name each had for the group. While Jeff, the Assessment Coordinator, usually referred to the group as the "Learning Outcomes Assessment Task Force," faculty members called it simply "Learning Outcomes," as if the group and the task were the same.

Having overcome the language barriers to frame outcomes measurably, assessment professionals then facilitated and coordinated the logistics of the assessment process and provided an interpretation of the results (Learning Outcomes Assessment Task Force Report). However, it was subsequently the purview of academics to develop and implement changes to instructional strategies.

While Research Question 6 investigated whether or not faculty and assessment professionals had become more alike in their roles since they began collaborating on assessment, Research Question 7 sought to determine the qualitative differences or similarities in those roles, if any.

Research Question 7

If so, how have they become more alike?

In acquiring teaching and learning and measurement expertise, faculty and assessment professionals have been on similar professional development paths. Faculty members and assessment professionals mentioned similar types of professional development and training, both on- and off-campus. Faculty cited John Roueche, Patricia Cross, Hunter Boylan, Trudy Banta, Peter Elbow (writing as a process of discovery), Jack Mezirow (transformative learning), Vincent Tinto, and Skip Downing as assessment experts
whose advice they had sought. For example, Mary, a faculty member, talked about the way she had applied her professional development to improvements in classroom assessment:

I've improved my assessment tools. As teachers, you know we're stuck with those two instruments, "the essay and the multiple choice objective test." But really, there are so many methods of evaluation and I've gone to portfolio assessment in almost all of my classes, including prep English, and the reason I do the portfolio assessment is I've read so much of Trudy Banta and other experts in that field who said portfolio is a wonderful way to go because of our holistic view of how a student's doing, how well a student is progressing. (Interview with Mary)

Likewise, assessment professionals cited Bob McCabe (2003) and Bill Blank, a USF faculty member who had taught an on-campus professional development activity on contextual learning for college faculty and staff members. Joe talked about the lessons from that session that he put to use in a class he taught part-time:

After I went to that little workshop, in the class that I taught this term (a computer applications class), I gave the students an alternative to the standard cases at the end of each chapter, [a chance] to come up with their own application. So, you (students) have to meet the objectives of this standard case, but come in with your own case. I had a student who had a large paper route. He designed a spreadsheet application to help him analyze his profitability at different parts of the route. A student who's on our tennis team did an Excel
application to look at tennis scores of the team. Anyway, they used this model that said, I can meet all the criteria of this standard case and come up with my own [case]. The other part is to have them share that with the class. I want to do that earlier in the term the next time I teach it, because then you start more of a group interchange. People figure, "Oh, this person does something, I can learn from them." (Interview with Joe)

Thus, faculty and assessment professionals alike supplemented their core knowledge in student learning outcomes assessment with both measurement and teaching and learning expertise. However, while knowledge of teaching and learning was essential to applying measurement expertise to learning problems, one of the limitations upon the analytical response of the Assessment Coordinator came from the dual roles demanded by Jeff's job description:

MAJOR RESPONSIBILITY: To provide logistical and research/assessment support for the Quality Enhancement Project and the Learning Outcomes Project; to represent the college as its legislative liaison. (Assessment Coordinator Position Description)

His role as legislative liaison and other duties sometimes necessitated Jeff's presence elsewhere, as indicated in conversation with Geri, a faculty member involved with the QEP implementation:

I don't know that I've ever had call to discuss the results with [institutional research]. It's usually in paper form or we get it in a faculty colloquium or something. Actually there is one person from institutional
[research] who is on our group. He hasn't been there in several meetings. He was given the dual responsibility for another campus for a while. The person in charge left for another job and so he had to wear two hats. Our hat kind of slipped off, so he hadn't been in the meeting in a while. (Interview with Geri)

Both his role as legislative liaison and his service as an interim campus director placed Jeff squarely in the role of administrator, sometimes in conflict with his role as a measurement consultant and manager of QEP assessments. Thus, while assessment professionals shared the same professional development interests as faculty, the differences in their assigned roles within the organizational structure caused them to place emphasis on different tasks at times.

While Research Question 7 investigated the qualitative changes to faculty and assessment professionals since they began collaborating on assessment, Research Question 8 explored the perspectives of the interview participants in identifying the most promising assessment approaches.

Research Question 8

From the perspective of respondents, which assessment approaches have shown the most promising results?

Faculty and assessment professionals talked about collaborative processes that evolved out of their desire to help students succeed. The college's crowning achievement was developing a Quality Enhancement Plan for the improvement of developmental education based upon best practices in the field. The details of these processes show the
way toward improving collaboration on the use of assessment results, not only for the improvement of developmental education, but for all outcomes assessment efforts at Sunshine State Community College.

Faculty members and assessment professionals alike agreed that the QEP was an exemplary plan for college preparatory success. The process began with a complete literature review and incorporated many strategies that had been successful at other institutions. Scarcity of funding, however, necessitated compromises, and the college could not afford everything QEP members wanted to do. However, the process that the college went through, building capacity linked to resources, was the way to make a successful program (Boylan, 2002). Remarked Fred, a veteran faculty member: "This seems to be a very solid response to SACS, and I'm proud of it."

During the development of the plan, when faculty lacked understanding of the nature of the problem at hand, they spent time in professional development. They talked to experts in their fields, read books, attended seminars, and discussed what they had learned within their cross-functional groups. This initially occurred in the large QEP group assigned to research best practices in developmental education in Phase I of QEP development. After this faculty learning community had decided upon the best strategies the college could afford in Phase II, they created new learning groups in an effort to implement learning communities and to strengthen study skills courses in Phase III. Their success in moving these efforts forward came not only from their own abilities to reflect upon what they had learned, but from stretching this ability further to collaboratively reflect and decide upon courses of action that would work within the college's fiscal constraints.

One example of a successful collaboration was the development of the college's learning communities model. Dina led a small group of about five individuals in planning the model for the QEP. Because they needed more information, they brought in a consultant, a successful practitioner of the learning communities model from a college in North Carolina. They explored the various models used at other institutions that allowed students to take pairs of related courses within a cohort, and emerged from that meeting with their own learning community model. Dina also consulted with institutional research to select the population for the initial learning community cohort group. In this case, a practitioner was able to lend instructional expertise and institutional research was able to lend its data analysis know-how to make the effort work.

Occasionally, a faculty member played a role that was more akin to that of an assessment professional in terms of participation, communication, and involvement. For example, Geri talked about her role as a faculty facilitator among student services staff in enhancing enrollment in student learning skills courses, while simultaneously working with faculty to develop a common syllabus. Geri thus played an important role in leading these groups to consensus on important details of the QEP implementation.

Well-Qualified Faculty. Human resources provided at least one key to collaboration on improving developmental student success. According to the academic administrator interviewed, one of the reasons reading and writing outcomes had improved over time was that the Communications department was composed of an exceptionally well-qualified, cohesive group of faculty. According to Michelle, Dean of the AA Program:

From my perspective, when I look at the communications department, it's a very cohesive group. The full-time faculty there really, really take ownership of the courses and they're very responsible for getting part-time people that are teaching in their department. Very, very, very high expectations and standards for folks teaching in prep reading, definitely, and in the prep math. For instance, faculty who teach prep classes are only required by SACS to have a bachelor's degree. Our prep reading and English folks tell me all the time if we can find that Master's person for teaching prep reading/prep English, that's what we want. And I would say they might have two people teaching prep classes that only have bachelor's degrees and the rest have Master's level. Very well-qualified. They come together with common syllabi. We know that faculty have academic freedom, but they make sure that they're all on the same page with what the students are being exposed to in the classroom. (Interview with Michelle)

Thus, one of the college's success strategies was to focus upon the quality of the faculty who teach developmental courses. According to Mary, who straddled both prep and non-prep courses, faculty who taught a mix of developmental and non-developmental courses had a better idea of what it took for a student to succeed in credit-level courses. This group's cohesiveness meant that when Fred asked them to make changes to instruction suggested by the results of longitudinal tracking, faculty members were willing to put in the extra time and effort.

Assessment Support. The Community College Survey of Student Engagement (CCSSE) provided a means of assessing support for developmental learners and determining the success of the above-mentioned improvement strategies. Student attitudes toward the learning environment were periodically assessed through the CCSSE. The College has participated in the CCSSE since 2002, benchmarking student support, student-faculty interaction, student effort, academic challenge, and active and collaborative learning against itself and against state and national scores. The Office of Institutional Research analyzed and distributed CCSSE data, linking it with the success of college prep reading and writing students (Longitudinal Tracking System) in partnership with faculty members teaching developmental courses. Longitudinal tracking was incorporated into QEP assessment methods, as well as assessments of outcomes of developmental learning by intervention strategy (such as learning communities). The Assessment Coordinator (in the Office of Institutional Research) was responsible for coordinating these assessment efforts (Position Description).

However, faculty members reported a fragmented view of the college's assessment efforts. By examining their responses to individual interview questions, this researcher was able to probe the "assessment" intersection between the roles of faculty and assessment professionals. Analysis of the data showed that none of these individuals alone seemed to have a complete picture of what "assessment" at the college looked like. Each one seemed to have a unique window into the process, although a couple of veteran faculty members had a larger view than others. Mary, for example, regularly used assessment to inform her teaching, but indicated that faculty rarely spoke to one another about assessment. While she had been highly involved in general education outcomes
assessment and taught some prep classes, she was less informed about the details of the QEP development and implementation.

While this section, "Findings on Research Questions," sought answers to the research issues of the case, the next section presents findings related to topical issues, secondary issues that bear a relationship to major findings of the study.

V. Findings on Topical Issues

While the previous section discussed the primary research questions or "issues" of this case study, the following section will elaborate upon "topical" findings, relating information that provides background and context for greater understanding of the findings on the research questions. These findings came from analyses of Interview Questions 2, 9, and 10, and were corroborated with documents, where applicable.

Interview Question 2 (Goal)

Are there goals for developmental education other than cognitive skills dictated by Florida state-mandated testing?

In addition to passing Florida-required exit exams, interview participants mentioned developmental education goals such as the general education outcome "self-direction," student affective development (e.g., motivation), and success at the next level. These themes are discussed in detail in the paragraphs that follow this introduction.

Self-direction. Maida's view of how to define student success began with student-defined goals. She explained that when students arrived at college, they had vaguely defined goals. However, as a faculty member she could help students to more clearly
define and focus upon their goals so they knew where they were going and what it would take to get there. The diagnostic tests administered at the beginning of the term allowed students to know where they were on that pathway to success.

Thus, while there were no standardized tests for measuring the outcomes of study skills courses, the group of faculty developing the curriculum for the course was studying outcomes as they related to students' long-term success. The goal of this process was to ensure that each of those critical elements was included in each study skills course taught. Geri, a study skills faculty member, used an online assessment by Skip Downing, which gave helpful feedback to students as they looked at their own performance. Students would try to identify where they were, figure out where they ought to be, and determine why they were not there. The self-assessment addressed things like responsibility, inner motivation, and organization. Students received a score, upon which they could then reflect. This ability to self-monitor behavior that led to success was one aspect of the general education outcome "self-direction" (Learning Outcomes Report, 2004-2005).

The outcomes and assessments in Geri's study skills course took an authentic form. For example, instead of teaching students about note taking by having them memorize guidelines for taking good notes, she assigned them the task of taking notes in another class. A description of her approach to authentic assessment follows:

I don't want them to memorize from me what's a good format for note-taking. Instead, their assignment will be, take notes in your other class and bring them in and let me see it. If they do that and if I can see that they've applied that, then they've done well. If they bring in something that nowhere resembles anything that anybody could study, I
give it back to them and I have them revisit that unit and redo that again. We just keep hammering away at it. That's a luxury though, because I only have 15 students in that class. I don't know if I could do that if I had 30. I think that one of the keys for me to be able to do those kinds of applied things is because it's a small class. I had that luxury, and I could afford to grade things three times. (Interview with Geri)

Geri also used this approach to teach students how to go to an academic advisor to obtain and subsequently interpret a degree audit, and to encourage their engagement in student activities.

Affective Development. According to Beth, student success in reading preparatory courses was the result of much more than changes in students' comprehension and vocabulary. Ostensibly, her counseling training influenced her to view students' progress holistically by looking for changes in their attitudes, perceptions, and behaviors. Examples of such progress included learning to get to class on time and learning how to solve transportation problems to get to class regularly.

In addition to QEP and general education assessments, the college developed a number of feedback mechanisms to measure student affective development, engagement, and satisfaction with College services. These included the Community College Survey of Student Engagement, the Noel-Levitz Student Satisfaction Inventory, and faculty focus groups. The TRIO program (in Student Support Services), available through a federal grant, used strategies for many years that made a difference in student success rates for program participants at the college. In the TRIO program, a small population of non-traditional students developed new attitudes and skills while progressing within a
"learning community." The TRIO students achieved higher success rates than other students placed into developmental courses. This strategy was later adopted by members of the QEP toward the improvement of college preparatory success in a larger group of students.

Success at the Next Level. While faculty members often had to decide when they needed additional data to inform the learning outcomes assessment process, a consultant-written longitudinal tracking and reporting system used enrollment and completion data to produce volumes of data on student learning outcomes. The greatest challenge in analyzing these results was determining which data were relevant and which were not. Ultimately, faculty found that the key variables were student persistence, degrees awarded, ethnicity, and age. Further, the initial six-term follow-up process was eventually extended to eight terms. This two-year discovery process, in partnership with Institutional Research, was eventually worth the effort, as it narrowed the selection of the data measures for the QEP to those most relevant to college prep success. Fred described it this way:

[The longitudinal tracking system] was so intense in terms of what you could collect – after [Joe] and I sat and agreed on the amount of data we would collect, I then spent the next two years trying to figure out what I didn't want. He could slice that cake in so many ways. So it was actually a system devolution, so to speak. We came down to the basic persister/non-persister. We came down to degrees awarded/non-degrees. We came down to success in entry level courses and we came down to simple demographics in terms of ethnicity and age. Later, I believe that led us to
158 select the right ones for the QEP, ‘cau se [Joe] and I had been through that experience. But the first paper report was this thick! It was only six semesters, we later went to eight, so it was a process of discovering what we really needed, and that was si mplification. (Interv iew with Fred) Several measures of student success were discussed during the formation of the QEP, including course grades and test scor es. However, the group concluded that success in the subsequent course was the best possi ble evidence of student success. Jeff, an assessment professional, commented that th e adoption of this measure held great potential for aligning curriculum in prep and non-prep areas. The measures of student success designed into the QEP were primarily retention and success in courses and in next-level “gateway” courses, associated w ith various intervention strategies (such as learning communities). What the college mo re recently learned about planning for student learning outcomes assessment came ma inly from on-site evaluators, who were generous in sharing their expertise. Th e on-site evaluators offered “analysis and comments” which have been incorporated into the plan to strengthen the QEP assessment. Student success in the Associate in Arts (AA) program was also measured by success at the next level, usually in the form of upper division grad e point average (GPA) at Florida universities. For example, when the GPA of AA transfers exceeded that of native university students, administrators saw evidence that curriculum was aligned and the content of the courses was appropriate Michelle, the Dean of AA Programs, explained it this way:


To me success is when students come here and they transfer to a university and we track and see how they do the first term. We can see how they do when they transfer. Right now, I know for a fact our transfer students’ GPAs are just a little bit higher than the native students’. So to me that’s so important to see that they’re successful at the next step, because I think that’s what we’re here for as faculty – to prepare them, in general. Now if we want to talk specifically about reading or English or communications, if they’re not successful regarding the skills they need in reading and English and becoming good writers and proficient readers, [it] cuts across all of the disciplines. To do well in Humanities, you have to be a good reader and a good writer, and to do well in your science classes. When I see they’re successful at the next level, knowing the textbooks and the additional readings that students have to do in courses once they get to their junior or senior level, I see that students are succeeding and I can say “Hey, that student was in our prep program” – that’s success. (Interview with Michelle)

Over the last three years, Academic Affairs has increasingly relied upon the Office of Institutional Research to provide progression data to track student success at the next level.

While Interview Question 2 investigated the goals for college prep students in addition to state-mandated cut scores, Interview Question 9 sought to determine whether part-time faculty participated in outcomes assessment.


Interview Question 9 (Engagement of Temporary Faculty)

Do part-time/temporary faculty participate in outcomes assessment?

Participation in student learning outcomes assessment was limited to a handful of part-time faculty, typically those with an interest in curriculum governance and time to spend on campus during the day. As part-time faculty members usually came to the college only when they taught classes, participation in assessment activities was very difficult for them.

While an annual recognition program had been developed for highly participating adjunct instructors, some interview participants felt that more could have been done to involve them in curriculum development. Part-time faculty members participated in general education outcomes assessment by volunteering their classes for the pilot, but had not to this point served as general education assessment committee members. Also, a few adjuncts in communications and mathematics attended departmental meetings and had thus been involved in curriculum development discussions.

Participation in Developmental Education Improvement. Most faculty interviewed expressed the belief that part-time faculty would participate in the developmental education assessment activities planned for the QEP. The large proportion of part-time faculty teaching college prep courses, especially in study skills and math, necessitated this involvement. For example, of all college preparatory course sections (reading, writing, and math) at Sunshine State in Fall 2003, 63% were taught by part-time faculty (Windham, 2005, Number and Percentage, pp. 1-5). Fred assessed part-time faculty participation in the QEP this way:


They were asked to come, and they were invited to all the group meetings. We had one on our [QEP] subcommittee that I kept in communication with. If your retention rate between your full-time and part-time [faculty] is significantly different, then you need to have a faculty mentor on assessment. You need to bring your part-time people into it. [This is] another place where everybody needs work, ourselves included. We tried to include them. We did not include them as much as we should have. (Interview with Fred)

While considerable efforts were made to include part-time faculty in assessment activities, Fred felt that more could have been done to secure their participation through a systematic engagement process.

Participation in General Education Assessment. Faculty members who had previously led general education outcomes assessment activities said that part-time faculty members were expected to participate again in the current year by volunteering their classes for general education outcomes assessment. Many part-time faculty members had participated the previous year by offering their classes during the pilot in Spring 2005.

Recognition and Rewards. According to one faculty member, part-time faculty members had an opportunity equal to that of full-time faculty to participate in college committees. The few who took advantage of that opportunity were likely candidates for the “Adjunct Faculty of the Year” award, announced at the end of the Spring term at the college-wide awards assembly. Near the end of the academic year, the VP for Academic Affairs sent out a request for nominations for this award. In determining the best


candidate, an awards committee reviewed nominations, looking at factors like high levels of committee participation and favorable student evaluations.

While part-time faculty members participated to the extent of their availability and interest, few became as connected to curriculum processes as full-time faculty members wished. Retired part-timers sometimes participated in committees and other on-campus events. However, those with other jobs, such as the dean of a local high school, simply did not have the time or sufficient incentives to participate. It was because of part-timers’ lack of participation in curriculum development processes on campus that some full-time faculty preferred that the college simply hire more full-timers.

While Interview Question 9 sought to determine whether part-time faculty members had participated in assessment activities, Question 10 investigated how developmental educators might better respond to the SACS requirement to use the results of student learning outcomes assessment for improvement.

Interview Question 10 (Use of Results)

How might developmental educators improve their response to this accreditation requirement?

“Use of results” of outcomes assessment activities has been one of the recommendations for improvement most frequently dispensed by SACS review committees since the approval of the revised standards in 2001 (Cleary, 2005). Although Sunshine State Community College had been proactive in creating a process to assess general education outcomes since 1998 and had created an exemplary Quality Enhancement Plan, not all College programs had been able to demonstrate the use of results for


improvement. Sunshine State Community College thus received two recommendations on the use of results (Comprehensive Standards 3.3.1 and 3.4.1) following its site visit for Reaffirmation of Accreditation (Key informant, personal communication, April 26, 2006). These standards are cited below.

3.3.1 The institution identifies expected outcomes for its educational programs and its administrative and educational support services; assesses whether it achieves these outcomes; and provides evidence of improvement based on analysis of those results.

3.4.1 The institution demonstrates that each educational program for which academic credit is awarded (a) is approved by the faculty and the administration, and (b) establishes and evaluates program and learning outcomes. (SACS, 2001, p. 22)

The question of how developmental educators could more effectively use the results of assessment was therefore made a part of this research to help others in answering the demands of their regional accrediting agencies. The analysis of Interview Question 10 is provided below.

Maida, a part-time faculty member, commented that to obtain more useful results from student learning outcomes assessment, developmental educators should begin with proven instructional strategies that work with specific populations of students:

Engaging in long-term research for courses, figure out what has worked in other areas, see what methods other schools are using, see how well it works for them. There’s a lot of research out there on certain


strategies that work for certain groups of people. And here we have a variety of students from all kinds of areas…. There is a research effort to show that…international students learn best with these [strategies], mainstream learn best with these [strategies]. So if I’m going to be planning a lesson, a reading lesson for a class of 25 where seven are international, I know very well that I do not expect them to use maybe a whole language method in arriving at meaning. I know that I have to break it down for them, teach main idea by the paragraph instead of telling them, OK, read this story and tell me what you think…. The international student needs to be assisted point to point, paragraph to paragraph, because he doesn’t have the level of proficiency, he doesn’t have the vocabulary, he doesn’t have enough of that stuff in his skill map to be able to transfer and do that. So those are the areas that I would like them to look at, and there is research that supports that. (Interview with Maida)

Maida advocated that developmental educators use instructional strategies proven through educational research and then collect evidence of their local effectiveness.

Other advice came from Fred, a veteran faculty member: To ensure that faculty members who teach developmental courses use the results of outcomes assessment for improvement, faculty members must first value the results of the assessment, believing in the measures’ credibility. Second, while most faculty members will want to know what they can do to improve students’ chances for success, others will resist, citing issues of “academic freedom.” Third, faculty members’ time is very constrained by an already heavy workload. To justify asking faculty members to collect data in conjunction with


outcomes assessment, leaders must explain why it is necessary and how it is going to be used. Completing this contract necessitates using the information collected for its advertised purpose.

Information Sharing. Carolyn, an assessment professional, commented that in order to better use assessment results to improve college preparatory instruction, those results needed to be shared often in a routine process. Changes to curriculum in courses and programs then needed to follow:

I think, bottom line, is we have to start sharing that often, regularly, routinely; it has to be discussed. It really has to be then utilized in classrooms and in program development, so I think that’s where we have a real weakness. We have those results out there and everybody talks about it once a year, but then nothing really happens. (Interview with Carolyn)

However, according to Carolyn, the development of the Quality Enhancement Plan changed that way of doing assessment in developmental education. As progress reports on the QEP were scheduled to go to SACS for review periodically, the developmental education assessment process took on a higher institutional profile and priority than it had before. Thus, there were plans to assess routinely, look at the results, share results widely, and then make program improvements based upon those results.

Involve Adjunct Faculty Members in Assessment Activities. According to still another faculty member, Beth, in order to use assessment results to improve instruction, developmental educators should:

1. examine assessment results,


2. determine how the information should be integrated into the curriculum, and

3. decide how to integrate it into individual classroom instructional strategies.

These steps often fail to occur, especially for adjuncts, because many are high school teachers who are only on campus long enough to teach their classes. While information sharing about college-wide best practices has not always been available to adjunct faculty, all faculty members can participate in the professional development activities most relevant to their practice. Faculty members are also encouraged to visit each other’s classrooms to pick up new techniques. Further, for the QEP initiative focused upon study skills, a retreat was planned, as well as an organized, continuous training and communication program for adjunct instructors.

Discipline-Specific Application to Curriculum. Developmental educators might be able to use the results of student learning outcomes assessment more effectively if they had more time to focus and reflect upon them, thus better understanding the implications for teaching their specific courses. One strategy with the potential to convert assessment results into curriculum changes would be to talk about the results more often in disciplinary subgroups. Geri, a study skills faculty member, expressed the opinion that while outcomes were frequently discussed in the large faculty colloquium group, most faculty members didn’t think about teaching in global terms:

I think that if we really understood their connections with what we were doing in our classes, and maybe use those as workshop and training tools. You know, here are the areas that we’ve got to address. Within our interaction with students, how can we do that? What best suits me in my study skills class, what best suits me in my wellness class, what best suits


me in my teaching of history class. I really think we need to do that. A lot of what we do with outcomes, we talk about it within the big huge colloquium group. I understand there are global things that we all could address, but I think most of the time faculty members don’t think like that. I think most of the time they really are kind of like “What does that have to do with my history course?” I really think because they’re kind of wrapped up in their subject. I think it would be better if you said, OK, so here is one of our outcomes and here’s how we did it and how we’d like to do it and how we’re evaluating it. So how does this fit into your course, what elements do you have, how can we enhance those elements, and how can we share them between faculty members? I think that would be more useful than talking about them in the big group. (Interview with Geri)

What would provide more improvement through curriculum development, Geri said, would be for someone to define the connections between assessment results and disciplinary teaching, to develop training tools, and then to hold a required workshop for faculty in the discipline on how to incorporate these new teaching strategies within their courses.

Developing New Habits through Professional Development. Difficulty getting into new habits may be preventing developmental educators from making full use of assessment results for learning improvement. While measurement and assessment are important to the college’s culture and funding, Terri expressed the opinion that college preparatory faculty found it difficult to adopt measurement habits.


Developmental educators tend to be a very emotional bunch. They tend to be the kind of people who are in human services and want to help people. So specific measurements are not something they’re accustomed to being defined by. I think they have gut feelings. They may be able to tell you that this many people passed the state-mandated test at the end of the course, but they can’t always tell you how they got from A to B. (Interview with Terri)

Through practice, many faculty members with long service in teaching have come to trust their “gut feelings” about what makes students successful and are reluctant to adopt evidence-based processes. Professional development and encouragement may slowly improve the adoption of measurement-based teaching and learning.

While this section of the chapter has investigated topical issues of the case in order to explore their interplay with research findings, the next section provides a summary of each of the eight research questions and three topical issues explored in this study.

VI. Chapter Four Summary

This section provides a summary of major findings for the research questions, the instrumental issues in the case, and the topical questions, secondary issues that often interplay with research issues.


Summary of Research Questions

Eight research questions were investigated in this case study of Sunshine State Community College. Presented below is a point-by-point summary of the research findings as they correspond to each research question.

Research Question 1. How is the professional preparation and educational background of a developmental education faculty member like that of an assessment professional, and how is it different?

Findings. Dimensions of professional preparation and experience included professional development, experience, and subject and level of educational degree. Although assessment professionals, who assume primarily facilitative roles in student learning outcomes assessment, tended to hold doctorates more often than faculty teaching developmental courses, their educational preparation and professional development interests shared many similarities. For example, the research director, whose experience included years of teaching business and computer applications courses, talked enthusiastically about his use of material from a workshop on authentic assessment within his computer applications class. Two of the assessment professionals interviewed had doctorates in education. Faculty members, likewise, brought a variety of experiences to teaching, including coaching and counseling. Faculty members were actively engaged in developing their expertise in measurement and in teaching and learning, as evidenced by their past, present, and future references to professional development in interviews. For example, Beth, a faculty member, believed that statistics courses she had previously taken had been helpful in planning and conducting learning outcomes assessment. She


was looking forward to an opportunity to further sharpen her measurement skills. Patterns of professional development among faculty and assessment professionals showed that both groups greatly valued expertise in measurement and in teaching and learning.

Research Question 2. How is the assessment role of a developmental education faculty member like that of an assessment professional, and how is it different?

Findings. Aspects of the assessment role were communication, participation, and instrumentation. While a small number of faculty members assumed a facilitative role similar to that of an assessment professional, staff members designated for that role by title (e.g., Assessment Coordinator) and reporting structure were limited in their expected analytical response to the results of outcomes. That is, while assessment professionals interpreted the results and recommended areas for changes to instructional strategies, only faculty and instructional administrators implemented and monitored those changes. For example, Fred, a veteran faculty member, asked assessment professionals for longitudinal outcomes data to determine how prep students did after they completed prep classes. The resulting classroom and follow-up measurement strategies were both developed and implemented by Fred and other communications faculty members.

Patterns of communication differed between faculty and assessment professionals. While faculty members referred to occasions when they received assessment communication, assessment professionals assumed a much more proactive role, both receiving and initiating assessment communication. Further, the participation of an assessment professional was qualitatively different from that of a faculty member. Faculty members focused the discussion upon the outcome they would like to achieve for the student. Assessment professionals, on the other hand, reframed the outcome in a way


that was credible to faculty and measurable in terms of student response. This process, however, was not always without a struggle. Terri and Geri, for example, talked about how faculty found it difficult to measure the outcomes they wanted for students. Agreeing upon an acceptable assessment usually meant talking through their differences with assessment professionals in point of view and vocabulary. Assessment professionals thus combined their expertise in research design with faculty curriculum expertise to customize a measure (or measures) for a particular outcome, although the measure was sometimes less than optimal from a faculty perspective.

Research Question 3. Which collaborative strategies serve to create common ground for faculty members and assessment professionals to work together on assessment plans?

Findings. Transformation in the structure of the College’s Quality Enhancement Plan through three phases (research, strategy formulation, and implementation) ensured the flow of information into and out of the main policy- and strategy-making body for developmental education. For example, in Phase I (Research), the structure was characterized by a loose confederation of research groups with diverse memberships. Represented on the steering committee were academic faculty and staff members from learning resources, arts and sciences, vocational programs, and academic support. Also represented were student services staff from enrollment, student support, testing, and counseling.

The multiple structures, both vertical (steering) and horizontal (coordinating), made widespread inclusion and participation possible. The QEP Committee was designed


with a porous boundary. This enabled official members of the Committee to bring other faculty and staff (such as assessment professionals) into dialog when appropriate. This diversity in membership and task meant inclusion of a more complete set of knowledge about how to deliver programs and services to students in developmental education.

The college also used structure to form a gateway by creating common ground for collaboration, thus becoming a "community of practice" (Wenger et al., 2002, p. 1). This occurred by structuring deliberate activities over a number of years (i.e., an annual learning theme) that made possible both thoughtful reflection about the college’s vision statement and the celebration of that vision. With faculty and staff members thus joining in college-wide learning activities, the habit of collaborating in smaller groups for learning assessment through structured interactions has the potential to become a more routine practice.

Continuous transformation of the organizational structure to accommodate the tasks to be accomplished, activities involving individual and group reflection such as the faculty colloquium, and occasions for the celebration of college successes such as the annual awards ceremony created the common ground for faculty and assessment professionals to work together on assessment plans toward the improvement of developmental education.

Research Question 4. Which strategies cause estrangement between faculty members and assessment professionals?

Findings. Themes that explained estrangement between faculty members and assessment professionals included academic structure, barriers to collaboration, and collaboration partners. While organizational structure served as a barrier to participation


and communication in some cases, it became a gateway in others. The formal structure of the academic leadership team, for example, served as a gateway for effective communication within academic affairs, but as a barrier to direct communication with assessment professionals for most faculty members. Geri, for example, indicated that while she had received reports on student achievement from institutional research, she couldn’t recall actually discussing the information with an assessment professional.

The academic structure had bridges to institutional research through the QEP and general education outcomes assessment processes. However, the academic-research communications barrier, according to Joe (the institutional research director), kept the research office from entering conversations with faculty about identifying learning outcomes in specific course outlines, conversations that would have assisted the development of an effective and complete outcomes assessment strategy for general education.

Research Question 5. What role, if any, does an assessment professional play in determining how the results of student learning outcomes assessment will be used for improvement?

Findings. A concept that developed in the analysis of this study was “analytical response.” The concept was defined as the role one plays in interpreting, evaluating, and using the results of outcomes assessment. A low analytical response would be to acknowledge the results of outcomes assessment; faculty members discussing the results of outcomes assessment at a college-wide colloquium would provide an example. A moderate level of analytical response would be to recommend changes to curriculum and instruction based upon the results. An example of this would be an instructional administrator making plans within the college’s Learning


Management Team to change instructional strategies. A high level of analytical response, however, would be to implement and monitor such changes. An example of this would be the decision of the QEP Committee to implement a learning community model as one strategy to improve student success.

Assessment professionals played a limited role in determining how the results of student learning outcomes assessment would be used for instructional improvement, remaining at a low to moderate analytical response to assessment results (acknowledging results and recommending changes). They instead helped faculty by interpreting results and by assisting faculty who were developing tools for outcomes assessment to reframe their research questions (outcomes) in a more measurable way.

Research Question 6. Have faculty members at the college become more like assessment professionals, and assessment professionals more like faculty members, in terms of their assessment roles since they began collaborating on student learning outcomes assessment?

Findings. Evidence of the capacity for self-reflection could be found in the copious responses to the thought-provoking questions posed to both faculty and assessment professionals in individual and focus group interviews. Participants really reached down to find answers to the questions posed by this researcher in these interviews. Often a more complete answer to a previous question would emerge in later conversation, after the participant had time to think about it. Mary, in particular, was an enthusiastic proponent of reflexivity, both in her practice of teaching and as a component of curriculum.


Faculty talked about the desire to pursue professional development toward increased expertise on measurement topics or emphasized how valuable their courses in measurement had been in developing assessment instruments. Similarly, assessment professionals like Carolyn talked enthusiastically about the knowledge about teaching and learning for developmental education that they had newly acquired during the research phase of the QEP’s development.

While all interview participants exhibited characteristics of reflexivity and a desire to learn about both teaching and learning and measurement, there was little evidence that the roles of faculty members and assessment professionals were merging, because of their separate lines of authority. The decision-making and implementation role of academic affairs in outcomes assessment was to refine criteria for determining a student’s level of proficiency in a faculty-defined competency. Assessment professionals provided a valuable (but limited) complement to that role by helping faculty to frame research questions (outcomes) in a measurable way. Without good instruments for authentic assessment, the outcomes assessment process would be limited to measuring proxies for learning, like grades. Assessment professionals facilitated and coordinated the logistics of the assessment process and provided an interpretation of the results (Learning Outcomes Assessment Task Force Report).

Research Question 7. If so, how have they become more alike?

Findings. In acquiring teaching and learning and measurement expertise, faculty and assessment professionals have been on similar professional development paths. Faculty members and assessment professionals mentioned similar types of professional development and training, both on- and off-campus. For example, faculty cited John


Roueche, Patricia Cross, Hunter Boylan, Trudy Banta, Peter Elbow (writing as a process of discovery), Jack Mezirow (transformative learning), Vincent Tinto, and Skip Downing as assessment experts whose advice they had sought. Likewise, assessment professionals cited Bob McCabe (2003) and Bill Blank, a USF faculty member who had taught an on-campus professional development activity on contextual learning for college faculty and staff members. Joe, an assessment professional, talked about the lessons from that session that he put to use in a class he taught part-time. Thus, faculty and assessment professionals alike supplemented their core knowledge in student learning outcomes assessment with both measurement and teaching and learning expertise.

However, while knowledge of teaching and learning was essential to applying measurement expertise to learning problems, one of the limitations upon the analytical response of the Assessment Coordinator came from the dual roles demanded by Jeff’s job description. His role as legislative liaison and other duties sometimes necessitated Jeff’s presence elsewhere, as indicated in conversation with Geri, a faculty member who recalled that Jeff had been temporarily reassigned and unable to participate in QEP implementation meetings. Both his role as legislative liaison and his role as an interim campus director placed Jeff squarely in the role of administrator, sometimes in conflict with his role as a measurement consultant and coordinator of QEP assessments. Thus, while assessment professionals shared the same professional development interests as faculty, the differences in their assigned roles within the organizational structure caused them, at times, to place emphasis on different tasks.

Research Question 8. From the perspective of respondents, which assessment approaches have shown the most promising results?


Findings. The college’s crowning achievement was developing a Quality Enhancement Plan for the improvement of developmental education based upon best practices in the field. The details of these processes show the way toward improving collaboration on assessment planning, not only for the improvement of developmental education, but for all outcomes assessment efforts at Sunshine State Community College. Faculty members and assessment professionals alike agreed that the QEP was an exemplary plan for college preparatory success. The process began with a complete literature review and incorporated many strategies that had been successful at other institutions.

One example of a successful collaboration was the development of the college’s learning communities model. Dina, a faculty member, led a small group of about five individuals in planning the model for the QEP. Because they needed more information, they brought in a consultant, a successful practitioner of the learning communities model from a college in North Carolina. They explored the various models used at other institutions that allowed students to take pairs of related courses within a cohort, and emerged from that meeting with their own learning community model. Dina also consulted with institutional research to select the population for the initial learning community cohort group. In this case, a practitioner was able to lend instructional expertise and institutional research was able to lend its data analysis know-how to make the effort work.

One of the college's success strategies was to focus upon the quality of the faculty who teach developmental courses. According to Mary, who straddled both prep and non-prep courses, faculty who taught a mix of developmental and non-developmental courses


had a better idea of what it took for that student to succeed in credit-level courses. This group’s cohesiveness meant that when Fred asked them to make changes to instruction suggested by the results of longitudinal tracking, faculty members were willing to put in the extra time and effort.

While this portion of the Chapter Four Summary focused upon the findings of the eight main research questions in this study, the next portion summarizes findings from three interview questions that bear upon topical issues of the case.

Summary of Topical Issues

Topical issues were secondary issues that provided background information to help in understanding the interplay of instrumental issues (research questions). The three interview questions cited below provided evidence in support of these issues.

Interview Question 2 (Goal). Are there goals for developmental education other than the cognitive skills dictated by Florida state-mandated testing?

Findings. Developmental education goals, in addition to cut scores on state-mandated exams, included one of the college’s general education outcomes, “self-direction.” Other goals included student affective development (e.g., motivation) and success at the next level (e.g., course, program). Self-direction was a general education competency contributed by students and valued by faculty. However, the college had made limited progress in developing a reliable direct measure for this outcome.

Interview Question 9 (Engagement of Temporary Faculty). Do part-time/temporary faculty participate in outcomes assessment?


Findings. While part-timers were invited to participate in the QEP, their involvement was limited because so many had full-time day jobs. This prevented many part-timers from communicating directly with full-time faculty and from fully participating in the College’s social life and governance processes. It was for this reason that some full-time faculty preferred that the College hire more full-time instructors. However, the part-time faculty members interviewed for this study indicated an eagerness to participate in curriculum development activities if invited to do so.

Interview Question 10 (Use of Results). How might developmental educators improve their response to this accreditation requirement?

Findings. Both part-time and full-time faculty indicated that classroom research should be based upon proven instructional strategies for specific populations of students (e.g., international students), that the College should develop discipline-specific workshops on the application of assessment results to classroom instruction, and that measurement habits should be developed within each discipline’s respective community of practice.

While this chapter presented the findings of this case study, both research and topical, Chapter Five discusses those findings in the context of the Literature Review in Chapter Two and presents the conclusions and implications that may be drawn from those findings.


Chapter Five

Major Findings, Conclusions, and Implications for Theory, Practice, and Research

This chapter discusses conclusions that may be drawn from the findings described in Chapter Four, limitations of those findings, and their implications in terms of theory, practice, and research.

To recap information presented in previous chapters, Chapter One, Introduction, highlighted some of the reasons why helping students successfully complete a college education has recently become an urgent mission. Among the stumbling blocks many students must overcome in this journey are college preparatory reading and writing. Colleges are therefore using student learning outcomes assessment and learning evidence teams to improve students’ chances for success. As the use of assessment becomes more prevalent, institutional research and teaching functions are moving closer to one another and learning more about the scholarship of teaching. The “measurement” intersection between their professions is where the data interpretation process takes place. Beyond interpretation, however, this sense-making process is a lynchpin in the use of data to ensure the appropriate application of college resources to solve persistent problems in student learning.

Chapter Two, Literature Review, described the corner of the stage upon which the actors in this case, faculty and assessment professionals, conduct student learning


outcomes assessment while taking their cues from governing boards and accreditation agencies. The chapter discussed governmental and accreditation pressures on colleges to adopt outcomes assessment processes, faculty and institutional research roles, the scholarship of assessment, organizational change theory, examples of action research with the potential to improve student learning, measurement opportunities for colleges, best practices in developmental education, the Florida policy environment in which Sunshine State Community College operates, and challenges currently faced by the College in improving student learning in developmental courses.

Chapter Three, Methods, described the manner in which this researcher used qualitative methods within a case study model to examine the institutional context, processes, and change strategies employed by faculty and assessment professionals in assessing student learning outcomes in developmental reading, writing, and study skills. Multiple sources of data, collected through an ethical process, were used to examine the context in which these professionals worked to improve student success.

Major Findings

This section provides a summary of major findings for the research questions, the instrumental issues in the case, and the topical questions, secondary issues that often interplay with research issues.


Summary of Research Questions

Eight research questions were investigated in this case study of Sunshine State Community College. Presented below is a point-by-point summary of the research findings as they correspond to each research question.

Research Question 1. How is the professional preparation and educational background of a developmental education faculty member like that of an assessment professional, and how is it different?

Findings. Dimensions of professional preparation and experience included professional development, experience, and subject and level of educational degree. Although assessment professionals, who assume primarily facilitative roles in student learning outcomes assessment, tended to hold doctorates more often than faculty teaching developmental courses, their educational preparation and professional development interests shared many similarities. For example, the research director, whose experience included years of teaching business and computer applications courses, talked enthusiastically about his use of material from a workshop on authentic assessment within his computer applications class. Two of the assessment professionals interviewed had doctorates in education. Faculty members, likewise, brought a variety of experiences to teaching, including coaching and counseling. Faculty members were actively engaged in developing their expertise in measurement and in teaching and learning, as evidenced by their past, present, and future references to professional development in interviews. For example, Beth, a faculty member, believed that statistics courses she had previously taken had been helpful in planning and conducting learning outcomes assessment. She


was looking forward to an opportunity to further sharpen her measurement skills. Patterns of professional development among faculty and assessment professionals showed that both groups greatly valued expertise in measurement and in teaching and learning.

Research Question 2. How is the assessment role of a developmental education faculty member like that of an assessment professional, and how is it different?

Findings. Aspects of the assessment role were communication, participation, and instrumentation. While a small number of faculty members assumed a facilitative role similar to that of an assessment professional, staff members designated for that role by title (e.g., Assessment Coordinator) and reporting structure were limited in their expected analytical response to the results of outcomes. That is, while assessment professionals interpreted the results and recommended areas for changes to instructional strategies, only faculty and instructional administrators implemented and monitored those changes. For example, Fred, a veteran faculty member, asked assessment professionals for longitudinal outcomes data to determine how prep students did after they completed prep classes. The resulting classroom and follow-up measurement strategies were both developed and implemented by Fred and other communications faculty members.

Patterns of communication differed between faculty and assessment professionals. While faculty members referred to occasions when they received assessment communication, assessment professionals assumed a much more proactive role, both receiving and initiating assessment communication. Further, the participation of an assessment professional was qualitatively different from that of a faculty member. Faculty members focused the discussion upon the outcome they would like to achieve for the student. Assessment professionals, on the other hand, reframed the outcome in a way


that was credible to faculty and measurable in terms of student response. This process, however, was not always without a struggle. Terri and Geri, for example, talked about how faculty found it difficult to measure the outcomes they wanted for students. Agreeing upon an acceptable assessment usually meant talking through their differences with assessment professionals in point of view and vocabulary. Assessment professionals thus combined their expertise in research design with faculty curriculum expertise to customize a measure (or measures) for a particular outcome, although the measure was sometimes less than optimal from a faculty perspective.

Research Question 3. Which collaborative strategies serve to create common ground for faculty members and assessment professionals to work together on assessment plans?

Findings. Transformation in the structure of the College’s Quality Enhancement Plan through three phases (research, strategy formulation, and implementation) ensured the flow of information into and out of the main policy- and strategy-making body for developmental education. For example, in Phase I (Research), the structure was characterized by a loose confederation of research groups with diverse memberships. Represented on the steering committee were academic faculty and staff members from learning resources, arts and sciences, vocational programs, and academic support. Also represented were student services staff from enrollment, student support, testing, and counseling.

The multiple structures, both vertical (steering) and horizontal (coordinating), made widespread inclusion and participation possible. The QEP Committee was designed


with a porous boundary. This enabled official members of the Committee to bring other faculty and staff (such as assessment professionals) into dialog when appropriate. This diversity in membership and task meant inclusion of a more complete set of knowledge about how to deliver programs and services to students in developmental education.

The College also used structure to form a gateway by creating common ground for collaboration, thus becoming a "community of practice" (Wenger et al., 2002, p. 1). This occurred by structuring deliberate activities over a number of years (i.e., an annual learning theme) that made possible both thoughtful reflection about the college’s vision statement and the celebration of that vision. With faculty and staff members thus joining in college-wide learning activities, the habit of collaborating in smaller groups for learning assessment through structured interactions has the potential to become a more routine practice.

Continuous transformation of the organizational structure to accommodate the tasks to be accomplished, activities involving individual and group reflection such as the faculty colloquium, and occasions for the celebration of college successes such as the annual awards ceremony created the common ground for faculty and assessment professionals to work together on assessment plans toward the improvement of developmental education.

Research Question 4. Which strategies cause estrangement between faculty members and assessment professionals?

Findings. Themes that explained estrangement between faculty members and assessment professionals included academic structure, barriers to collaboration, and collaboration partners. While organizational structure served as a barrier to participation


and communication in some cases, it became a gateway in others. The formal structure of the academic leadership team, for example, served as a gateway for effective communication within academic affairs, but as a barrier to direct communication with assessment professionals for most faculty members. Geri, for example, indicated that while she had received reports on student achievement from institutional research, she couldn’t recall actually discussing the information with an assessment professional.

The academic structure had bridges to institutional research through the QEP and general education outcomes assessment processes. However, the academic-research communications barrier, according to Joe (the institutional research director), kept the research office from entering conversations with faculty about identifying learning outcomes in specific course outlines, conversations that would have assisted the development of an effective and complete outcomes assessment strategy for general education.

Research Question 5. What role, if any, does an assessment professional play in determining how the results of student learning outcomes assessment will be used for improvement?

Findings. A concept that developed in the analysis of this study was “analytical response.” The concept was defined as the role one plays in interpreting, evaluating, and using the results of outcomes assessment. A low analytical response would be to acknowledge the results of outcomes assessment; faculty members discussing the results of outcomes assessment at a college-wide colloquium would provide an example. A moderate level of analytical response would be to recommend changes to curriculum and instruction based upon the results. An example of this would be an instructional administrator making plans within the college’s Learning


Management Team to change instructional strategies. A high level of analytical response, however, would be to implement and monitor such changes. An example of this would be the decision of the QEP Committee to implement a learning community model as one strategy to improve student success.

Assessment professionals played a limited role in determining how the results of student learning outcomes assessment would be used for instructional improvement, remaining at a low to moderate analytical response to assessment results (acknowledging results and recommending changes). They instead helped faculty by interpreting results and by assisting faculty who were developing tools for outcomes assessment to reframe their research questions (outcomes) in a more measurable way.

Research Question 6. Have faculty members at the college become more like assessment professionals, and assessment professionals more like faculty members, in terms of their assessment roles since they began collaborating on student learning outcomes assessment?

Findings. Evidence of the capacity for self-reflection could be found in the copious responses to the thought-provoking questions posed to both faculty and assessment professionals in individual and focus group interviews. Participants really reached down to find answers to the questions posed by this researcher in these interviews. Often a more complete answer to a previous question would emerge in later conversation, after the participant had time to think about it. Mary, in particular, was an enthusiastic proponent of reflexivity, both in her practice of teaching and as a component of curriculum.


Faculty talked about the desire to pursue professional development toward increased expertise on measurement topics or emphasized how valuable their courses in measurement had been in developing assessment instruments. Similarly, assessment professionals like Carolyn talked enthusiastically about the knowledge about teaching and learning for developmental education that they had newly acquired during the research phase of the QEP’s development.

While all interview participants exhibited characteristics of reflexivity and a desire to learn about both teaching and learning and measurement, there was little evidence that the roles of faculty members and assessment professionals were merging, because of their separate lines of authority. The decision-making and implementation role of academic affairs in outcomes assessment was to refine criteria for determining a student’s level of proficiency in a faculty-defined competency. Assessment professionals provided a valuable (but limited) complement to that role by helping faculty to frame research questions (outcomes) in a measurable way. Without good instruments for authentic assessment, the outcomes assessment process would be limited to measuring proxies for learning, like grades. Assessment professionals facilitated and coordinated the logistics of the assessment process and provided an interpretation of the results (Learning Outcomes Assessment Task Force Report).

Research Question 7. If so, how have they become more alike?

Findings. In acquiring teaching and learning and measurement expertise, faculty and assessment professionals have been on similar professional development paths. Faculty members and assessment professionals mentioned similar types of professional development and training, both on- and off-campus. For example, faculty cited John


Roueche, Patricia Cross, Hunter Boylan, Trudy Banta, Peter Elbow (writing as a process of discovery), Jack Mezirow (transformative learning), Vincent Tinto, and Skip Downing as assessment experts whose advice they had sought. Likewise, assessment professionals cited Bob McCabe (2003) and Bill Blank, a USF faculty member who had taught an on-campus professional development activity on contextual learning for college faculty and staff members. Joe, an assessment professional, talked about the lessons from that session that he put to use in a class he taught part-time. Thus, faculty and assessment professionals alike supplemented their core knowledge in student learning outcomes assessment with both measurement and teaching and learning expertise.

However, while knowledge of teaching and learning was essential to applying measurement expertise to learning problems, one of the limitations upon the analytical response of the Assessment Coordinator came from the dual roles demanded by Jeff’s job description. His role as legislative liaison and other duties sometimes necessitated Jeff’s presence elsewhere, as indicated in conversation with Geri, a faculty member who recalled that Jeff had been temporarily reassigned and unable to participate in QEP implementation meetings. Both his role as legislative liaison and his role as an interim campus director placed Jeff squarely in the role of administrator, sometimes in conflict with his role as a measurement consultant and coordinator of QEP assessments. Thus, while assessment professionals shared the same professional development interests as faculty, the differences in their assigned roles within the organizational structure caused them, at times, to place emphasis on different tasks.

Research Question 8. From the perspective of respondents, which assessment approaches have shown the most promising results?


Findings. The college’s crowning achievement was developing a Quality Enhancement Plan for the improvement of developmental education based upon best practices in the field. The details of these processes show the way toward improving collaboration on assessment planning, not only for the improvement of developmental education, but for all outcomes assessment efforts at Sunshine State Community College. Faculty members and assessment professionals alike agreed that the QEP was an exemplary plan for college preparatory success. The process began with a complete literature review and incorporated many strategies that had been successful at other institutions.

One example of a successful collaboration was the development of the college’s learning communities model. Dina, a faculty member, led a small group of about five individuals in planning the model for the QEP. Because they needed more information, they brought in a consultant, a successful practitioner of the learning communities model from a college in North Carolina. They explored the various models used at other institutions that allowed students to take pairs of related courses within a cohort, and emerged from that meeting with their own learning community model. Dina also consulted with institutional research to select the population for the initial learning community cohort group. In this case, a practitioner was able to lend instructional expertise and institutional research was able to lend its data analysis know-how to make the effort work.

One of the college's success strategies was to focus upon the quality of the faculty who teach developmental courses. According to Mary, who straddled both prep and non-prep courses, faculty who taught a mix of developmental and non-developmental courses


had a better idea of what it took for that student to succeed in credit-level courses. This group’s cohesiveness meant that when Fred asked them to make changes to instruction suggested by the results of longitudinal tracking, faculty members were willing to put in the extra time and effort.

While this portion of the Major Findings focused upon the findings of the eight main research questions in this study, the next portion summarizes findings from three interview questions that bear upon topical issues of the case.

Summary of Topical Issues

Topical issues were secondary issues that provided background information to help in understanding the interplay of instrumental issues (research questions). The three interview questions cited below provided evidence in support of these issues.

Interview Question 2 (Goal). Are there goals for developmental education other than the cognitive skills dictated by Florida state-mandated testing?

Findings. Developmental education goals, in addition to cut scores on state-mandated exams, included one of the college’s general education outcomes, “self-direction.” Other goals included student affective development (e.g., motivation) and success at the next level (e.g., course, program). Self-direction was a general education competency contributed by students and valued by faculty. However, the college had made limited progress in developing a reliable direct measure for this outcome.

Interview Question 9 (Engagement of Temporary Faculty). Do part-time/temporary faculty participate in outcomes assessment?


Findings. While part-timers were invited to participate in the QEP, their involvement was limited because so many had full-time day jobs. This prevented many part-timers from communicating directly with full-time faculty and from fully participating in the college’s social life and governance processes. It was for this reason that some full-time faculty preferred that the College hire more full-time instructors. However, the part-time faculty members interviewed for this study indicated an eagerness to participate in curriculum development activities if invited to do so.

Interview Question 10 (Use of Results). How might developmental educators improve their response to this accreditation requirement?

Findings. Both part-time and full-time faculty indicated that classroom research should be based upon proven instructional strategies for specific populations of students (e.g., international students), that the College should develop discipline-specific workshops on the application of assessment results to classroom instruction, and that measurement habits should be developed within each discipline’s respective community of practice.

While the preceding sections recapped the findings of this case study, both research and topical, the remainder of this chapter discusses those findings in the context of the Literature Review in Chapter Two and presents the conclusions and implications that may be drawn from them.

Conclusions

Conclusion for Research Questions. Establishing effective assessment practices within institutions is a complex task. The effectiveness of assessment has many dependencies, and factors outside of the classroom often impact student learning. To

PAGE 203

To improve student learning, colleges are rethinking how their various functions work together to improve student learning outcomes. Thus, while colleges may seek to use measurement professionals more effectively to aid faculty in improving student learning outcomes, aspects of a college's planning, professional development, information dissemination, and organizational structure may hinder or help that process. While faculty and assessment professionals seem to have similar professional preparation, experience, and interests, the roles they are expected to play (position descriptions) and the structures within which they must operate (organization chart) should be mutually compatible so that the partnership can be as effective as possible. Continuous transformation of the organizational structure during the development of the Quality Enhancement Plan, college-wide activities that enabled individual and collective reflection upon the College's vision, and occasions for celebration brought faculty and assessment professionals together. Filtering mechanisms such as academic leadership teams tended to keep them apart. For example, Beth, a faculty member, commented that communications with faculty on assessment activities could be improved:

Where the information sharing needs to be improved and must occur is with the general faculty for both initiatives (e.g., general education, QEP). If we're going to get college-wide faculty to buy in and actually participate and support these initiatives, they need to know about it. Currently, there's not very much communication with faculty at large. (Interview with Beth)

The determining factor in what the role of an assessment professional should be is the analytical response expected by College administrators. At Sunshine State Community College, assessment professionals assumed a moderate role in responding analytically to the results of outcomes assessment. In their moderate role, they acknowledged or recommended changes to curriculum or assessment processes based upon their interpretation of results, but left decisions about implementation and monitoring to faculty members. Their main function in assessment was to help faculty members frame research questions (student learning outcomes) in a measurable way. Jeff commented on his role in supporting assessment:

I was brought in as a facilitator, so I picked up last year's activities up through the pilot, right after they completed their matrix and made a decision to do an assessment process that was localized in nature. And also what they call a cross-sectional view. So they'd already made all their decisions on how they were going to do assessment, so they were looking for someone to carry out that process. What we're finding now is that these localized instruments do not necessarily apply best to all circumstances. Because we have identified assessment week, where we apply these instruments one week of the year, we find that the speech component of communications doesn't necessarily line up with one week of the year, so we're looking at doing those in more and more of an embedded fashion. Interpersonal skills is another one. And computer literacy is one that did not work. The local instrument did not work for us this past pilot, so we're actually going to look at some nationally normed instruments to assess that learning outcome. And so, even though I wasn't on the front end of things, you find that the administration and faculty are flexible. You'll get where you want to go with assessment in the long run, and it just takes a little longer that way.
Thus, Jeff performed the role of measurement facilitator, combining research expertise with faculty-defined outcomes to negotiate suitable measures for the process.

Conclusion for Topical Issues. Interview questions 2, 9, and 10 probed relevant topical issues, including goals for developmental education, part-time faculty involvement in assessment activities, and the use of assessment results. Findings from this study showed that faculty members focused upon more than just state-mandated exit exams as goals for developmental courses. Faculty interviewed named the general education outcome "self-direction," affective development (i.e., motivation), and success at the next level (i.e., course) as outcomes they wished to achieve for students. These outcomes were measured by the Community College Survey of Student Engagement, Florida Accountability Reports, and the College's general education assessment process. The College's Director of Institutional Research had responsibility for carrying out these assessments and disseminating results to College administrators. However, there was little evidence that this information was filtering down through the hierarchy to faculty members. Mary, for example, said that faculty rarely talked with one another about assessment.

While full-time faculty and instructional administrators have created venues for the involvement of part-time faculty, such as retreats and colloquiums, the part-time faculty interviewed in this case study indicated that they were not involved in any collaboration on student learning outcomes assessment outside of their classrooms. To achieve genuine "traction" from the College's learning outcomes processes, both the young (often part-time) and the veteran (often full-time) faculty members participating in student learning outcomes assessment must find a common meeting ground. If that common ground also includes the often-noted scarce resource "time" and the participation of assessment professionals, the College will have gathered the ingredients it needs to begin making genuine progress in improving student learning outcomes.

Sunshine State Community College faculty and staff may be stinging from the SACS recommendations on "use of results" despite their long history in the development of a general education assessment process. Although faculty began the process of defining general education outcomes in 1998, it was not until Spring 2005 that the Learning Outcomes Committee conducted its first pilot assessment of general education outcomes. In their recent compliance audit and site visit for the reaffirmation of accreditation, College academic and student services programs had not been able to demonstrate the use of results for improvement. Sunshine State Community College thus received two recommendations for improvement on the use of results (Comprehensive Standards 3.3.1 and 3.4.1) following their site visit (key informant, personal communication, April 26, 2006).

However, one suggestion with great potential for improving that record was made by Geri, a faculty member who felt that the dissemination of research on outcomes to faculty should be accompanied by discipline- and course-specific information on the outcome's application to curriculum. A study skills faculty member, Geri expressed the opinion that although outcomes were frequently discussed in the large faculty colloquium group, most faculty members didn't think about assessments, and the way they should impact teaching, in global terms:

I think that if we really understood their connections with what we were doing in our classes, and maybe use those as workshop and training tools. A lot of what we do with outcomes, we talk about it within the big huge colloquium group. I understand there are global things that we all could address, but I think most of the time faculty don't think like that. (Interview with Geri)

What would provide more improvement through curriculum development, Geri said, would be for someone to define the connections between assessment results and disciplinary teaching, to develop training tools, and then to hold a required workshop for faculty in the discipline on how to incorporate these new teaching strategies within their courses. The involvement of assessment professionals in this process would be an invaluable link to the infusion of reliable and valid measurement strategies into classroom research.

Conclusion for Timeline. A chronological analysis contributed to conclusions of the study. The College's long history (since 1998) of developing general education outcomes and striving to improve the college preparatory program through longitudinal tracking of student success incubated a powerful faculty learning community and an alliance with assessment professionals. This collective community of practice, when provided the right structure and leadership, enabled the College to create a Quality Enhancement Plan that faculty and staff members could be proud of.
Limitations

Although the co-curricular contributions of other community college faculty and staff members (as in learning resources) and academic administrators can greatly contribute to student learning, findings of this study have focused narrowly on the interactions between faculty members teaching developmental education courses and assessment professionals such as institutional researchers. A study of the phenomenon of collaboration among developmental educators and assessment professionals at Sunshine State Community College, this research offers focused insights into a small but important segment of a much larger set of strategies needed to conduct effective assessment within a community college. For example, resources such as time, leadership, and money are also important organizational foundations for successful assessment practice.

Another limitation of these findings is that the boundaries of this case and the authenticity of experience to each individual reader may or may not permit "naturalistic generalizations" (Stake, 1995, p. 86) concerning the applicability of aspects of the case to the reader's own college. Each reader must eventually decide on his or her own which portions of a case apply to another setting and which do not. For example, while this researcher was able to elicit broad involvement from relevant full-time faculty in this research, only two of the eight faculty members interviewed were classified as part-time/temporary. The full-time faculty members interviewed were valuable informants of the developmental education assessment process, but represented the opinions of less than half of the faculty teaching developmental classes at the College in 2003 (Windham, 2005, Number and percentage).
A unique aspect of this case study, although not necessarily a limitation, is the disproportionately large number of academics (9) interviewed compared to the small number of assessment professionals (3). This is due to the very small number of institutional research (IR) personnel typically found in colleges, particularly in community colleges. For example, Morest (2005) found IR functions at these colleges to be thinly supported: only 27% of colleges had IR departments of 1.5 full-time equivalent employees or more, 40% had a single IR position at the college, and 19% of colleges split IR with other duties (p. 5). Eighty-five out of a sample of 200 colleges responded to this electronic survey, and researchers personally interviewed staff from 30 colleges in 15 states (p. 2) to obtain these data. Thus, although the sampling of academics and assessment professionals within this case study is unequal, it is proportionate to their occurrence within a typical community college.

Implications for Theory

Theoretical implications of these findings include a major role for college culture in strengthening the intersection between faculty members and assessment professionals. However, college structures for bringing individuals together on particular tasks also played a key role. Three theories of organizational workings foreshadowed themes in this study. First, the community of practice theory of social learning (Wenger, 1998) explained how professional development and collaboration played key roles in organizational learning, a primary goal of assessment. Learning within a community brought about mutual understanding. Second, measurement issues were the structural conditions that served as gateways or barriers to effective collaboration between assessment professionals and faculty members (Peterson, 1999; Banta, 2002; Lopez, 2003).
Third and finally, sense-making (Weick, 1995) described the process through which the results of assessment and self-reflection could be transformed into organizational goals. Mutual understanding, structure, and process together formed the conditions in which the roles of these higher education practitioners would intersect.

Mutual Understanding. Professional development is a driving need within an organization undergoing rapid change (Bolman & Deal, 2003, p. 372), either because of internal feedback from learning assessment results or external accountability demands. Learning new assessment concepts and methods toward fulfilling accreditation requirements, for example, eases the tensions caused by the upheaval of faculty roles within the college during periods of change. The College was highly proactive in providing professional development opportunities to faculty and staff members. When a small group of faculty first began to study the process of learning outcomes assessment, one of their first activities was to distribute information about assessment from experts on teaching and learning, like Angelo and Cross (1993). Virtually all interview participants had something to say about their recent development experience, whether professional or formal educational.

Faculty members and others who take part in learning communities (Milton, 2004) document their systematic inquiries into student learning in courses and programs for the benefit of their institutions and their peers. Meaning, practice, community, and identity, the components of Wenger's (1998) social learning theory, are exemplified by faculty learning communities. First, meaning can be either individual or collective, but the way people experience life and the world around them is continually changing. This is particularly true for colleges transforming under external pressures. Second, practice "is a way of talking about the shared historical and social resources, frameworks, and perspectives that can sustain mutual engagement in action" (p. 5). It is through practicing the art of interpretation among multiple stakeholders that a college is able to connect its needs with resources that can meet those needs. Third, community lends value and recognition to individual and collective pursuits. By recognizing faculty and staff members who are assessment "success stories," each member of the institution learns to place value on the effort. Fourth, identity provides a framework for considering individual growth in the context of one's community. Faculty members who have taught for many years no longer need to feel that they have hit a plateau and can advance no further. Assessment for internal improvement provides mature faculty a means of continuing professional growth and improving stature. All of these experiences are available to faculty who actively share knowledge about assessment within local communities of practice. It is within this culture, with resource support from administrators and technical support from assessment professionals, that improving student learning outcomes through assessment activities becomes possible (Banta, 2004).

While Sunshine State Community College faculty had a semi-annual tradition, the colloquium, for uniting as a faculty learning community, the College had several structures and processes for uniting faculty with other constituents as well. For example, one way in which the College created common ground for collaboration was by structuring deliberate activities over a number of years that made possible both thoughtful reflection about the college's vision statement and the celebration of that vision. A college-wide activity addressed one particular "theme" each year, for which there were common readings.
The idea was for faculty to integrate that theme into as many classes as possible. A new theme was introduced to the college each Fall. Students were thus able to participate in activities that increased their engagement with faculty members while actively interpreting some aspect of the college's vision. Another annual practice, the college awards ceremony, helped to ensure that extraordinary efforts of faculty and staff were recognized by College administrators. These practices within a community of learners thus provided meaning to the College vision and helped to enhance the identity of students, faculty, and staff alike through recognition and reward programs.

Structural Conditions. Bolman and Deal (2003) use the concept of reframing to understand the complex nature of organizations by looking at them from multiple perspectives. One such perspective comes from looking at colleges as if their organization charts defined them. Assumptions of the structural frame include management by objectives, division of labor, coordination and control, rational decision-making, form dependent upon task and technology, and structural change as a remedy for performance deficiency (Bolman & Deal, 2003). Restructuring may occur to accommodate changes in the environment, technology, growth, or leadership. In particular, changes to college organizational hierarchies often occur in response to accreditation requirements. Such was the case at Sunshine State Community College.

During the development of the Quality Enhancement Plan (QEP), there was a continuous transformation of the organizational structure to accommodate tasks to be accomplished in three separate phases. The multiple structures, both vertical (steering) and horizontal (coordinating), made widespread inclusion and participation possible. Also, the QEP structure maintained a porous boundary. This enabled official members of the Committee to bring other faculty and staff into dialog when appropriate. Several interviewees who participated in the QEP development process indicated that the process followed was indeed the way to complete a successful outcomes assessment plan.

However, while structure can prove a gateway to some, it can be a barrier to other individuals whose voices need to be heard in conversations about learning outcomes assessment. The elaborate structure of communications channels within academic affairs facilitated internal decision-making, but excluded institutional researchers from many dialogs on teaching and learning issues. Academic administrators and faculty viewed the horizontal and vertical lines of communication within the academic structure as a formula for engaging the academic community in dialog about student success. Academic administrators shared data on student success in a number of ways. First, administrators shared important findings with faculty through a colloquium each semester. Second, administrators shared data with the small group of direct reports in the division; in these meetings, information was used as a springboard for discussion about what could be done to improve areas that were troublesome. Third, administrators discussed data in weekly meetings of the Learning Management Team, composed of all managers who reported to the academic VP. Fourth and finally, administrators discussed data within the Learning Response Team, which added program facilitators (department chairs) to the circle. The Learning Management Team decided what information to share and which decisions needed to be made within these quarterly meetings. Thus, academic administrators filtered much of the information seen by the faculty, sometimes impeding the flow of dialog between faculty members and institutional researchers.
Process. Because new knowledge changes the old order of cultural foundations and political connections, colleges need to continually renew themselves by creating new culture and connections. In that case, people who work together engage in sense-making (Weick, 1979), a four-stage process. In organizing for this process of socially constructing meaning, people in an institution first experience something new in their environment (ecological change). In the second stage (enactment), they realize that the new phenomenon requires their attention. In the third stage, these occurrences take on a name (selection). This enables the college in the fourth stage to retain a common vocabulary and mutual understanding of what the occurrence means (retention). These constructed meanings filter people's focus so that they see only these defined patterns within their environment, thus reinforcing socially constructed meanings.

This process can be seen by reflecting upon the timeline of events leading up to the college-wide endorsement of College Preparatory Education as the focus of the College's Quality Enhancement Plan. In 2002, the Director of Institutional Research started to work with the program facilitator for communications (and his group) in examining data from a consultant-written longitudinal tracking system. The enrollment, grade, and degree-completion failures of so many students took faculty by surprise. To get students through prep and into credit-level courses successfully, they designed and implemented a number of instructional strategies in 2003, but found that the limited time they had to devote to maintaining these interventions was not enough to be helpful to students.

The Institutional Research office kicked off 2004 with a publication that compared the college's performance to Florida-wide student performance on accountability measures and grades in specific courses (What We Know About Student Learning). Several of these measures were focused upon college prep success. Later that year, faculty identified college preparatory education as the focus of the college's Quality Enhancement Plan (QEP), and other constituent groups affirmed this choice.

Had the Research Director not worked with faculty on learning outcomes assessment (and had the faculty not been frustrated in their efforts to help these students), the college preparatory education focus might never have been conceived. For example, it is entirely possible that environmental change (the failure of so many students) would not have been enacted (realized by faculty) through review of the longitudinal tracking reports, nor selected (given a name) by communications faculty in conversations with other faculty. With the publication of What We Know About Student Learning, faculty college-wide were able to retain (grasp the significance of the problem for student success) sufficiently to want to put college resources into helping students get through college prep.

Implications for Practice

The ideal program in developmental education should help all students, regardless of their level of competency when they enter college (Boylan, 2002). According to the National Association for Developmental Education, it helps "underprepared students prepare, prepared students advance, and advanced students excel" (p. 3). Important contributions that institutional researchers may make to developmental education programs are in the areas of strategic planning and program evaluation. The impact of community college collaboration between faculty and assessment professionals on these "best practice" areas and implications for future practice are discussed in the following paragraphs.
Strategic Planning. According to Boylan, "developmental programs with written statements of mission, goals, and objectives had higher student pass rates in developmental courses than programs without such statements" (p. 19). Further, students in such programs tended to pass state-mandated tests and continue their enrollment more often. The College vision statement emphasizing shared values, openness, and inclusion provided the backdrop for college planning efforts. The selection of college preparatory education as the focus of the Quality Enhancement Plan sent a message to students, faculty, and staff that the words in the vision statement rang true. The job of the Assessment Coordinator in the Office of Institutional Research (IR), therefore, was to ensure that the assessment plan stayed on track during the next five years.

However, the foundation of the QEP's definition of developmental education outcomes was the set of six learning outcomes for all graduates (QEP Document, "Definition of Student Learning," p. 27) that faculty debated, revised, and finally approved. The process for assessing these outcomes remained a work in progress, as Jeff (the Assessment Coordinator) admitted when asked about the relationship between QEP and general education assessments:

I think that the area of instruction needs to be looking at how they want to proliferate the learning outcomes process to the areas, whether that be vocational education, college prep, do they want to do a gen ed process in other areas? Well first, we've got to get our gen ed assessment completed and whole. We only did three of the six this year, so we really need to go through and look at the other three. This is new territory. We're dealing with locally developed instruments, and we've found that in the past couple of years we've been very focused on the instrument development and refinement. (Interview with Jeff)

Faculty indicated that the general education outcome assessment for the "self-direction" competency had been problematic during the 2005 pilot. Beth and Geri implied in interviews that self-direction, defined as "those activities that reflect the college's vision to encourage awareness through skills and behavior that lead to increased individual and community responsibility" (Learning Outcomes Assessment Task Force Report, p. 7), was an important outcome of developmental courses, especially study skills courses. However, self-direction was one of the outcomes not assessed in 2006. Geri, who was coordinating the effort to achieve consensus on important outcomes among faculty teaching study skills courses for the QEP implementation, said that getting the group to establish a common course syllabus was a challenging task.

These twin problems may have had similar roots: a lack of a common frame of reference based in scholarly literature or effective practices. The solution to this problem would be the same one used to determine best practices in developmental education in Phase I of QEP development: set aside time for collaborative research. An example of how this task could be structured came from the subcommittee on learning communities. There, faculty leaders solicited advice from a consultant in North Carolina with significant experience with learning communities and brought her to the college for a two-day workshop. Likewise, the college's efforts to operationally define self-direction would benefit from professional expertise.
From a scholarly perspective, aspects of this outcome are similar to the notion that

skill transfer can be improved by helping students become more aware of themselves as learners who actively monitor their learning strategies and assess their readiness for particular tests and performances…. Metacognitive approaches to instruction have been shown to increase the degree to which students will transfer to new situations without the need for explicit prompting. (Bransford, Brown, & Cocking, 2000)

Metacognition is an important component of the master student course taught in many community colleges, along with study skills. Measuring the extent to which students were taking individual responsibility for their learning would be an important first step in developing an assessment for this outcome. Certainly, a consultation with an educational psychologist or experienced practitioner on this measure might serve to speed the development along.

Program Evaluation. "Few program components are more important than evaluation" (Boylan, 2002, p. 39). In community colleges, consistent reporting on the successes, failures, and problems of these programs institution-wide keeps developmental education visible, thereby reinforcing it as an institutional priority (p. 23). Directly tied to this practice are concerns such as professional development for faculty and assessment staff and the operational and policy changes needed to implement effective student learning outcomes assessment strategies. Activities related to student learning outcomes assessment have accelerated because of accreditation-driven changes in community colleges nationwide.

At Sunshine State Community College, a working relationship between a program facilitator and the college's institutional research officer developed in the process of studying and refining the Longitudinal Tracking System (a method of examining the subsequent success of students beginning in college prep). This process led to the early identification of appropriate selection criteria for examining longitudinal data on subsequent student success, an important QEP success measure.
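The mechanics of this kind of tracking can be made concrete with a small sketch. This is a hypothetical illustration only: the column names, the cohort definition, and the records below are assumptions for demonstration and do not describe the College's actual Longitudinal Tracking System or its data.

```python
# Hypothetical longitudinal tracking sketch (illustrative data, not the College's system).
import pandas as pd

records = pd.DataFrame({
    "student_id": [1, 2, 3, 4, 5, 6],
    "first_term": ["Fall 2002"] * 5 + ["Fall 2003"],
    "placed_in_prep_writing": [True, True, True, True, False, True],
    "completed_prep_writing": [True, True, False, True, False, True],
    "passed_first_credit_english": [True, False, False, True, True, False],
})

# Selection criteria (assumed): students who placed into prep writing in Fall 2002.
cohort = records[(records["first_term"] == "Fall 2002") & records["placed_in_prep_writing"]]

completion_rate = cohort["completed_prep_writing"].mean()
subsequent_success = cohort.loc[
    cohort["completed_prep_writing"], "passed_first_credit_english"
].mean()

print(f"Prep completion rate: {completion_rate:.0%}")
print(f"Passed first credit-level English (among completers): {subsequent_success:.0%}")
```

A summary of this kind, scaled up to real cohorts, is the sort of evidence that surprised faculty in this case and eventually helped focus the QEP on college preparatory education.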

Process Evaluation. Institutional Research (IR) worked closely with developmental educators to create measures for developmental education outcomes that would gauge the effectiveness of the QEP over the next five years. Likewise, a general education outcomes assessment has been in development since 2004. However, these measures can only become the collective mirror in which educators view the successes of their courses and programs when faculty have become satisfied that the assessments are producing reliable data that can be used for program improvement.

Structural Barriers. Research Question 3 dealt with estrangements between faculty and assessment professionals. The filtering of information that occurred through the academic structures (the Learning Management and Learning Response Teams) served a sense-making function (Weick, 1995) in that only the assessment results that leaders considered important were ever discussed. However, these structures did not incorporate direct dialog between assessment professionals and faculty on college-wide issues. For example, Jeff indicated that much of his communication with faculty about outcomes was through reports and the intranet. The exceptions to this rule were activities related to general and developmental education outcomes assessment. Faculty had little regular contact with IR staff, making dialog on assessment more difficult. In another example, Terri described tense initial conversations between faculty and IR staff in trying to communicate in a common language about QEP assessment. IR staff sometimes suggested operational definitions for outcomes that they knew how to measure, rather than directly responding to the unfulfilled need, expressed by a faculty member, to measure the "unmeasurable."

Short of moving IR into Academic Affairs, communications could be improved by engineering more structured interactions between faculty and IR staff. Examples of these interactions could be formal, such as joint participation on committees or task forces, or informal, such as a presentation by IR staff to faculty, followed by a reception to encourage dialog about learning outcomes.

Capacity for Reflection. A systematic method of creating and sustaining learning communities within an institution is Collaborative Analysis of Student Learning (CASL) (Langer, Colton, & Goff, 2003). Rather than focusing upon a one-size-fits-all "best practices" approach to professional development, the CASL process causes a teacher to engage in reflective inquiry when determining how best to help individual students over their learning hurdles. This self-awareness is a defense against "habituated perception" (p. 33), which occurs when teachers see only what they expect to see and miss important clues that could lead students to learning breakthroughs. According to Senge et al., the mechanism that causes this blindness is the teacher's mental model (Senge, Kleiner, Roberts, Ross, & Smith, 1994) of how student learning takes place. It is only by verbalizing her thought processes with a supportive group of peers that the teacher's assumptions can be discerned, challenged, and revised.
Analysis of interviews showed that both assessment professionals and faculty members at Sunshine State Community College were highly reflective in discussing their respective practices. Evidence of the capacity for self-reflection could be found in the responses to the thought-provoking questions posed to both faculty and assessment professionals in individual and focus group interviews. Participants reached deep to find answers to the questions posed by this researcher, and often a more complete answer to a previous question would emerge in later conversation, after the participant had had time to think about it. Mary, in particular, was an enthusiastic proponent of reflection, both in her practice of teaching and as a component of curriculum.

While individual reflection is an important tool for better understanding one's environment, the ability of faculty and staff members to participate in collaborative reflection, or sense-making (Weick, 1995), is even more important to increasing the college's capacity to use assessment results. According to Richard Voorhees, a past president of the Association for Institutional Research, an alternative job of IR is to feed networks (Voorhees, 2003). New ideas may germinate in unpredictable ways from the seeds of ideas planted by a catalyst member, and these networks innovate more often when they exist within active, diverse communities. Thus, in brokering knowledge about the results of outcomes assessment among networks of faculty and staff members, IR could help the college determine which results were important enough for planning and action.

Faculty Assistance in Implementing Assessment. Enhancing a faculty member's predisposition to use assessment toward improved course performance is an important step in professional development that is often neglected. Although seminars on assessment techniques usually produce a lot of enthusiasm, Kurz and Banta (2004) found that they could convince faculty of the value of using assessment with some individual guidance from instructional experts. The researchers found pre-/post-measures to be effective in providing clear and convincing evidence of changes in students' learning. Further, some participating faculty remarked that students "spontaneously expressed gratitude for the feedback provided by the assessments, and others commented that their students clearly felt empowered by these experiences" (p. 93). The conclusion of the study was that successful classroom assessment should be "simple and closely tied to the course and its learning experiences" (p. 93).
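Kurz and Banta do not prescribe a particular analysis, but the value of a simple pre-/post-design is easy to illustrate. The sketch below is a minimal, hypothetical example: the scores are invented for demonstration, and the summary shown is only one of many ways an assessment professional might help a faculty member present such evidence.

```python
# Minimal pre-/post-measure sketch (invented scores, not data from this study).
from statistics import mean

pre = [52, 61, 47, 70, 58, 64, 55, 49]   # scores before the learning experience
post = [68, 72, 60, 81, 66, 79, 63, 58]  # scores for the same students afterward

gains = [after - before for before, after in zip(pre, post)]

print(f"Mean gain: {mean(gains):.1f} points")
print(f"Students who improved: {sum(g > 0 for g in gains)} of {len(gains)}")
```

Even a summary this simple gives a faculty member a direct answer to whether an instructional change coincided with a change in student performance.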

Sunshine State Community College could use a "train the trainer" approach to developing a team of such instructional experts among faculty leaders in developmental education who could assist other faculty members in adopting assessment practices. A number of faculty interviewed for this study talked at length about their experiences using classroom assessment for learning improvement. For example, Mary used portfolio assessment in one of her classes to great effect:

I guess I have as much freedom as I need right now independently in my individual courses to use various tools of assessment. Let me give you one example: In my class in the last year, I've changed the final exam to a creative project. And it is so good! I wonder if I have an example here. I do. The final exam now is no longer just a test. The students are asked to create a matrix. The students are asked to create a chart, a diagram, a PowerPoint presentation – however they want to do it – but the rules are: they have to connect the themes that they've discovered in the course they've just taken, and they connect the writers and the words underneath each of the themes so that they recognize large-order to small-order categories and connections amongst and between the writers. And I just think it's absolutely wonderful for several reasons in assessment: 1. They have to review, without being told to review. 2. That kind of critical reflection is just invaluable for students – you don't even have to call it critical reflection – you know they're doing it. 3. They have to organize the project – they have to use their higher-level thinking skills of analysis and synthesis and evaluation just to put it together. Here's what I've learned. Look, here's what I've mastered. That's a whole different way to assess students' learning. It's much more holistic – it's a way to assess deeper student learning. I think that's where we need to go. I think the real outcomes assessment looks at deep learning, not just what's on the surface. (Interview with Mary)

Mary also taught developmental courses in writing and would have been a natural "instructional expert" if provided course release time to assist other faculty in making incremental instructional changes within their classrooms.

Other ways of moving program assessment results into classrooms would be to enhance opportunities for interaction between faculty members and assessment professionals beyond the College's twin learning outcomes assessment processes (e.g., general education and QEP). While Michelle, Dean of AA Programs, had come to rely upon Institutional Research to drive instructional change, this relationship had developed through exchanges of information over time:
It's like a natural instinct now, when five years ago, people weren't sure what to do with the data or why they were even being allowed to see it. Now it's totally different because they're wanting the information, they're wanting… I have a math faculty member who I was having lunch with (working on other stuff; it was a working lunch) who was talking about something that she's doing in her math class, and she's saying I really want to track these students to see if it made a difference when I was in the classroom. So here she is already thinking about "OK, I need to be tapping into this data to see if this change was a good change. Do I need to continue doing this?" People are just kind of automatically thinking about that when they talk about change.

By getting to know the Institutional Research staff on a personal level as Michelle did, faculty members could become more likely to interact with them directly on measurement issues, further accelerating the College's progress in using the results of assessment.

Another way the college could convert assessment results into instructional change would be to have an instructional coordinator work through the college's center for teaching and learning excellence in partnership with institutional research. The contributions of a faculty peer in promoting the use of assessment results might help to transform outcomes reporting into formats that are more meaningful, and therefore more frequently read and talked about in faculty circles.

Implications for Research

How could future research "extend, clarify, broaden, or deepen this strand of research further" (Blank, 1996, p. 2, Doctoral Dissertation/Ed.S. Project Specifications)?

To strengthen the findings of this study, direct field observation of a post-assessment conference with assessment professionals and faculty would allow a closer look at the process of sense-making (Weick, 1995) among assessment practitioners. Such a study could probe the question of how collaborative groups determine which assessment findings are important enough to be actionable.

Future research could also expand the field of study into other departments of the college. The strongest developmental education programs used collaboration between faculty and student advisors to monitor and intervene in matters affecting student performance, whether cognitive or affective (Boylan, 2002, p. 58). As this best practice in developmental education has not been a primary focus of this research, future studies could examine cross-functional collaborations involving institutional research, faculty, and student services staff. Because many of these collaborations are intended to solve the problems of individual students, a potential research question for future study is at what point such collaborations identify a repeating problem as a research need, and what process they go through to articulate research questions. This strand of future research would be a more in-depth investigation into sense-making (Weick, 1995) activity within cross-functional groups.

Another related area of potential research concerns the critical underpinnings of collaboration on assessment: the resources (e.g., time and money) invested in it (Peterson, 1999; Banta, 2002; Lopez, 2003). Although interviews with faculty and assessment professionals at Sunshine State Community College support this need, the quantity and quality of resources directed at learning improvement were not primary targets of this research. Future research efforts could thus be focused upon the nature and level of resources as a determinant of a successful learning outcomes assessment program.

One last potential area for future research bears directly upon the conclusion of this study. If colleges wish institutional researchers to play a greater role in drilling the results of assessment down to classroom instructional strategies, should the analytical role of the assessment professional (acknowledging and recommending) continue to stand aloof from that of the faculty member (implementing and monitoring)? A second related question would ask whether assessment is more effective in colleges where assessment coordinators are organized under academic affairs rather than administration. If structure should follow task, as indicated by Bolman and Deal (2003), this might indeed be the case.

With the mandate for continuous improvement enshrined in accreditation principles nationwide, providing faculty and assessment professionals with the conditions for effective collaboration will be a continuing concern for college administrators for many years to come. It will also be the source of research questions for higher education researchers for at least that long.
References

Adelman, C. (1999). Answers in the tool box: Academic intensity, attendance patterns, and bachelor's degree attainment. U.S. Department of Education (OERI).
Adelman, C., Jenkins, D., & Kemmis, S. (1976). Rethinking case study: Notes from the second Cambridge conference. Cambridge Journal of Education, 6(3), 139-150.
American Association of Community Colleges. (2005). Membership brochure. Retrieved on June 12, 2005, from http://www.aacc.nche.edu/Content/NavigationMenu/AboutAACC/Membership/Memb_Broch_2005pdf.pdf
Anfara, V. A., Brown, K. M., & Mangione, T. L. (2002, October). Qualitative analysis on stage: Making the research process more public. Educational Researcher, 28-38.
Angelo, T. A. (1995, November). AAHE Bulletin, 7.
Angelo, T. A. (1999, May). Doing assessment as if learning matters most. AAHE Bulletin, 51(9). Retrieved on March 5, 2005, from http://www.aahebulletin.com/public/archive/angelomay99.asp?pf=1
Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). San Francisco: Jossey-Bass.
Bailey, T., Kienzel, G., & Marcotte, D. E. (2004, August). The return to a sub-baccalaureate education: The effects of schooling, credentials, and program of study on economic outcomes. Institute on Education and the Economy and the Community College Research Center, Teachers College, Columbia University. Paper prepared for the U.S. Department of Education.
Bailey, T., Alfonso, M., Calcagno, J. C., Jenkins, D., Keinzel, G., & Leinbach, T. (2004, November). Improving student attainment in community colleges: Institutional characteristics and policies. Paper prepared for the Lumina Foundation and the U.S. Department of Education.
Banta, T. W. (2002). Building a scholarship of assessment. San Francisco: Jossey-Bass.
Banta, T. W. (2004). Hallmarks of effective outcomes assessment. San Francisco: Jossey-Bass.
Bartlett, L. (2005, Spring). Case study method in qualitative enquiry, EDG6931.201, University of South Florida.
Bendickson, M. M. (2004). The impact of technology on community college students' success in remedial/developmental mathematics. (Unpublished doctoral dissertation, University of South Florida, Tampa).
Bensimon, E. M., Polkinghorne, D. E., Bauman, G. L., & Vallajo, E. (2004). Doing research that makes a difference. Journal of Higher Education, 75(1), 104-127.
Birnbaum, R. (1988). How colleges work: The cybernetics of academic organization and leadership. San Francisco: Jossey-Bass.
Blank, W. (1996). Doctoral Dissertation/Ed.S. Project Specifications. Author.
Boggs, G. R. (2004). Community colleges in a perfect storm. Change, 36(6), 6-11.
Bolman, L. G., & Deal, T. E. (2003). Reframing organizations: Artistry, choice, and leadership. San Francisco, CA: John Wiley & Sons.
Boughan, K. (2000). The role of academic process in student achievement: An application of structural equations modeling and cluster analysis to community college longitudinal data. AIR Professional File, (74), 1-22.
Boylan, H. R. (2002). What works: Research-based best practices in developmental education. Boone, NC: National Center for Developmental Education.
Boylan, H. R., Bonham, B. S., & White, S. R. (1999, Winter). Developmental and remedial education in postsecondary education. New Directions for Higher Education, 108.
Boyatzis, R. E. (1998). Transforming qualitative information: Thematic analysis and code development. Thousand Oaks, CA: Sage Publications.
Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (2000). How people learn: Brain, mind, experience, and school. Washington, D.C.: National Academy Press.
Brothen, T., & Wambach, C. A. (2004). Refocusing developmental education. Journal of Developmental Education, 28(2), 16-18, 20, 22, 33.
Chen, S. (2004). Research methods: Step by step. Dubuque, IA: Kendall/Hunt.
Chen, X., & Carroll, C. D. (2005). First generation students in postsecondary education: A look at their transcripts. U.S. Department of Education (PEDAR).
Chickering, A. W., & Gamson, Z. F. (1987). Seven principles for good practice in undergraduate education. AAHE Bulletin, 39(7), 3-7.
Cleary, T. (2005, November). Navigating through a SACS accreditation review. Presentation to the Florida Association for Community Colleges Institutional Effectiveness Commission Fall Meeting.
Cohen, A. M., & Brawer, F. B. (1996). The American community college (3rd ed.). San Francisco, CA: Jossey-Bass.
Community College Survey of Student Engagement Institutional Report: Overview. (2004). Community College Leadership Program, University of Texas at Austin. Retrieved on May 24, 2005, from http://www.ccsse.org
Creswell, J. W. (1998). Qualitative inquiry and research design: Choosing among five traditions. Thousand Oaks, CA: Sage Publications.
Ewell, P. T. (1997). Organizing for learning: A new imperative. AAHE Bulletin, 50(4), 3-6.
Festinger, L. (1957). A theory of cognitive dissonance. Stanford, CA: Stanford University Press.
Florida Department of Education. (2004). Accountability outcome measure 4, part 1. Tallahassee, FL.
Florida Department of Education. (2005). Student headcount by ethnicity: Percent change from 2000-2001 to 2004-2005. Retrieved on December 24, 2005, from http://www.firn.edu/doe/workforce/pdf/minority_enrollment_completion_charts.pdf
Florida Statute 1008.30, 4a. (2006). Retrieved on August 20, 2006, from http://www.leg.state.fl.us/statutes/
Frost-Knappman, E., & Shrager, D. S. (1998). A concise encyclopedia of legal quotations. New York: Barnes & Noble.
Grunwald, H., & Peterson, M. W. (2003). Factors that promote faculty involvement in and satisfaction with classroom student assessment. Research in Higher Education, 44(2), 173-204.
Hardin, C. (1998). Who belongs in college: A second look. In J. Higby & P. Dwinnel (Eds.), Developmental education: Preparing successful college students. Columbia, SC: National Resource Center for the First-Year Experience and Students in Transition.
Harvey-Smith, A. B. (2002). An examination of the retention literature and application to student success. Baltimore, MD: Community College of Baltimore County.
Howard, R. (Ed.). (2001). Institutional research: Decision support in higher education. Tallahassee, FL: Association for Institutional Research.
Hrabowski, F. (2005). Interview with Jim Lehrer, PBS News Hour, August 30, 2005.
Jones, D. (2005). State fiscal outlooks from 2005 to 2013: Implications for higher education. NCHEMS News, 22, 1-6.
Katsinas, S. G. (2003). Two-year college classifications based on institutional control, geography, governance, and size. New Directions for Community Colleges, (122), 17-28.
Kezar, A. J. (2001). Understanding and facilitating organizational change in the 21st century: Recent research and conceptualizations. San Francisco: Jossey-Bass.
Kezar, A., & Talburt, S. (2004). Questions of research and methodology. Journal of Higher Education, 75(1), 1-6.
Kurz, L., & Banta, T. W. (2004). Decoding the assessment of student learning. New Directions for Teaching and Learning, 98.
Langer, G. M., Colton, A. B., & Goff, L. S. (2003). Collaborative analysis of student work. Alexandria, VA: Association for Supervision and Curriculum Development.
Larson, E. J., & Greene, A. L. (2002). Faculty involvement in developing and measuring student learning outcomes. Unpublished paper, presented at the AIR Forum, Toronto, Canada, June 5, 2002.
League for Innovation in the Community College. (2004, August). An assessment framework for the community college: Measuring student learning and achievement as a means of demonstrating institutional effectiveness. White paper retrieved on June 14, 2005, from http://www.league.org/publication/whitepapers/files/0804.pdf
Lobowski, J., Newsome, B., & Brooks, B. (2002). Models, strategies, and tips for improving institutional climate (audio cassette), Workshop CS-24, 2002 SACS Annual Meeting.
Lopez, C. L. (1999). A decade of assessing student learning: What we have learned; what's next? Chicago, IL: North Central Association of Colleges and Schools. Retrieved on December 14, 2005, from http://www.ncahlc.org/AnnualMeeting/archive/ASSESS10.PDF
Lopez, C. L. (2000, April). Assessing student learning: Using the commission's levels of implementation. 105th Annual Meeting of the North Central Association of Colleges and Schools, Commission on Institutions of Higher Learning, Chicago, IL. Retrieved on December 14, 2005, from http://www.ncahlc.org/download/Lopez_Levels_2000.pdf
Lopez, C. L. (2003, April). Assessment of student academic achievement: Assessment culture matrix. Chicago, IL: North Central Association of Colleges and Schools. Retrieved on December 14, 2005, from http://www.ncahlc.org/download/AssessMatrix03.pdf
Leslie, D. W. (2002, November). Thinking big: The state of scholarship on higher education. Paper prepared for the Annual Meeting of the Association for the Study of Higher Education, Sacramento, CA.
Maki, P. L. (2004, September 23). Building and sustaining a culture of evidence. Faculty Workshop, Hillsborough Community College.
Marti, C. N. (2004). Overview of the CCSSE instrument and psychometric properties. Retrieved on May 25, 2005, from http://www.ccsse.org/aboutsurvey/psychometrics.pdf
McCabe, R. H. (2003). Yes we can! A community college guide for developing America's underprepared. Phoenix, AZ: League for Innovation in the Community College.
Merriam, S. B. (1998). Qualitative research and case study applications in education. San Francisco: Jossey-Bass.
Merriam, S. B. (2002). Qualitative research in practice: Examples for discussion and analysis. San Francisco: Jossey-Bass.
Milton, C. (2004). Introduction to faculty learning communities. New Directions for Teaching and Learning, 2004(97), 5-24.
Mintzberg, H. (1989). Mintzberg on management: Inside our strange world of organizations. New York: Free Press.
Morest, V. S. (2005). Realizing the full potential of institutional research at community colleges (presentation). League for Innovation in the Community College 2005 Conference, March 6.
Peterson, M. W., Augustine, C. H., Einarson, M. K., & Vaughan, D. S. (1999). Designing student assessment to strengthen institutional performance in associate of arts institutions. NCPI (OERI), U.S. Department of Education, Technical Report Number 5-07.
Peterson, M. W. (2000). Institutional climate for student assessment (survey), Stanford University, National Center for Postsecondary Improvement, Palo Alto, CA.
Polanyi, M. (1962). Personal knowledge: Towards a post-critical philosophy. Chicago: University of Chicago Press.
Roueche, J. E., & Roueche, S. D. (1994). Between a rock and a hard place. Washington, D.C.: American Association of Community Colleges.
Roueche, J. E., Roueche, S. D., & Ely, E. E. (2001). Pursuing excellence: The Community College of Denver. Community College Journal of Research & Practice, 25(7), 517-537.
Senge, P. M. (2000). Schools that learn. New York, NY: Doubleday.
Senge, P. M., Kleiner, A., Roberts, C., Ross, R. B., & Smith, B. J. (1994). The fifth discipline fieldbook. New York: Doubleday.
Shugart, S. (2005). Leadership: Hands on, hearts in (Pre-Conference Workshop), The Heart of Leadership: Building Community, Chair Academy Conference, March 2.
Smith, D., & Eder, D. (2004). Assessment and program review: Linking the two. In Hallmarks of effective outcomes assessment. San Francisco, CA: Jossey-Bass.
Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: Sage.
Southeastern Association of Community College Researchers. (2005). The ABCs of educational research: Accreditation, benchmarking, case studies. Conference brochure retrieved on June 17, 2005, from http://www.tcc.edu/welcome/collegeadmin/OIE/SACCR/
Southern Association of Colleges and Schools (SACS). (2001). Principles of accreditation. Retrieved on December 24, 2005, from http://www.sacscoc.org/pdf/PrinciplesOfAccreditation.PDF
Southern Association of Colleges and Schools (SACS). (2005). Resource manual for the principles of accreditation: Foundations for quality enhancement. Decatur, GA.
Statewide Course Numbering System (SCNS). (2005). Excel listing of SLS1101 courses. Retrieved on August 28, 2005, from http://scns.fldoe.org/scns/public/pb_index.jsp
Suskie, L. (2004). Assessing student learning: A common sense guide. Bolton, MA: Anker Publishing.
Tagg, J. (2005). Venture colleges: Creating charters for change in higher education. Change, 37(1), 34-43.
Tinto, V. (1993). Leaving college: Rethinking the causes and cures of student attrition. Chicago: University of Chicago.
Tinto, V. (2004). Student retention and graduation: Facing the truth, living with the consequences (Occasional Paper). Washington, DC: The Pell Institute. Retrieved on September 15, 2004, from http://www.pellinstitute.org
Treat, T., Kristovich, S., & Henry, M. (2004). Knowledge management and the learning college. Community College Journal, 75(2), 42-46.
von Wright, G. H. (1971). Explanation and understanding. London: Routledge & Kegan Paul.
Voorhees, R. (2003). Feeding networks: Institutional research and uncertainty. Opening plenary address, Meeting of the Association for Institutional Research, May 18, 2003.
Wallin, D. L. (2005). Adjunct faculty in community colleges: An academic administrator's guide to recruiting, supporting, and retaining great teachers. Bolton, MA: Anker.
Weick, K. E. (1979). The social psychology of organizing (2nd ed.). Reading, MA: Addison-Wesley.
Weick, K. E. (1995). Sensemaking in organizations. Thousand Oaks, CA: Sage Publications.
Wergin, J. F. (2005). Higher education: Waking up to the importance of accreditation. Change, 37(3), 35-41.
Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. New York: Cambridge University Press.
Wenger, E., Snyder, W., & McDermott, R. (2002). Cultivating communities of practice: A guide to managing knowledge. Boston, MA: Harvard Business School Publishing.
Wilkins, A. L., & Ouchi, W. G. (1983). Efficient cultures: Exploring the relationship between culture and organizational performance. Administrative Science Quarterly, 28(3), 468-481.
Windham, P. (2002, Spring). Bridging the gap: An analysis of Florida's college preparatory program students, output, costs. Visions: The Journal of Applied Research for the Florida Association of Community Colleges (FACC). Tallahassee, FL: FACC.
Windham, P. (2005). CCSSE highlights on the Florida consortium, 2004. Newsletter, Florida Department of Education.
Windham, P. (2005). Number and percentage of course sections taught by instructional staff: Fall term 2003-2004 (ad hoc report). Tallahassee, FL: Florida Community Colleges.
Yeaman, A. R., Hlynka, D., Anderson, J. H., Damarin, S. K., & Muffoletto, R. (2001). Postmodern and poststructural theory. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology (pp. 253-295). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
228 Bibliography Baker, G. A. & Associates. (1995). Team building for quality Washington, D.C.: American Association of Community Colleges. Burke, J.C. & Minassians, H. P. (Summer, 2004). Implications of state performance indicators for community college assessment. New Directions for Community Colleges, 126. Council of Regional Accrediting Commissi ons. (2004). Regional Accreditation and Student Learning: Improving Institu tional Practice. Washington, D.C. Crocker, L. & Algina, J. (1986). Introduction to classical & modern test theory. Belmont, CA: Wadsworth Group. Cronbach, L.J. & Meehl, P.E. (1955). Cons truct validity in psychological tests. Psychological Bulletin, 52, 281-302. Glass, G.V & Hopkins, K.D. (1996). Statistical methods in education and psychology (3rd ed.). Needham Height s, MA: Allyn & Bacon. Hudson, L. & Hurst, D. (2002, January). Pers istence of employees who pursue college study. Stats in Brief, National Center for Education Statistics. Retrieved on March 27, 2002 from http://nces.ed,gov/pubsearch /pubsinfo.asp?pubid=2002118 Hughes, R. & Pace, C.R. (2003, July-August). Using NSSE to study student retention and withdrawal. Assessment Update, 15 (9). Kaminski, K., Seel, P. & Cullen, K. (2003). T echnology literate studen ts? Results from a survey. EDUCAUSE Quarterly 26(3), 34-40.

PAGE 239

229 King, P.M. & Kitchener, K.S. (1994). Developing reflective j udgment: Understanding and promoting intellectual growth and crit ical thinking in adolescents and adults. San Franciso, CA: Jossey-Bass. Kuh, G.D. (2001). The National Survey of St udent Engagement: Conceptual framework and overview of psychometric prope rties. Retrieved on May 25, 2005, from http://www.indiana.edu/~nsse/nsse_2001/pdf/framework-2001.pdf Lynch, C.L, Wolcott, S.K. & Huber, G.E. (1999, June). Assessing the development of critical thinking and professional prob lem solving skills (poster session). American Association of Higher E ducation Assessment Conference. McLaughlin, G.W. & Howard, R.D. (2004). People, processes and managing data (2nd ed.). Tallahassee, FL: Associa tion for Institutional Research. Shelly, P.H. (Jan. 7, 2005). Colleges n eed to give students intensive care. Chronicle of Higher Education, 51 (18).

PAGE 240

Appendices


Appendix A

Individual Interview Consent Form

Informed Consent
Social and Behavioral Sciences
University of South Florida
Information for People Who Take Part in Research Studies

The following information is being presented to help you decide whether or not you want to take part in a minimal risk research study. Please read this carefully. If you do not understand anything, ask the person in charge of the study.

Title of Study: Creating an Assessment Culture to Enhance the Quality of Developmental Reading and Writing: A Community College Case Study

Principal Investigator: Pat Gordin

Study Location(s): [Sunshine State Community College]

You are being asked to participate in this study because you have recently collaborated with other faculty and assessment professionals on student learning outcomes assessment to improve students' developmental reading or writing success.

General Information about the Research Study
The purpose of this research study is to better understand the development of a culture of assessment among faculty and assessment/institutional [research] staff members in improving students' developmental reading and writing success.

Plan of Study
Individual Interview: The data collection portion of the study is expected to last two months, from February through March 2006. During that time, the principal researcher will arrange convenient times for interviewing faculty members (full-time and part-time) who teach reading, writing, or study skills development and any assessment/IE staff members or leaders who may be able to provide background and insights into the study's purpose. Interviews will take place at the work location of each participant, at pre-arranged times. The interviews may last up to one hour, and voice responses will be recorded on a tape recorder and/or digital medium. Within two weeks of an interview, the researcher will provide a written summary to each participant for verification and correction of the voice data collected. She will then follow up with a phone call to you to collect and record any corrections you wish to provide.

Payment for Participation
You will not be paid for your participation in this study.


Appendix A (Continued)

Benefits of Being a Part of this Research Study
By participating in this study, you may increase your awareness of the social learning aspects of collaboration on assessment. Important findings from this study will be shared with College faculty and staff members. Institutional [Research] will receive a copy of the final dissertation.

Risks of Being a Part of this Research Study
There is minimal risk involved to participants of this study. A foreseeable risk, however, is the possibility that an emergency may necessitate the cancellation and re-scheduling of an interview.

Confidentiality of Your Records
Your privacy and research records will be kept confidential to the extent of the law. Authorized research personnel, employees of the Department of Health and Human Services, and the USF Institutional Review Board, its staff, and other individuals acting on behalf of USF may inspect the records from this research project.

The results of this study may be published. However, the data obtained from you will be combined with data from others in the publication. The published results will not include your name or any other information that would personally identify you in any way. At the start of your personal interview, you will be asked to provide a pseudonym that will be used to identify any information you provide. The principal researcher and her major professor will have access to interview recordings and transcriptions, which will be stored in a locked drawer in the office of the principal investigator.

Volunteering to Be Part of this Research Study
Your decision to participate in this research study is completely voluntary. You are free to participate in this research study or to withdraw at any time. There will be no penalty or loss of benefits you or the College are entitled to receive if you stop taking part in the study.

Questions and Contacts
If you have any questions about this research study, contact Pat Gordin at (239) 489-9008 (work) or (239) 495-2969 (home). If you have questions about your rights as a person who is taking part in a research study, you may contact the Division of Research Compliance of the University of South Florida at (813) 974-5638.


Appendix A (Continued)

Consent to Take Part in This Research Study
By signing this form I agree that:
I have fully read or have had read and explained to me this informed consent form describing this research project.
I have had the opportunity to question one of the persons in charge of this research and have received satisfactory answers.
I understand that I am being asked to participate in research. I understand the risks and benefits, and I freely give my consent to participate in the research project outlined in this form, under the conditions indicated in it.
I have been given a signed copy of this informed consent form, which is mine to keep.

_________________________ ________________________________________
Signature of Participant    Printed Name of Participant    Date

Investigator Statement
I have carefully explained to the subject the nature of the above research study. I hereby certify that to the best of my knowledge the subject signing this consent form understands the nature, demands, risks, and benefits involved in participating in this study.

_________________________ ________________________________________
Signature of Investigator    Printed Name of Investigator    Date
Or authorized research investigator designated by the Principal Investigator


Appendix B

Focus Group Interview Consent Form

Informed Consent
Social and Behavioral Sciences
University of South Florida
Information for People Who Take Part in Research Studies

The following information is being presented to help you decide whether or not you want to take part in a minimal risk research study. Please read this carefully. If you do not understand anything, ask the person in charge of the study.

Title of Study: Creating an Assessment Culture to Enhance the Quality of Developmental Reading and Writing: A Community College Case Study

Principal Investigator: Pat Gordin

Study Location(s): [Sunshine State Community College]

You are being asked to participate in this study because you have recently collaborated with other faculty and assessment professionals on student learning outcomes assessment to improve students' developmental reading or writing success.

General Information about the Research Study
The purpose of this research study is to better understand the development of a culture of assessment among faculty and assessment/institutional [research] staff members in improving students' developmental reading and writing success.

Plan of Study
Focus Group Interview: The data collection portion of the study is expected to last two months, from February through March 2006. During that time, the principal researcher will convene a focus group to discuss the process of data analysis and interpretation in assessment. Members of this group will include faculty members (full-time and part-time) who teach reading, writing, or study skills development and any assessment/[IR] staff members or leaders who may be able to provide background and insights into the study's purpose. This interview will take place at the work location of the participants, at a pre-arranged time. The interview may last up to one hour, and voice responses will be recorded on a tape recorder and/or digital medium.

Payment for Participation
You will not be paid for your participation in this study.

Benefits of Being a Part of this Research Study
By participating in this study, you may increase your awareness of the social learning aspects of collaboration on assessment. Important findings from this study will be shared with College faculty and staff members. [Institutional Research] will receive a copy of the final dissertation.


Appendix B (Continued)

Risks of Being a Part of this Research Study
There is minimal risk involved to participants of this study. A foreseeable risk, however, is the possibility that an emergency may necessitate the cancellation and re-scheduling of this interview.

Confidentiality of Your Records
Your privacy and research records will be kept confidential to the extent of the law. Authorized research personnel, employees of the Department of Health and Human Services, and the USF Institutional Review Board, its staff, and other individuals acting on behalf of USF may inspect the records from this research project.

The results of this study may be published. However, the data obtained from you will be combined with data from others in the publication. The published results will not include your name or any other information that would personally identify you in any way.

The principal researcher and her major professor will have access to interview recordings and transcriptions, which will be stored in a locked drawer in the office of the principal investigator.

Volunteering to Be Part of this Research Study
Your decision to participate in this research study is completely voluntary. You are free to participate in this research study or to withdraw at any time. There will be no penalty or loss of benefits you or the College are entitled to receive if you stop taking part in the study.

Questions and Contacts
If you have any questions about this research study, contact Pat Gordin at (239) 489-9008 (work) or (239) 495-2969 (home). If you have questions about your rights as a person who is taking part in a research study, you may contact the Division of Research Compliance of the University of South Florida at (813) 974-5638.


Appendix B (Continued)

Consent to Take Part in This Research Study
By signing this form I agree that:
I have fully read or have had read and explained to me this informed consent form describing this research project.
I have had the opportunity to question one of the persons in charge of this research and have received satisfactory answers.
I understand that I am being asked to participate in research. I understand the risks and benefits, and I freely give my consent to participate in the research project outlined in this form, under the conditions indicated in it.
I have been given a signed copy of this informed consent form, which is mine to keep.

_________________________ ________________________________________
Signature of Participant    Printed Name of Participant    Date

Investigator Statement
I have carefully explained to the subject the nature of the above research study. I hereby certify that to the best of my knowledge the subject signing this consent form understands the nature, demands, risks, and benefits involved in participating in this study.

_________________________ ________________________________________
Signature of Investigator    Printed Name of Investigator    Date
Or authorized research investigator designated by the Principal Investigator


Appendix C

Participant Recruitment Brochure

Dear QEP Team and Communications Faculty:

The experiences you've shared with other faculty and staff members in building your College's Quality Enhancement Plan (QEP) have provided you with unique expertise. Your college, by virtue of its SACS reaffirmation schedule, placed you on the frontier of knowledge in conducting student learning outcomes assessment for the improvement of developmental reading and writing. I invite you to share some of that expertise with me as I investigate the ways in which assessment support and collaboration work best.

You see, I am the institutional effectiveness director at Edison College, and I would like nothing more than to share this knowledge with your institution, mine, and others. The title of my dissertation is "Creating an Assessment Culture to Enhance the Quality of Developmental Reading and Writing: A Community College Case Study." I am close to completing my Ph.D. in Curriculum and Instruction (with a specialization in Higher Education) at USF.

What you should know about this research is that I hope to meet with faculty members at your college, full- and part-time, who teach developmental reading, writing, or study skills. I also want to meet with others involved in the development of the QEP. My meeting with you would take place at your convenience (in February or March) on your campus. The meeting would last not more than an hour.

If you will consider participating, please send me an e-mail message at pgordin@edison.edu. I can also be reached at Suncom 724-1008, (239) 489-9008 (work), or (239) 495-2969 (home). You may discontinue participation at any time by calling or e-mailing me.

Thank you!

Pat Gordin

(Picture here)


About the Author

Patricia Gordin received a Bachelor's Degree in Psychology (with a minor in Child Development) from Rockford College in 1973, an MBA from the University of South Florida in 1993, and an M.Ed. in Curriculum and Instruction (Educational Technology concentration) from Florida Gulf Coast University in 1999. She has been involved with community college institutional research since 1994, currently serving as District Director of Institutional Effectiveness and Program Development for Edison College, located in Southwest Florida.

Ms. Gordin has served in many statewide organizations, including the Research Committee of the Florida Community College System and the Florida Association for Institutional Research (2004 President). She also served as the Chair of the Florida Association of Community Colleges Institutional Effectiveness Commission in 2005 and 2006. In her leadership of these organizations, she has hosted or co-hosted numerous conferences on institutional research and effectiveness issues and was a presenter at the 2005 Annual Conference of the Florida Association for Institutional Research in Orlando.

