Rajab, Aziza A.
A methodology for developing a nursing education minimum dataset [electronic resource] / by Aziza A. Rajab.
[Tampa, Fla.] : University of South Florida, 2005.
Thesis (Ph.D.)--University of South Florida, 2005.
Includes bibliographical references.
Text (Electronic thesis) in PDF format.
System requirements: World Wide Web browser and PDF reader.
Mode of access: World Wide Web.
Title from PDF of title page.
Document formatted into pages; contains 110 pages.
Adviser: Mary E. Evans, Ph.D.
A Methodology for Developing a Nursing Education Minimum Dataset

by

Aziza A. Rajab

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, College of Nursing, University of South Florida

Major Professor: Mary E. Evans, Ph.D.
Patricia Burns, Ph.D.
Arthur Shapiro, Ph.D.
Jason Beckstead, Ph.D.

Date of Approval: November 10, 2005

Keywords: coding scheme, classification systems, Delphi, focus group, nursing database, nursing data element, nursing education, online survey, ontology, taxonomy, unified nursing language

Copyright 2005, Aziza A. Rajab
Acknowledgments

Thanks go first to God Almighty, Allah the most merciful, compassionate, and gracious, for granting me the health and ability to conduct this study. A very special thank you goes to my advisor, chair, and Associate Dean for Research & Doctoral Study in the College of Nursing, Dr. Mary E. Evans, for her guidance, supervision, and instruction throughout my doctoral studies and this research. As a fine academician, she inspired me in many ways for which I am grateful and thankful. She exemplifies humanism and commitment. Thanks also to Dr. Patricia Burns, Dean of the College of Nursing and a committee member, for her administrative concern and direction. I am grateful, too, to Dr. Jason Beckstead for his time, efforts, and patience, but especially for his encouragement of my inquisitiveness. It was an honor to work with a statistician like him. My special thanks and gratitude go to Dr. Arthur Shapiro for his time and support, as well as the appreciation that he showed me throughout the study phases. My sincere thanks go also to Dr. Linda Moody for her guidance and supervision throughout my doctoral study. She not only mentored me, but also opened the window of opportunity for me to work with the National League for Nursing on their project to develop a Nursing Education Minimum Dataset. As a member of that task force group, I am grateful to her as the chair of the task force group and to all the members. Further, I am grateful to all committee members whose comments helped make this a scientifically sound, clear, and readable work.
My deepest thanks to my family, especially to my father for encouraging autonomy, to my mother for caring and for inspiring me, to my beloved husband for his unlimited support and understanding, and finally to my wonderful children for sparing me our precious family time to finish this study.
Table of Contents

List of Tables iii
List of Figures iv

Chapter One: Introduction 1
    Background 1
    Statement of the Problem 9
    Statement of the Purpose 9
    Specific Study Aims 10
    Significance of the Study 10
    Summary 12

Chapter Two: Review of Literature 14
    Minimum Dataset in Nursing and Health Care 15
        Definition and Value 15
        Steps for Developing MDS 17
        NMDS in Different Countries 21
        Benefits of NMDS 22
    Taxonomies, Standardized Languages, and Classification Systems 23
    Focus Group Method 29
    The Delphi Method 30
        Types of Delphis 31
            The Classical Delphi 31
            The Policy Delphi 31
            The Decision Delphi 32
            The Real Time or Electronic Delphi 32
        Delphi Survey Benefits 34
        Online Delphi Survey Benefits 35
    Multidimensional Scaling 35
        History of Multidimensional Scaling 37
        How to Use MDS 37
    Conceptual Framework 39
    Definition of Terms 44
    Summary 46

Chapter Three: Design and Methodology 48
    Criteria for Sample Selection of Experts 48
    Population and Sample Size 49
    Methods and Steps for Building NEMDS 50
        1) Identify educational concepts and data elements 51
        2) Define the data elements as nursing education terminologies 52
        3) Coding 53
        4) Building a Taxonomy 53
        5) Empirical and theoretical validation 54
        6) Disseminate and aggregate the data 55
    Data Collection Instrument 57
    Institutional Review Board 61
    Data Collection Procedures 61
    Data Analysis 63
    Summary 65

Chapter Four: Discussion and Conclusion 66
    Summary 68

Chapter Five: Limitations and Recommendations for Further Research 69
    Summary 71

References 72

Appendix 96
    Appendix A: Nursing Education-Related Terminologies 97

About the Author End Page
List of Tables

Table 1 List of Datasets in Nursing and Health Care 8
Table 2 List of Taxonomies and Classification Systems in Nursing 27
List of Figures

Figure 1. Example of Adaptation of Systems Model to Develop a Taxonomy for Building NEMDS 43
Figure 2. Steps for Building Nursing Education Minimum Dataset 56
A Methodology for Developing a Nursing Education Minimum Dataset

Aziza A. Rajab

ABSTRACT

Globally, health care professionals, administrators, educators, researchers, and informatics experts have found that minimum datasets and taxonomies can solve the problem of data standardization required in building an information system to advance a discipline's body of knowledge. Disciplines continuously gather complex data, but data collected without an organizational context do not increase the knowledge base. Therefore, a demand exists for developing minimum datasets, controlled vocabularies, taxonomies, and classification systems. To fulfill nursing's needs for standardized, comparable data, two minimum datasets are used in nursing for organizing, classifying, processing, and managing information for decision-making and advancing clinical nursing knowledge. No minimum dataset in nursing education currently exists.

With common definitions and a taxonomy of nomenclature related to nursing education, research findings on similar topics can be aggregated across studies and settings to observe overall patterns. Understanding patterns will allow educators, researchers, and administrators to interpret and compare findings, facilitate evidence-based changes, and draw significant conclusions about nursing education programs, schools, and educational experiences.

This study proposes a generic methodology for building a Nursing Education Minimum Dataset (NEMDS) by exploring experiences of developing various minimum datasets. This study adapted the systems model as the conceptual framework for building the taxonomy and classification system of nursing education essential data elements to guide the analysis of structure, process, and outcome in nursing education. The study suggested using focus groups, an online Delphi survey, and the statistical techniques of multidimensional scaling and kappa. The study presented these steps: identifying educational concepts and data elements; defining data elements as nursing education terminologies; building the taxonomy; conducting an empirical and theoretical validation; and disseminating and aggregating the data in national datasets.

The proposed methodology to build an NEMDS meets the criteria of having a nursing education dataset that is mutually exclusive, exhaustive, and consistent with the concepts that help nursing educators and researchers describe, explain, and predict outcomes in the discipline of nursing education. It can help transform simple information into meaningful knowledge that can be used and compared by school, state, or country to advance nursing education research and practice nationally or internationally.
Chapter One: Introduction

Background

Healthcare professionals, administrators, educators, researchers, and informatics experts around the world have reached consensus that solving the problem of data standardization, required in building an information system to advance the body of knowledge and science in any discipline, relies on the availability of structured, standardized, and computerized data (AACN, 1997; Gassert, 1998; IMIA, 1999; Pew, 1998; Staggers, Gassert & Curran, 2002). More importantly, we need methods and tools that allow data to be collected nationally and/or internationally in a comparable way across various populations, settings, regions, and perhaps across some disciplines (Werley & Lang, 1988; PITAC, 2004; Stevanovic et al., 2005).

In recent decades, with the revolutions of electronic technology, there has been a strong movement toward the development of classical taxonomies, common vocabularies, and minimum datasets (MDS) to solve the problem of organizing, collecting, storing, retrieving, and aggregating data (Goossen et al., 1998; Colling, 2000; Wheeler, 2004; PITAC, 2004). These informatics researchers have found that taxonomies and MDS can solve the following problems: the existence of huge amounts of unstructured data in enterprise computer networks; wasted time re-creating overlapping information; and the lack of tools equivalent to data mining, data categorization, and data visualization (Patience & Chalmers, 2002). They emphasized the need for tools to organize information and avoid overloads, especially with ambiguous words, and the need for more complex search engines
with extreme recall, metadata search, link ranking, taxonomies, and MDS that can help find information both for known items and for discovery of topics, and where interactive and iterative browsing of subject category arrangements can trigger associations and relationships (Edols, 2001; Rapoza, 2002; Lehman, 2003). Information technology managers confirm that people spend more than two hours per day searching for information. One Delphi study's findings showed that 73% of people found finding information difficult, 28% said that the main impediments were bad tools, and 35% said that data changes constantly (Delphi study group report, 2002).

In the information-intensive health care industry there is a growing movement toward the use of electronic records in health care services, education, administration, and research to collect and handle data. However, data collected without a theoretical context for organizing the data do not add to the knowledge base (Brailer, 2004; Moody, Slocumb, Jackson & Berg, 2004). Therefore there is a demand for the development and use of MDS, controlled vocabularies, taxonomies, and classification systems. They are essential dimensions of integrated and coordinated health care service and education systems for organizing, classifying, processing, and managing information for decision-making purposes (Saba, 1992; Brailer, 2004; Moody, Slocumb, Jackson & Berg, 2004; PITAC, 2004).

The National Dataset Development Program at the National Health Service (NHS) is continuously working to transform various national data from information to knowledge using various tools and statistical methods, one of which is establishing guidelines for developing MDS (White, 2005). The underlying assumption of any MDS, according to the NHS, is that it answers a clear information need; enables reliable
comparative analysis of individuals, services, or organizations; enables collecting and measuring of performance and outcomes; and permits sharing and aggregating consistent information within and across domains (White, 2005).

Healthcare systems and universities are being transformed by information technology systems (Chaffin & Maddux, 2004; PITAC, 2004). The arrival of information technology in health care settings in the late 1980s altered traditional nursing education, which had focused primarily on patient care, to incorporate computer skills (Chaffin & Maddux, 2004). The term nursing informatics evolved in the 1990s; it integrated nursing science with computer science and included the vast databases available to nurses. Computers, the internet, software, and online journals became the new vocabularies for nurses to know and to learn from. Information technology is now the vital source of education and communication for nurses, from accessing online information on degree programs, knowing national nursing organization web sites, and demonstrating high standards of advanced nursing skills, to the growing number of online web-based courses and distance education programs that are taught and evaluated from homes or offices (Carlton, Rayan & Siltzberg, 1998; Chaffin, 2001; Staggers, Gassert & Curran, 2002).

Information technology brought useful innovations to health care in general and to nursing as a professional discipline in particular. Nurses are using computers for assessing and monitoring patients, administering medications, providing nursing procedures, documenting information, and communicating with patients and hospital staff (Halstead & Coudret, 2000). Because information technology is one of the 21 competencies required for all health professionals as affirmed by the Pew Health Professions Commission (Pew, 1998), there is a growing need for nurses to have
informatics knowledge and skills and to be involved in the technology for designing and accepting a standardized nursing language (Sullivan, 1997; Moorehead, Head, Johanson & Maas, 1998; Staggers, Gassert & Curran, 2002; PITAC, 2004).

Harriet Werley started the initiative of developing MDS in nursing in the early 1980s (Werley, 1986), and her work was the first effort to fulfill the need for systematic data collection, organization, storage, and retrieval of standardized nursing data that are essential to quality in all structure, process, and outcome components of nursing care. It was not until 1998 that the American Nurses Association (ANA) steering committee on databases to support clinical nursing practice pioneered the development of nursing dataset taxonomies, classification systems, coding, and nomenclatures for creating a unified nursing language system (Cohen, Manion, & Morrison, 2000). The ANA also promoted the inclusion of nursing-related data in large health-related databases (Averill et al., 1998), facilitating the collection and analysis of massive amounts of data via large national computer networks. The committee recognized the need for nurses to participate in the development of national health care datasets by developing and disseminating standardized vocabularies suitable for inclusion in computer-based systems (Zielstorff et al., 1995). The purpose of a standardized, computerized essential nursing dataset is to develop an organized information system that facilitates assessment of nursing services and determines nursing's contribution to general health outcomes (Coenen & Schoneman, 1995; PITAC, 2004). Standardized vocabularies are essential for computer decision-support tools using sharable protocols that reduce error rates, lower costs, and improve the quality of health care (PITAC, 2004). According to the ANA, without a commonly
accepted NMDS, there will be gaps in nursing data at any or all levels of the systems, making it impossible to assess the effects of nursing care on healthcare outcomes.

To reduce the present gap in nursing data, there have been various initiatives to develop several MDS in nursing globally (Goossen et al., 1998). The development of the Nursing Minimum Dataset (NMDS), used widely in clinical nursing practice, and the development of the Nursing Management Minimum Database (NMMD), used in nursing administration services, fulfilled the need for systematic collection, storage, and retrieval of standardized nursing data. Both of these nursing datasets are considered essential components of the systems approach to the study of inputs, process, and outcomes in nursing and healthcare. They are an essential element in classifying and advancing clinical nursing knowledge (Werley, Devine, & Zorn, 1991; Junger, 2004). While the vocabularies used in clinical nursing datasets are not universally accepted, there is a growing movement toward adoption of a unified nursing language and an International Classification of Nursing Practice (ICNP) (Hardiker, 2004; Moody, 2004; PITAC, 2004). The main reason for having commonly accepted NMDS systems is to produce comparable data for defining nursing's contribution to patient care specifically and to healthcare outcomes generally, and for advancing both nursing practice knowledge and professional growth (Anderson & Hannah, 1993; Coenen et al., 2001; Maas & Delaney, 2004; PITAC, 2004). It is evident, based on several studies (Hardiker, 2003; Burgun, 2001; Goossen, 1998; Baernholdt, 2003; deClercq, 2000), that the process of developing any valid and reliable MDS is complex and dynamic, and requires several stages and iterations. Building a conceptual ontology and designing a taxonomy of terms and specific data elements of common vocabularies and standard language is
the primary step in building any MDS in any field or discipline (Goossen, 1998; Burgun, 2001; Hardiker, 2003; Moody, 2004).

To standardize the language, vocabularies must be recorded in standard ways so their meaning can be shared between health professionals in a manner that is interoperable and computable (i.e., able to be manipulated and combined with other data by a computer). The language must be coded in a standard manner so that, even if the concepts are referred to by different local names, displayed in different local languages, or depicted in different local alphabets, they mean and refer to the same variables each time. The availability of a core set of standard terms that can be incorporated into the system at every level to describe concepts is crucial to the process of MDS building (Goossen, 1998; Burgun, 2001; Hardiker, 2003; Moody, 2004).

The traditional classification systems used to code medical diagnoses, procedural interventions, and outcomes are not adequate (PITAC, 2004), because in clinical settings providers historically recorded all clinical encounters in detailed textual descriptions, then summarized and coded the information manually by selecting entries from existing classifications such as ICD-9-CM, the Nursing Intervention Classification (NIC), and the Nursing Outcome Classification (NOC). These coding selections are frequently influenced by reimbursement implications rather than detailed clinical implications, which may conflict with the underlying clinical construct itself. Therefore, with the advent of information technology, computer solutions can ease the challenge of recording standard codes for detailed clinical concepts. Table 1 shows examples of some existing MDS in different healthcare specialties. The common first step in developing all these datasets and many others was to gain consensus on the specific data elements and data
variables that best described the field by identifying what relevant information the data elements can produce that has meaning to the users and adds to their knowledge. Next, these data elements are organized in a classification system (taxonomies), defined with specific terminology, and coded to build an MDS.

The Delphi method is widely known as a method for forecasting the future and reaching consensus on undecided, uncertain, or unclear issues. Therefore the Delphi method is commonly used to reach consensus on the relevant essential data elements needed to build ontologies, taxonomies, and minimum datasets that allow disciplines to organize and manage knowledge electronically. Similarly, developing a Nursing Education Minimum Dataset (NEMDS) can serve as an infrastructure to organize the knowledge base of nursing education and research. Table 1 displays examples of some existing minimum datasets in different health care specialties.
Table 1
List of Datasets in Nursing and Health Care

Type of Dataset | Abbreviation
Systematized Nomenclature of Medicine, Clinical Terms | SNOMED-CT
Emergency Medicine Minimum Dataset | EMMD
Long-Term Health Care Minimum Dataset | LTHCMDS
Ambulatory Medical Care Minimum Dataset | AMCMDS
Financial Uniform Minimum Dataset | UB92
Health Care Facilities Minimum Dataset | HCFGMDS
Health Professional Minimum Dataset | HPMDS
Nursing Minimum Dataset | NMDS
Nursing Management Minimum Database | NMMD
Uniform Clinical Dataset for Home Care and Hospice | UDHCH
Uniform Clinical Dataset | UCDS
Uniform Hospital Discharge Dataset | UHDDS
Mental Health Dataset | MHDS
Older People Dataset |
Diabetes Dataset |
Chronic Heart Diseases Dataset |
Cancer Dataset |
Data Element for Emergency Department | DEED
Patient Care Dataset |
Preoperative Nursing Dataset |
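The abstract names kappa among the statistical techniques suggested for validating expert consensus in a Delphi process. As an illustrative sketch only (the category labels and panelist ratings below are invented, not drawn from the study), Cohen's kappa for two panelists classifying candidate data elements could be computed as:

```python
# Hypothetical sketch: Cohen's kappa measures chance-corrected agreement
# between two raters; categories and ratings here are invented examples.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the agreement expected by chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Two panelists assign six candidate elements to systems-model categories.
a = ["structure", "process", "outcome", "process", "structure", "outcome"]
b = ["structure", "process", "outcome", "structure", "structure", "outcome"]
print(round(cohens_kappa(a, b), 3))  # -> 0.75
```

A kappa near 1 would indicate that panelists classify elements consistently; values near 0 would suggest the taxonomy categories need clearer definitions before another Delphi round.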
Statement of the Problem

To date there is no minimum dataset in the field of nursing education (NLN, 2004). The National League for Nursing (NLN) was the first to emphasize the importance of developing a Nursing Education Minimum Dataset (NEMDS) by creating a task force group to begin the development of a NEMDS. With common definitions, a taxonomy of terms, and nomenclature related to nursing education, research findings on similar educational issues and topics can be pooled together so that patterns can be observed across studies and data can be aggregated across settings. Understanding these patterns will allow educators to make interpretations, compare findings, facilitate evidence-based changes in their educational programs, and draw significant conclusions regarding educational experiences (Goossen et al., 1998; Jeppson, 2002; Baernholdt, 2003; Junger, 2004; Moody, 2004). The NEMDS can also serve as a benchmark and a guideline for nursing education researchers about the terms and elements used in nursing education research processes and outcomes (Junger, 2004; Moody et al., 2004; NLN, 2004; PITAC, 2004). The NEMDS has great potential to enhance the quality of nursing education services and research synthesis.

Statement of the Purpose

The purpose of this study is to develop a deeper understanding of minimum datasets and to create a generic methodology for building a Nursing Education Minimum Dataset (NEMDS) by exploring previous experiences of developing various MDS in
different countries. This study will apply an adaptation of systems theory to serve as the conceptual framework for building the taxonomy and classification system of nursing education essential data elements.

Specific Study Aims

Study Aim 1: Identify the essential domains, commonly used data elements, and essential terms used by the nursing education community.

Study Aim 2: Adapt the systems model to serve as a conceptual framework and taxonomic schema for organizing the essential data elements in nursing education.

Study Aim 3: Describe the steps and the methodological process of developing a nursing education minimum dataset (NEMDS).

Significance of the Study

The proposed generic methodology of building a NEMDS is expected to address an important gap in the quality of nursing education practice and research synthesis. The NEMDS has the potential to enhance evidence-based teaching (EBT) in nursing education (Stevens, 1999). Currently the implementation of EBT in nursing education is very limited, due primarily to three factors. First, weaknesses exist in generating and using knowledge to guide nursing education practices through in-depth predictive research that focuses on teaching and learning experiences, environments, and educational programs; what we have at present is very few quantitative, descriptive studies confined to traditional formal courses and curricula, with limited numbers of diverse samples and sample sizes, and infrequent publication of these studies (Yonge, 2005). Second, the
difficulties that exist in funding educational research nationally and internationally make it impossible to advance the knowledge related to nursing education. Third, and most relevant to our study, is the absence of methodologies and databases to collect and share national and international data related to nursing education for advancing the knowledge-building process and promoting EBT in nursing education practices (Yonge, 2005). Therefore, establishing a NEMDS can enable nursing educators to systematically collect, organize, store, retrieve, and analyze specific data elements and educational variables to generate a large body of knowledge through rigorous research that can guide and advance nursing education practice (Junger, 2004; Moody, 2004; PITAC, 2004).

Based on studies and examples of building various minimum datasets in healthcare and nursing practice (Huber, 1997; Goossen, 1998; Saba, 1992; Baernholdt, 2003; Fahrenkrug, 2003), a common and nationally accepted NEMDS will have the following potential implications:

Assist nurse educators and researchers in analyzing selected variables across programs at any level of the system and in examining how selected input variables are related to the process and output variables in nursing education practices (Junger, 2004).

Serve as an assessment and planning tool to be used in administrative databases for research, funding, and policy-making applications of nursing programs by systematically monitoring nursing education quality and outcome indicators (Fahrenkrug, 2003).

Increase nursing educators' opportunities to communicate and collaborate with educators from other disciplines to identify nursing educators' needs, describe
nursing educators' roles, address the quality of nursing education services, and outline methodologies and guidelines for nursing education processes and outcomes (NLN, 2004; PITAC, 2004).

Enable comparison of common data elements and outcomes across academic programs at the state, regional, national, or international level by producing key information and benchmarking indicators for various types of evaluation parameters (Junger, 2004; Moody, 2004).

There have been no previously published efforts that have focused directly on describing the methodology for building a NEMDS specifically. Therefore the proposed generic methodology is an effort to identify the essential items and data elements used in nursing education and to explain the process of building a NEMDS.

Summary

This chapter included an introduction that explains the need for and importance of standardizing health care languages in general and of developing taxonomies and datasets for the nursing community specifically. It discusses the different NMDS that exist in the nursing profession and their value to nursing practice and research. It explains the importance of developing a NEMDS and the potential advantages to the nursing education community. This chapter also includes the statement of the problem, the statement of the purpose, the specific study aims, and the significance of the problem. The next chapter reviews the literature on the methods used previously in developing taxonomies and
data sets, presents a conceptual framework for developing a taxonomy to build a NEMDS, and defines terms used in the design and methodology.
Chapter Two: Review of Literature

This chapter reviews research related to the key variables of the study using the following search terms related to the study concepts: nursing minimum dataset, nursing database, classification systems, taxonomies, ontology, nursing data elements, coding scheme, unified nursing language system, focus group, Delphi studies, online surveys, and nursing education. The review is based on a computerized literature search of the following databases, 1980 to 2005: Cumulative Index in Nursing and Allied Health Literature (CINAHL), Medline, and PsycINFO. Additional studies were identified through citations in other published articles and manual searches. The literature search identified 290 articles related to the search terms used. These resources were reviewed, screened, and narrowed down to 75 that met the inclusion criteria. Articles were excluded from the literature review if they related only tangentially to the study purpose. Most of the articles did discuss the development and uses of minimum datasets, as well as the use of the Delphi method, in depth. The discussion of the literature includes only research-based articles, books, and research conference presentations. Both quantitative and qualitative research articles focusing on the variables were included in the review. The theoretical framework is discussed in depth. The chapter concludes with a section on definition of terms.
Minimum Dataset in Nursing and Health Care

Definition and Value

The NMDS was defined by Werley and others as "a minimum data set of items of information with uniform definitions and categories concerning the specific dimension of nursing which meets the information needs of multiple data users in the healthcare system. The NMDS includes those specific items of information, which are used on a regular basis by the majority of nurses across all types of settings" (Werley et al., 1991, p. 422; PITAC, 2004).

In an era of growing pressure from the nursing profession, policy makers, and society to justify and legitimize nursing contributions to health care and its cost (Werley et al., 1991; PITAC, 2004), there is a need for developing a unified nursing language system (Coenen & Schoneman, 1995; PITAC, 2004). Standardized nursing language, if collected on an ongoing basis, enables nurses to evaluate services and to compare data across populations, settings, geographical areas, and times (Delaney et al., 1992). Exchanging existing sources of information that are based on a common architecture with standardized data definitions will enable computer-aided decision support, automated error detection, and rapid analysis for research (PITAC, 2004). The nursing community began developing classification schemes, nomenclatures, and taxonomies through research (McCormick et al., 1994).

The initiative for an NMDS started in the United States of America (USA). Established by Werley and others in 1991, it has been accepted widely as a tool to describe nursing care systematically (Clark & Lang, 1992; Mortensen, 1996; Sermeus & Delesie, 1997; Goossen et al., 2000). The Nursing Minimum Dataset (NMDS)
16 represents the first attempt to standardize the collection and retrieval of essential nursing data (Werely et al, 1991). It is widely used in clinical nursing practice today. Coenen et al., (2001, p.9) stated that The NMDS and a nursing information system using standardized classification systems for nur sing diagnosis, interventions and outcomes provides an opportunity to de scribe nursing practice. In conclusion the purposes of the NMDS s are to establish comparability of nursing data across population, describe the di versity of population and the nursing care of patients and families across settings, define variability of nursing activities, determine the complexity of nursing work load, and es tablish general indicator s for the quality of nursing services from benchmark information. The NMDS can be also used for projecting nursing care tre nd analysis, resource allocati on, budget negotiation, funding determination, and policy making. It can stimul ate nursing research th rough links to the detailed data existing in nursing informati on systems and other healthcare information systems and dataset. (Saba, 1992a; Ande rson & Hannah, 1993; Coenen & Schoneman, 1995; McCormick, & zielstorff, 1995; Averi ll et al., 1998; Prophet, Deleney, 1998; Goosen et al., 2000; Coenen al., 2001). The focus today on evidence-based practice in nursing urges nursing educators to modify their curricula and teaching methods to incorporate research findings from nursing practice and research. To build the knowledge base for nursi ng education, one of the first important steps is to define the terms and data elements used in nursing education, and to develop an ontology and ta xonomy in nursing educat ion that allows our discipline to organize an d manage knowledge electron ically. (Junger, 2004; Moody, 2004; NLN, 2004)
The process of reaching consensus regarding the essential data elements needed to build a taxonomy, ontology, and MDS in any nursing field is considered to be very complex, dynamic, and ongoing, and a starting point for creating an infrastructure to organize information that builds the knowledge base for nursing.

Steps for Developing an MDS

The process of building any MDS is very complex, time consuming, and labor intensive, and requires several stages to accomplish (Goossen et al., 1998). For developing any MDS, the following five important steps have to be completed consecutively: 1) identification of the data item or element as a variable; 2) accurate definition of each variable; 3) determination of the universal values of each variable; 4) use of appropriate terminology in documenting the variable; and 5) aggregation and coding of data into databases for different purposes of health care management, research, and policy (Goossen et al., 1998). The NMDS, for example, first identified items related to hospital patient demographics, medical diagnoses, the nursing process, structure, interventions, outcomes of nursing care, and complexity of care (Werley et al., 1991; Delaney et al., 1992; Rantz, 1995; Keenan & Aquilino, 1998; Denehy & Poulton, 1999). McCormick et al. (1994) emphasized the need to develop the specifications of the resources and the procedures required to map language to identify concepts and specific data elements so that uniformity can be attained. Many nursing languages used today to describe nursing practice supported the development and identification of the NMDS. Blegen and Tripp-Reimer (1997) confirmed that the taxonomies of the North American Nursing Diagnosis Association (NANDA), the Nursing Interventions Classification (NIC), and the Nursing Outcomes Classification (NOC) were the building blocks of the nursing knowledge in the NMDS. The American Nurses Association's (ANA, 2001) committee on databases to support clinical nursing practice also recognized those classification systems (Goossen et al., 2000). But accessibility and utilization of computerized data continue to be a challenge, because standardization of definitions, codes, and classifications, and consensus on common terminologies of health information, are difficult to achieve. The complexity of integrating health information from diverse sources, and the lack of investment in information technologies by the health care industry, are the main causes of this lack of standardization (Renner & Swart, 1997).

Another example of an MDS in nursing is the Nursing Management Minimum Dataset (NMMDS), which began in 1989. Huber and Delaney, the co-principal investigators of the NMMDS project, based the conceptual framework for the data needs of nurse executives on Donabedian's classic ideas of structure, process, and outcomes as the components of quality (Donabedian, 1980). The NMMDS was also based on the Iowa model of nursing administration (Gardner et al., 1991) and on the NMDS of Werley and Lang (1988). Huber and Delaney found standardization of definitions difficult because of the lack of uniformity of the management elements in the nursing management literature. Therefore, research with nursing experts in management, informatics, and uniform data sets was conducted to identify and develop a research-based NMMDS (Huber, 1997). As a result, eighteen acute care-based NMMDS variables were developed, and research studies to establish validity and usefulness were conducted in two forms. Consensus surveys and consensus-building invitational workshops were used to test the acute care-based data set for its portability and linkage potential across settings. Surveys were sent
to selected non-acute care settings such as long-term care, occupational health, ambulatory care, home health, and community health. The two surveys focused on consensus and on determining the adaptability of the NMMDS to each setting. The eighteen variables clustered into the broad categories of environment, nurse resources, and financial resources. Definitional variations and measurement issues made reaching consensus across settings difficult and slow. Variables such as cost and satisfaction were the most difficult on which to reach consensus.

Warner describes one way of identifying the exact terms: simply extract the specific terms we want information about from existing controlled vocabularies, taxonomies, and languages; or, if we are building a taxonomy or MDS in a new field and no other resources are available, the only option is to start from scratch and identify them using the focus group method (Warner, 2005). The next step in building the MDS is finding common and accurate definitions of the terms and data elements (Goossen et al., 1998). Once we have the terms, we need to add control of synonyms for the terms considered equivalent, by defining the terminology to be specific to our meanings and needs. Finding common definitions for the terminologies used in the data elements for building any taxonomy or dataset has been a recurring obstacle in the literature ever since Kirt (1985) raised several concerns, related to unified common item definitions, in the process of developing a nursing diagnosis classification. Kirt identified as major problems the lack of clarity in the level of abstractness or concreteness of the terms, and the need for identifying common denominators to clarify the use of cues, signs and symptoms, and defining characteristics. She stated that to move toward universal validation, massive data about the use of nursing
diagnoses were needed, and that clear conceptual and operational definitions of nursing diagnoses, acceptable and useful in multiple practice settings, were required (Creason, Poue, Nelson & Hoyt, 1985; Jones, 1982; Kim et al., 1984).

After specifying the terms or data elements and defining them accurately, the next step is choosing the best, most consistently clear and unambiguous labels or names available for the content by which the user will navigate. Leahy quoted a Chinese adage: the beginning of wisdom is calling things by their right names (Leahy, 2004). There are several options for obtaining labels and determining the universal values of those terms; the option selected depends on what we want to do with the terms and what resources we have at our disposal to create the labels, agreed terminologies, and categories of the essential variables (Goossen et al., 1998; Warner, 2005). Several steps follow to develop a classification system and taxonomy for building an MDS. Ontology is the next step; this means categorizing and labeling the data elements into a specific scheme that supports classification (ICONS, 2002). This step requires grouping the identified and defined data elements into major categories or classifications by arranging the terms into one or more hierarchies proceeding from general to specific (Werley & Lang, 1988). Determining other associative or related relationships among terms or labels is essential for developing a navigation scheme within the developed taxonomy or other taxonomies (Warner, 2005). For instance, Karpiuk et al. (1997) studied the comparability of nursing diagnoses and nursing interventions across eight settings in South Dakota using the 16-category classification scheme developed by Werley and Lang (1988).

Coding the data elements accurately and consistently supports automated collection, storage, and retrieval of data. Coding of the data elements needs to be transparent to all data users (Robins, Braddock, & Fryer, 2002). There is no one specific way to code data elements: codes can be specified using Arabic or Roman numerals, alphabetic letters of the language in which the MDS is built, or any other codes that support automated documentation with the information technology at hand. Theoretical and empirical validation of the taxonomy and the MDS is essential before presenting the MDS to the specific community of data users (Reynolds, 1971; Turner, 1986; Ryan-Wenger, 1992; Goossen et al., 1998; Griens, Goossen, & Van der Kloot, 2001; Warner, 2005). Finally, disseminating the MDS to the specific data-user community and aggregating the data in national data repository systems is the ultimate step for the dataset to have a global meaning (Goossen, 1998). Adaptation of this process in nursing education can be useful in developing a blueprint to build a NEMDS.

NMDS in Different Countries

Although nursing data are usually absent from health care data collection systems around the world (Clark & Lang, 1992), many countries are developing NMDS systems. In Belgium, all general hospitals have been required by law since 1988 to collect data for an NMDS four times per year. In Australia, the objective of the Community Nursing Minimum Data Set Australia (CNMDSA) is to introduce standardization and comparability of community nursing data (Australian Council of CNS; Renwick, 1994). In Canada, the Alberta Association of Registered Nurses suggested the inclusion of nursing components into the Hospital Medical Records Institute (HMRI) databases, addressed as health information nursing components (Hannah et al., 1995). In Switzerland, an NMDS is under development (Van Gele, 1996; Weber, 1996). The national health services in England established an information management and technology strategy that incorporates nursing clinical care data. The United Kingdom (UK) is in the early stages of developing nursing care description items for the NMDS (Wheeller, 1991, 1992). In Scotland, the Core Community Minimum Data Set Scotland (CCMDS) includes nursing data in a multi-disciplinary data set for use in automated records. In the Netherlands, several professional organizations have shown increased interest in developing a Dutch NMDS for policy development, funding, budgeting, and staff allocation in the health care system (Goossen et al., 2000; Griens, Goossen, & Van der Kloot, 2001).

Benefits of NMDS

Goossen et al. (1998) identified the advantages the above countries are finding in developing and adapting the NMDS. These include: collection and computerized documentation of retrievable nursing data; excellent opportunities for comparing and contrasting nursing practices at different levels; the ability to share nursing data with other health professional databases; prediction of resource allocation needs (Saba & Zuckerman, 1992); service as a cost-effective data abstraction tool; retrospective validation of the defining characteristics of nursing elements; determination of the costs of direct nursing care; and a means of forecasting frequency and trends in nursing diagnoses, interventions, and outcomes. The computerized standardized data aid in problem detection and solution, decision making, and policy revision and reformulation (Delaney et al., 1994).
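The five consecutive development steps described earlier (identify the element, define it, determine its universal values, standardize its terminology, and code it) can be sketched as a minimal data structure. This is an illustrative sketch only; the field names, the example element, and the code format are hypothetical, not drawn from any published MDS specification.

```python
from dataclasses import dataclass

# Hypothetical sketch of a single MDS data element capturing the five
# development steps. All names and the example below are illustrative.

@dataclass
class DataElement:
    name: str          # step 1: item identified as a variable
    definition: str    # step 2: accurate definition of the variable
    values: list       # step 3: the universal values the variable may take
    term: str          # step 4: agreed standardized terminology
    code: str          # step 5: code used when aggregating into databases

element = DataElement(
    name="mode of nursing education delivery",
    definition="The primary instructional format of a nursing program.",
    values=["classroom", "online", "hybrid"],
    term="Delivery Mode",
    code="NE-07",  # hypothetical coding scheme
)

# A minimum dataset is then simply a collection of such uniformly
# defined elements, ready for aggregation and comparison across settings.
dataset = [element]
print(dataset[0].code)
```

Because every element carries its own definition, value set, and code, records built from such elements can be aggregated across settings without renegotiating meanings, which is the point of the consecutive steps.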
Taxonomies, Standardized Languages, and Classification Systems

The terms taxonomy, ontology, and classification are often used interchangeably. They are ways of organizing information about things into categories (I.I.P, 1993; Warner, 2005; Oppenheimer, 2001; I.C.O.N.S, 2002). They are controlled vocabularies: organized lists of words and phrases, or notation systems, that are used initially to tag content and then to find it through navigation or search (Warner, 2005). There are hundreds or thousands of controlled vocabularies in circulation in nursing. Unfortunately, a great deal of disagreement exists as to the individual definitions of each of the controlled vocabularies and their classification. This is the main reason why there are many challenges associated with developing accurate standardized nursing taxonomies.

Taxonomy (from the Greek word taxinomia: taxis = order and nomos = law) refers to the classification of things, or to the principles underlying the classification itself. Almost everything can be classified according to some taxonomic schema. Taxonomies can be hierarchical in structure, or schematic, which refers to arrangements based on the relationships between the different data elements. A taxonomy might be a simple organization of objects into groups, or a tree structure of classifications for a given set of objects, in which a single classification sits at the top of the structure and below it are the more specific classifications that apply to subsets of the total set of classified objects (Warner, 2005). A taxonomy can be artificial or natural, hierarchical by order, or schematic by relation.

Following the development of the NMDS, and beginning in the 1950s, there was a tremendous effort by many nursing researchers to standardize the nursing language in the three main categories and elements of the NMDS: nursing diagnoses, nursing interventions, and nursing outcomes (Prophet & Delaney, 1998; Delaney et al., 1992). Research teams have described 433 nursing interventions, each with a label, definition, and a list of defining activities (McCloskey & Bulechek, 1995; Bowles & Naylor, 1996), and have delineated 196 outcomes, each with an outcome label, a definition, outcome indicators, and a measurement scale (Johnson & Maas; Gudmundsdottir, Delaney, Thoroddsen, & Karlsson, 2004). At present there are only five major classifications or taxonomies of Standardized Nursing Languages (SNLs): the North American Nursing Diagnosis Association (NANDA) taxonomy, the Nursing Intervention Classification (NIC), the Nursing Outcome Classification (NOC), the Home Health Classifications (HHC), and the International Classification of Nursing Practice (ICNP).

Among the few recent initiatives for developing classifications in nursing (see Table 2) is the study Classifying Nursing-Sensitive Patient Outcomes, conducted at the University of Iowa College of Nursing (Maas, Johnson, & Moorhead, 1996). The Iowa study describes the resolution of conceptual and methodological problems that defined the inductive approaches followed to develop the NOC. Another example of such initiatives in nursing is the study by Osoba (2002) in Canada, which proposed a taxonomy of psychometrically based, health-related quality of life instruments related to three levels of health care decision-making: the macro, meso, and micro levels.

In recent decades, various disciplines in different countries, especially in medicine and psychology, have been conducting qualitative and quantitative research to develop different taxonomies and classification systems. For example, the departments of Internal
Medicine, Communication, and Psychiatry, together with the health services research center in primary care at the University of California, developed a classification system of patients' requests in office practice (Kravitz & Bell, 1999). Another study, by Robins, Braddock, & Fryer (2002), attempted to classify and categorize ethical issues of undergraduates in medical education. The Department of Social and Organizational Psychology in the Netherlands developed a taxonomy of situations to reflect self and social identity (Ellemers, Spears, & Doosje, 2002). Cloninger (2005) published a handbook on the classification of sanities, focused on developing a taxonomy of well-being. Stucki (2005) developed a classification for rehabilitation medicine by adapting the International Classification of Functioning, Disability and Health (ICF). Oppenheimer (2001), in the department of nutrition sciences at Brooklyn College, suggested a new classification of population. Psychiatrists around the globe are reconsidering how to classify the conditions they treat; for example, McHugh (2005) published a commentary on grouping mental disorders into clusters by adapting the International Classification of Diseases (ICD). Similarly, the department of Psychology at the University of Iowa developed a diagnostic taxonomy in psychiatry based on the structure of the DSM-IV (Clark, Watson, & Reynolds, 1995).

The strategies used to develop the taxonomies in these and many other studies included literature review, searches of other databases, concept analysis, surveys of experts, and the focus group method. The starting point for developing any of the above taxonomies was to determine the concepts that represent the field, develop standardized definitions for those concepts, and reach consensus on the terminologies used. The final stage was arranging these terminologies, according to specific relational and hierarchical schemes, into an ontology that leads to the birth of a taxonomy.

Burkhart and colleagues emphasized that taxonomy developers must identify the concepts relevant to nursing, categorize the concepts into discrete labels, and continually update the taxonomies to capture changes in nursing practice. The problem is that many standardized nursing taxonomies have emerged today, which has caused conceptual confusion over the terms and terminology being used. The ANA database committee continually calls for a unified nursing database that conceptually links all standardized nursing taxonomies based on conceptual congruence. According to the ANA, the nursing community never created one unified nursing database, but rather developed individual standardized nursing taxonomies that only partially support linking clinical nursing terms across databases. They further add that the method for developing and linking standardized taxonomies is through mapping (ANA, 2001; Burkhart et al., 2003). Nevertheless, to compare nursing data across diverse populations, geographic areas, and times, accurate reference terminology is essential for intervocabulary mapping. In other words, further efforts are needed not only to build unified definitions, vocabularies, and classifications of the various data elements at different levels of abstraction, but also to compare and link the data to existing information systems (Delaney, Reed, & Clark, 2000).
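As a minimal illustration of intervocabulary mapping through an accurate reference terminology, the sketch below links concept labels across two vocabularies via a shared reference concept. The concepts, labels, and mappings shown are hypothetical examples for illustration, not actual NANDA or ICNP content.

```python
# Hypothetical reference terminology: each reference concept records its
# label in several standardized vocabularies. Entries are illustrative only.
reference_terminology = {
    "impaired mobility": {
        "NANDA": "Impaired Physical Mobility",
        "ICNP": "Impaired Ability to Move",
    },
    "acute pain": {
        "NANDA": "Acute Pain",
        "ICNP": "Actual Pain",
    },
}

def map_term(concept, source, target):
    """Translate a concept's label from one vocabulary to another via the
    shared reference concept; return None when no mapping exists."""
    entry = reference_terminology.get(concept, {})
    if source not in entry or target not in entry:
        return None
    return entry[target]

print(map_term("impaired mobility", "NANDA", "ICNP"))
```

The design point is that the vocabularies never map directly to each other; each maps once to the reference concept, so adding an nth vocabulary requires n new links rather than pairwise links to every existing vocabulary.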
Table 2

List of Taxonomies and Classification Systems in Nursing

Name of the Nursing Taxonomy and Classification System        Abbreviation
North American Nursing Diagnosis Association                  NANDA
Nursing Intervention Classification                           NIC
Nursing Outcome Classification                                NOC
Home Health Classifications                                   HHC
International Classification of Nursing Practice              ICNP

Information technology drives predictable changes in health care systems, in which nursing education acts as an agent of change. It is the responsibility of nursing educators to shape nursing practice and prepare members of interdisciplinary health care teams who demonstrate flexibility, accountability, and leadership in dealing with ever-changing environments. At the moment, two nursing organizations are engaged in progressive work toward developing a NEMDS: the American Association of Colleges of Nursing (AACN) and the National League for Nursing (NLN). The AACN's goals are to promote educational reform, provide standards and resources, and foster innovation to advance professional nursing education, research, and practice. The AACN has identified eight hallmarks to inform students and new graduates, nurse educators, executives, and practicing nurses about the key characteristics of health care settings that promote professional nursing practice. These hallmarks focus on utilization of technological advances in clinical care and information systems to create a Nursing Education Minimum Dataset (AACN, 2003). Similarly, the National League for Nursing has five goals to promote quality nursing education and prepare the workforce to meet the needs of diverse populations in a constantly changing health care environment (NLN, 2002). These five goals are: 1) setting standards of quality nursing education; 2) faculty development; 3) promotion of evidence-based teaching; and 4) developing and 5) providing strategies to evaluate educational outcomes, student achievement, and nursing workforce competencies. The NLN meets these goals through four advisory councils that have identified several task groups directed toward accomplishing them. The NLN Nursing Education Minimum Data Set (NEMDS) Task Group is charged with identifying the data elements of nursing education, standardizing vocabularies, creating nomenclatures, and developing a common taxonomy related to nursing education (NLN, 2004). The NEMDS will lay the foundation for advancing knowledge development and research in nursing education.

Focus group discussion with expert nurse educators, administrators, and researchers is a common and preferred method for first identifying the key data elements in nursing education. This method has been widely used in identifying the data elements of other MDSs. It has also been used to obtain clusters, classifications, or groupings and categorizations of the data elements into main domains, and, finally, to define and reach consensus on the labeling of a common, specific terminology for the data elements. The focus group method usually precedes the Delphi process and helps to identify the variables needed to construct the instrument used in the Delphi survey. The goal of the focus group is to reach saturation on the issue or topic at hand and so produce an ontology of essential terms and concepts (deClercq, Blom, Hasman, & Korsten, 2000; Biolchini & Patel, 2004).
The Delphi method, on the other hand, is a common approach for reaching consensus among experts on a common taxonomy and ontology of the essential terms and data elements necessary to build the NEMDS.

Focus Group Method

The focus group is a way to understand better how people feel or think about an issue, product, or service; it is a special type of group in terms of purpose, size, composition, and procedures (Krueger & Casey, 2000). The purpose is to listen and gather information. Participants are selected because they have certain characteristics in common that relate to the topic of the focus group (Bellenger, Bernhardt, & Goldstucker, 1976; Belisle, 1998). The researcher creates a carefully planned series of discussions designed to obtain perceptions of a defined area of interest in a permissive, non-threatening environment, without pressuring participants to vote or reach consensus (Debus, 1990). The discussion is usually conducted several times with similar, homogeneous groups of participants in social interaction to identify trends and patterns and to reach saturation (Goldman & MacDonald, 1987). The whole idea of the focus group is to produce qualitative data regarding a specific issue from a focused discussion, to be used by researchers in making decisions (Krueger, 1998).

Focus groups have been helpful in assessing needs, generating information for constructing questionnaires, developing plans, testing new ideas, and developing outcomes (Greenbaum, 1998). The focus group method is often used in developing the questionnaires for building taxonomies and MDSs (Jackson et al., 2003; Kravitz, Bell, & Franz, 1999; Volrathongchai, Delaney, & Phuphaibul, 2003). Once the questionnaire has been developed with the main data elements, the Delphi survey method can be used to reach consensus on those data elements.

The Delphi Method

The Delphi method is an established method of conducting nursing research. It has been defined as a method for systematic collection and aggregation of informed judgment from a group of experts on specific questions or issues (Reid, 1988), conducted in a cost-effective and time-efficient manner (Skews et al., 2000; Keeney et al., 2001). It is a significant methodological tool for solving problems, planning, and forecasting (Pill, 1971; Polit & Hungler, 1995), and it is highly motivating, interesting, and educational for both the participants and the researcher (Stokes, 1997). Consensus is achieved by surveying information using a sequential questionnaire, iteration, and controlled feedback in a series of rounds (Goodman, 1986; Jones & Hunter, 1995). A summary of each previous round is usually communicated to, and evaluated by, the panel before the next round of questionnaires is sent, so that the views of the participants converge through informed decision making (Duffield, 1993).

It is an accepted and useful technique for achieving a consensus of views among expert panels regarding a given area of uncertainty or lack of empirical evidence, through the use of questionnaires, iteration, and the provision of feedback, with full, partial, or quasi-anonymity (Pill, 1971; Reid, 1988; McKenna, 1994; Hasson, Keeney, & McKenna, 2000). The name Delphi is inspired by the temple complex at the city of Delphi in Greece, where the Greek god Apollo Pythias was a master in prediction of the future (Everett, 1993). Accordingly, the Delphi method is associated with forecasting the
future. It has been widely used over the past sixty years in business, industry, and health care research, with a variety of methodological interpretations and modifications (Powell, 2003; Hanafin, 2004). It is believed that the first national Delphi study was conducted in 1944 to predict the outcome of a nuclear strike on the United States; it was initiated by RAND, the Research and Development Corporation of the American military (Dalkey & Helmer, 1963; McKenna, 1994; Gupta & Clarke, 1996). Today the method has gained wide popularity, having been used in more than 1,000 different studies, including more than 300 uses in nursing and health research over the past 15 years (Bowles, 1999). Predicting future change is one of the main motivations for using Delphi techniques. Delphi as a research methodology has been presented as a survey (Wang et al., 2003), a method (Linstone & Turoff, 1975; Crisp et al., 1997), a procedure (Rogers & Lopez, 2002), and a technique (Broomfield & Humphries, 2001; Snyder-Halpern, 2002; Sharkey & Sharples, 2001). For the purposes of this study, we refer to it as a survey.

Types of Delphis

A number of studies using the Delphi method have modified the technique itself. Hasson et al. (2000) and Hanafin (2004) listed and defined four types:

The Classical Delphi. This is the traditional Delphi, in which true anonymity, iteration, controlled feedback, and stability of responses on an issue are required. This type uses traditional mailed questionnaires to reach consensus.

The Policy Delphi. This type of Delphi is aimed at developing policies and promoting participation by obtaining as many divergent opinions as possible, to produce polarized group responses and structured conflict. It may provide only selective anonymity, as some of the responding groups might meet.

The Decision Delphi. This type of Delphi is used for decision-making purposes and social development. The decision makers involved in the problem participate in the Delphi to reach consensus based on structured thinking. It provides only quasi-anonymity, because participants are nominated for their positions and expertise and are mentioned by name; their responses, however, are anonymous to the other participants.

The Real-Time or Electronic Delphi. This type is a modification that makes use of computer technology, in which responses entered through a voting system are made known immediately to the assembled panel. The Internet presents endless possibilities for this type of approach (Berreta, 1996). Questionnaires are emailed to each participant, surveys are completed online, and data are downloaded directly into a database on completion of each round. The data are automatically transferred from the web-based system to an Excel spreadsheet, ready for analysis (Nathan et al., 2003). This type also provides only quasi-anonymity, since the researcher has full knowledge of the participants' identities.

Delphi surveys can be objective and quantitative in nature (Blackburn, 1999; Monti & Tingen, 1999), or subjective and qualitative in nature (Fitzsimmons & Fitzsimmons, 2001; Hanafin, 2004). The following explanations reflect the continuing debate over defining and defending the credibility of Delphi surveys. Robson (1993) argued that the researcher in Delphi studies remains an uninvolved observer, responsible only for expert inclusion, data collection, application of statistical measures, and
consensus identification; that is why this method is objective and quantitative, and the participants are positivists, agreeing on a single reality. On the other hand, Schwandt (2000) states that we are all constructivists by nature, because every mind is active in constructing knowledge and every participant is able to build an opinion. Most importantly, however, it is the arrangement the researcher makes of the necessary environmental inputs, as feedback, that builds up the true internal representation of the topic within the participants; it is not built only by their intrinsic capacities for reason, logic, or conceptual processing. Correspondingly, in Delphi surveys, a process of individual feedback about group opinion, with opportunities for respondents to change their decisions primarily on the basis of that specific feedback, provides a close example of the use of environmental inputs to build an internal representation of the issue under study. That is why a Delphi survey can also be subjective and qualitative in nature, with participants as constructivists (Gergen, 1995, p. 18; Hanafin, 2004). Individual attitudes and beliefs do not form in a vacuum; people need to listen to others' attitudes and understandings in order to recognize their own (Marshall & Rossman, 1995; Reed & Roskell, 1997).

This is the central aim of suggesting the use of a Delphi survey in research to produce a NEMDS. We seek to achieve consensus among the participants (constructivists) by providing opportunities to recognize and acknowledge the contribution of each participant. Because we assume that multiple realities exist, we need to explore and study them all in order to choose among them and reach a decision.

Delphi Survey Benefits

The Delphi survey has several advantages. It is an efficient and economical way of combining the collective human intelligence, knowledge, and capabilities of a group of experts (Lindeman, 1975; Jones, Sanderson, & Black, 1992; McKenna, 1994; Murphy et al., 1998). It may be used to develop both qualitative and quantitative data (Reid, 1988), provides controlled anonymous feedback, and tolerates large panelist diversity (Keeney, Hasson, & McKenna, 2001). In addition, it lacks interviewer and researcher bias (Hitch & Murgatroyd, 1983). The Delphi technique also reduces geographical limitations (Jones & Hunter, 1995), helps minimize the effects of group interactions, and facilitates free expression of opinions (Goodman, 1987; Murphy et al., 1998; Snyder-Halpern, 2002). Objectivity of both the process and the outcome of Delphi methods is maintained, because the biasing effects of factors such as personality traits, seniority, and experience are kept minimal by the anonymity among respondents (Jairath & Weinstein, 1994; McKenna, 1994). Other advantages of Delphi questionnaires are the capacity to capture a wide range of interrelated variables and multidimensional features (Gupta & Clarke, 1996), and the enhanced quality of respondents' contributions, since respondents can complete the questionnaire at their own pace and leisure. This last advantage reduces time pressure and allows in-depth reflection on and contemplation of responses (Linstone & Turoff, 2002; Snyder-Halpern, 2002).
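One common way to operationalize the consensus that Delphi rounds aim for is to require both that a minimum share of panelists rate an item highly and that the panel's ratings be tightly clustered. The sketch below assumes a 1-5 Likert rating scale and uses illustrative thresholds (75% agreement, interquartile range of at most 1); these are conventions sometimes seen in Delphi work, not fixed standards, and actual studies set their own criteria.

```python
import statistics

def consensus_reached(ratings, agree_at=4, min_agreement=0.75, max_iqr=1.0):
    """Judge Delphi consensus on one item from panel ratings (1-5 scale).

    Two illustrative criteria: the share of panelists rating at or above
    `agree_at`, and the interquartile range (IQR) of all the ratings.
    """
    agreement = sum(r >= agree_at for r in ratings) / len(ratings)
    quartiles = statistics.quantiles(ratings, n=4)  # [Q1, median, Q3]
    iqr = quartiles[2] - quartiles[0]
    return agreement >= min_agreement and iqr <= max_iqr

# Hypothetical round-two ratings from an eight-member expert panel.
round_two = [5, 4, 4, 5, 4, 3, 5, 4]
print(consensus_reached(round_two))
```

Items that fail both criteria are fed back to the panel, with a summary of the group's ratings, for the next round; items that pass can be retired from the questionnaire, which is how the iterative rounds converge.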
Online Delphi Survey Benefits

Comparing electronic surveys to traditional mailed surveys, researchers (Kiernan, Oyler, Kiernan, & Gilles, 2005) found that electronic surveys had an effective response rate of 95%, significantly better than that of traditional mailed surveys (79%). The completion rates for the qualitative questions measuring knowledge, attitudes, behaviors, and intentions were equal in both formats. They concluded that electronic surveys can achieve as effective a response rate as a traditional mail survey; be as effective in the completion of quantitative questions; elicit longer, more substantive qualitative answers than the traditional mail survey; and evoke the same evaluative views. The use of electronic surveys may result in higher data quality, less time, and lower costs (Dillman, 2000; Archer, 2003; Morerel-Samuel, 2003).

The responses from the Delphi subjects on the essential data elements can be analyzed using multidimensional scaling (MDS) to identify similarities and dissimilarities among the data elements. The MDS method can further refine the instrument by grouping the data elements statistically into more specific, meaningful dimensions.

Multidimensional Scaling

The rate of increase of human understanding depends on organizing concepts that allow us to systematize and compress large amounts of data. Systematic classification generally precedes understanding (Schiffman, Reynolds, & Young, 1981). Multidimensional scaling can help systematize data in areas where organizing concepts
and underlying dimensions are not well developed. Multidimensional scaling is a useful mathematical tool that represents the similarities of objects spatially, as in a map. Objects judged to be similar are represented as points close to each other in the resulting geometrical space, and objects judged to be dissimilar are represented as points distant from each other in the same space. Besides expressing all pairwise similarities and differences within a group of objects, MDS also represents the underlying relationships among the objects under study (Shepard, 1972). MDS gives a more meaningful representation and interpretable solutions for the data by obtaining measures of similarity among the objects under study. The computational strategy is to find a spatial arrangement of low dimensionality in which the rank order of the distances between items in the space corresponds with the rank order of the similarity measures in the data with minimal error (Schiffman, Reynolds, & Young, 1981).

Multidimensional scaling does not require prior knowledge of the attribute to be scaled; rather, it provides a space that reveals the dimensions relevant to the objects. The dimensions underlying a given set of stimuli are typically unknown in advance, and determining them is the major purpose of MDS (Lantermann & Feger, 1980). However, interpreting the dimensions is a skill that develops with experience and with knowledge of the properties of the objects (stimuli) being scaled. To capture the full complexity of the data, the points are allowed to assume positions within two-, three-, or even four-dimensional space. However, a lower-dimensional representation is more parsimonious in that: 1) it represents the same data by means of a smaller number of numerical parameters (the spatial coordinates of the points); 2) to the extent that fewer parameters are estimated from the same data, each is generally based upon a larger subset
of the data, which gives greater statistical reliability; and 3) most significantly, a picture or model of a low-dimensional space is much more accessible to human visualization. On the other hand, sometimes one or two dimensions are not enough to accommodate the full complexity of the relationships among the items in the data (Shepard, 1972; Young & Hamer, 1987; Young & Hariss, 1994).

History of Multidimensional Scaling

MDS has had two primary phases in its development. The first phase was started in 1938 by Young and Householder, who explained the matrix of distances in Euclidean space, followed by the Princeton approach of Torgerson in 1952, which achieved a workable method of classical MDS in psychology and inspired people associated with Gulliksen's psychometric group at Princeton University. Ten years later, the second phase of the development was completed when Shepard and Kruskal laid the conceptual basis for the nonmetric variety of MDS under the name "analysis of proximities." Since then it has been used in different disciplines: ergonomics (Coury, 1987), forestry (Smith & Iles, 1988), biometrics (Lawson & Ogg, 1989), ecology (Tong, 1989), and nursing (Young & Hamer, 1987; Houfek, 1992; Wilson & Retsas, 1997; Griens, Goossen, & Kloot, 2001).

How to Use MDS

To explain how MDS works, Schiffman, Reynolds, and Young (1981) used the map of the United States. Measuring with a ruler the distances among 10 diversely located American cities is a straightforward project, but MDS did the opposite: it took the set of distances (found in a table at the bottom of maps) and recreated the map.
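The map-recreation idea can be sketched in a few lines. The original demonstration used the ALSCAL iterative procedure; the sketch below instead uses classical (Torgerson) MDS, which recovers coordinates directly from a distance matrix via double centering and an eigendecomposition. The five cities and their approximate flying distances are illustrative, not the ten-city table from the original study.

```python
# Sketch of recreating a map from a table of distances, using classical
# (Torgerson) MDS rather than the ALSCAL program described in the text.
# The distance values below are approximate flying mileages, for illustration.
import numpy as np

def classical_mds(D, k=2):
    """Recover k-dimensional coordinates whose pairwise distances
    approximate the entries of the distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    top = np.argsort(w)[::-1][:k]            # keep the k largest
    return V[:, top] * np.sqrt(np.maximum(w[top], 0))

cities = ["Atlanta", "Chicago", "Denver", "Houston", "Seattle"]
D = np.array([[   0,  587, 1212,  701, 2182],
              [ 587,    0,  920,  940, 1737],
              [1212,  920,    0,  879, 1021],
              [ 701,  940,  879,    0, 1891],
              [2182, 1737, 1021, 1891,    0]], dtype=float)

X = classical_mds(D)                          # a 5 x 2 "map" of the cities
D_hat = np.sqrt(((X[:, None] - X[None, :]) ** 2).sum(-1))
i, j = np.triu_indices(len(cities), 1)
r = np.corrcoef(D[i, j], D_hat[i, j])[0, 1]   # agreement with the input
print(f"distance agreement r = {r:.3f}")
```

The recovered configuration is determined only up to rotation and reflection, so the "map" may come out flipped; what matters is that the inter-point distances reproduce the input distances, which for nearly Euclidean data gives an agreement close to 1.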
The distances were represented by points, or spatial coordinates, in the spatial model in such a way that the significant features of the data about these distances were revealed in the geometrical relations among the points. The resulting spatial representation attempted to capture fundamental properties of the distances solely by setting them into correspondence with positions within a spatial continuum. A computer program, the Alternating Least Squares Scaling (ALSCAL) procedure, was used to fit the data in such a way that the distances between cities in the derived space were in the same ratio as the flying distances used as data. Through eight iterations, the rank order of distances between pairs of cities in the space was compared with the rank order of the flying distances. In each iteration, a large measure of error is removed as the positions improve, indicating a reduction in the differences between the rank order of distances in the space and the rank order of the flying distances between cities. A drop in stress across the eight iterations from .80 to a stress level of .45 was accepted as the point to stop further iteration. The stress measure is the square root of the normalized residual sum of squares, expressed as Kruskal's stress, whose value should preferably be lower than 0.10 (Kruskal & Wish, 1978). Through numerous trials (iterations), the MDS procedure recovers the meaningful directions hidden in the matrix of empirical data to determine the underlying geometrical structure or model (the U.S.A. map) from a collection of distances among objects in a space (cities) (Shepard, 1972). Therefore, multidimensional scaling has recently become the preferred statistical method used by minimum dataset builders for constructing the instrument when developing taxonomies. One example is the study by Griens, Goossen, and Kloot (2001), who explored the minimum dataset for the Netherlands using
multidimensional scaling techniques. The technique helped in assigning scale values to the nursing data elements under investigation in such a way that similarities and dissimilarities between them could be explained, and it aided in making decisions regarding the number of data elements and the categories to be included in developing the NMDS.

Conceptual Framework

The conceptual framework used to guide the methodology in this study is adapted from systems theory. The ideas of classic systems theory, with structure, process, and outcomes, have been widely used in developing many nursing minimum datasets, such as the Nursing Management Minimum Dataset (NMMDS) and the clinical Nursing Minimum Dataset (NMDS), and other datasets such as the North American Nursing Diagnosis Association (NANDA) taxonomy, the Nursing Intervention Classification (NIC), and the Nursing Outcome Classification (NOC). The systems model is flexible, dynamic, user friendly, and well known in many disciplines. Adapting systems theory for the development of the NEMDS was the most pragmatic choice, not only because it allows flexibility and facilitates a codification scheme for nursing education language that can be electronically read, interpreted, and monitored, but also because the systems model makes it possible for coding schemes to be commensurate with other nursing vocabularies. An extensive literature search was conducted in nursing and education, and several models from the sociology, education, and psychology literatures were reviewed to identify the main nursing education variables that are commonly used. One example of a model from another field is Tinto's Student Integration Model (1975) from sociology. This model
focused on the higher education community. Another example is Bean and Eaton's psychological model, which focused on the organizational processes of higher education and incorporated background, organizational, environmental, attitudinal, and outcome variables. The systems model approach was adapted to build a taxonomic schema for engineering the essential terms and data elements in nursing education. The methodology in this study adapted many attributes and variables (data elements) from the above two models from sociology into the systems model.

Each of the three domains (input, process, and output) included nursing education terms and data elements related to four major categories: Students, Organization, Faculty, and Curriculum. Each category incorporated several essential educational items (data elements). System input items for the student category included data elements such as demographic data, academic profile, admission tests, recruitment plan, and retention in program. The organization category included elements such as type of institution, philosophy and mission, type of governance, type of funding, human and non-human resources, and training programs. The faculty category included data elements such as demographic data, faculty profile, and type of faculty. The curriculum category included data elements such as the level of nursing programs (BSN, MSN, PhD, or others), the type of curriculum, and distance learning and web-based courses. System process items in the student category comprised data elements such as ongoing student evaluation, level of involvement with extramural and extracurricular activities, learning skills, and level of adaptation to environment and diversity. The organization category comprised data elements such as the number and type of outreach events and community interactions, and the level of congruency between the departments and the parent college or university in (the goals,
objectives, strategic plans, marketing, budgeting, evaluation criteria, etc.). The faculty category comprised data elements such as teaching loads, committees and meetings, teaching methods, and ongoing faculty evaluations. The curriculum category comprised data elements such as total credit hours, ongoing program assessment, faculty/student classroom and clinical ratios, and the clinical/theory credit hours ratio. System outcome items for the student category included student outcomes such as graduation rate, attrition rate, certification examination pass rate, student satisfaction, employment rate after graduation, honors/awards, progress to graduate studies, refereed publications, and competencies. For the organization category, the data elements included organizational outcomes such as accreditation status, ranking status, and funding status. Faculty category data elements included faculty outcomes such as publications/textbooks, promotions on the job, research funding, honors or awards, and scholarships. The curriculum category data elements included curriculum outcomes such as program evaluation, course evaluation, and program and course accreditation status (see Figure 1).

The systems model is defined as "a whole which functions as a whole by virtue of the interdependence of its parts" (Rapoport, 1968). The system model has also been defined as a set of objects together with the relationships between the objects (the parts of the system) and their attributes (the properties of the objects) (Hall & Fagen, 1950). It is important to understand that all the data elements listed, classified, categorized, and coded under each of the systems model domains (structure, process, and outcome) must be expressed in a measurable, quantifiable outcome form to support data collection and analysis; otherwise there is no use in gathering all these data, because no meaningful conclusion can be reached
regarding them. It is suggested that when developing a NEMDS, a panel of expert educators rate these terms and data elements using the Delphi method, based on the following questions: i) Does the item add important information about the school, faculty, students, and curriculum? ii) Is the item measurable and quantifiable? iii) Is the item essential for the NEMDS? iv) Is it feasible to measure the item?

The above data elements are the very basic variables commonly used by any nursing education community around the world; further adaptation and introduction of various specific data elements can be done based on the needs of each school, state, or country and their specific educational systems. This framework is just an example of how to adapt the systems model for constructing the taxonomy of nursing education. Data elements under any of the above-mentioned categories can be repeated in any of the three domains of the systems model, based on the type of information needed regarding that specific data element.

The nursing education data elements organized in this systems model need to be consistently coded for automated documentation. Once the nursing education data elements are classified, categorized, and coded, the result can be called a nursing education ontology, which forms the taxonomy for building the NEMDS. Figure 1 is a graphic arrangement of the basic and general educational data elements that precede the ontology formation. Specific nursing education terminologies and definitions presented by the Interagency Collaborative on Nursing Statistics (ICONS) are available in the appendix of this paper to help in further sub-classification of the specific nursing education data elements by those who are interested in building the NEMDS or taxonomies.
INPUTS
1. STUDENT: Demographic data; Academic profile; Admission tests; Recruitment plan; Retention in program
2. ORGANIZATION: Type of institution; Philosophy and mission; Type of governance; Type of funding; Type of program; Human/non-human resources; Training programs
3. FACULTY: Demographic data; Faculty profile; Type of faculty
4. CURRICULUM: Level of nursing programs; Program focus; Distance learning/web-based courses

PROCESS
1. STUDENT: Ongoing student evaluation; Level of involvement with extramural/extracurricular activities; Level of learning skills; Level of adaptation to environment and diversity
2. ORGANIZATION: Number and type of outreach events/community interactions; Level of congruency
3. FACULTY: Teaching loads; Committees and meetings; Teaching methods; Ongoing faculty evaluation
4. CURRICULUM: Total credit hours; Ongoing program assessment; Faculty/student classroom and clinical ratios; Clinical/theory credit hours ratio

OUTCOMES
1. STUDENT: Demographic data; Academic profile; Graduation rate; Attrition rate; Certification pass rate; Student satisfaction; Employment rate; Honors/awards; Progress to graduate studies; Refereed publications; Competencies
2. ORGANIZATION: Accreditation status; Ranking status; Funding status; Awards and recognitions
3. FACULTY: Publications/textbooks; Promotions on job; Research funding; Honors or awards/scholarships
4. CURRICULUM: Program evaluation; Course evaluation; Program/course accreditation status

Figure 1. Example of Adaptation of the Systems Model to Develop a Taxonomy for Building the NEMDS
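As an illustration only, a small slice of the Figure 1 taxonomy could be held as a coded nested structure for automated documentation. The code values (e.g. "I-STU-01") and the lookup helper are hypothetical, not part of the proposed NEMDS; they simply show how the same data element can recur in two domains under two distinct codes, as discussed above.

```python
# Illustrative sketch: a coded slice of the Figure 1 taxonomy.
# Codes such as "I-STU-01" are invented for this example.
nemds_taxonomy = {
    "INPUT": {
        "STUDENT": {"I-STU-01": "Demographic data",
                    "I-STU-02": "Academic profile",
                    "I-STU-03": "Admission tests"},
        "FACULTY": {"I-FAC-01": "Demographic data",
                    "I-FAC-02": "Faculty profile"},
    },
    "OUTCOME": {
        "STUDENT": {"O-STU-01": "Demographic data",  # same label as I-STU-01,
                    "O-STU-02": "Graduation rate"},  # but a distinct code
    },
}

def lookup(code):
    """Resolve a code to its (domain, category, label) triple."""
    for domain, categories in nemds_taxonomy.items():
        for category, elements in categories.items():
            if code in elements:
                return domain, category, elements[code]
    raise KeyError(code)

print(lookup("O-STU-01"))  # ('OUTCOME', 'STUDENT', 'Demographic data')
```

Because "Demographic data" appears under both INPUT and OUTCOME, each occurrence carries its own code, so collected data can always be traced to the domain in which it was recorded.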
Definition of Terms

For the purpose of this study, the key terms are defined as follows:

Taxonomy: a process of classification and organization of a particular set of information, items, or events, for a particular purpose, into categories within a complex hierarchy (I.C.O.N.S., 2002).

Ontology: a study of the categories of things within a domain, providing a logical framework for knowledge representation. Work on ontologies involves schemas and diagrams for showing relationships between different things (I.C.O.N.S., 2002).

Dataset: a sequenced list of individual data items (entity, attribute, or class), each with a clear label and set of permissible values (code set and/or sub-classification), forming a specification that helps to describe pre-defined information (NHS, 2005).

Classification Scheme: a system or process designed to support the reliable categorization of complex textual data values into a mutually exclusive predefined structure. Classification can be differentiated from frames by the presence of rules and coding standards (NHS, 2005).

Coding System: a means of codifying simple textual expressions or values to support information retrieval. A coding frame does not require coding rules or a complex categorization structure. A set of agreed-upon symbols, frequently numeric or alphanumeric, attached to concept representations or terms with regard to their form or meaning (NHS, 2005).

Minimum Dataset: a minimum set of items of information with uniform definitions and categories concerning the specific dimensions of nursing which meets the
information needs of multiple data users in the healthcare system (Werley et al., 1991, p. 422; PITAC, 2004).

Nomenclature: a system of designations (terms) elaborated according to pre-established rules (I.C.O.N.S., 2002).

Database: a collection of interrelated data, often with controlled redundancy, organized according to a scheme to serve one or more applications; the data are stored so that they can be used by several programs without concern for data structures or organization (I.C.O.N.S., 2002).

Controlled Vocabulary: a terminological dictionary containing, and restricted to, the terminology of a specific subject field or of related subject fields, based on terminological work (I.C.O.N.S., 2002).

Unified Nursing Language System: a system resulting from mapping terms among multiple nursing vocabularies and classification schemes (I.C.O.N.S., 2002).

Systems Model: a conceptual organizational tool for organizing a taxonomy of terms used in nursing education, based on the three major components of the systems model in engineering (input, process, output) (NLN, 2003).

Inputs: an event external to a system which modifies the system in any manner; a variable at the boundary through which information enters; the set of conditions, properties, or states which effect a change in a system's behavior; the medium of exogenous control (Krippendorff, 2004).

Process: a naturally occurring or designed sequence of operations or events, possibly taking up time, space, expertise, or other resources, which produces some outcomes (defined or undefined). A process may be identified by the changes it creates in
the properties of one or more objects under its influence. Processes may be categorized as singular, recurrent, or periodic. A singular process occurs only once; few processes in nature can be considered singular. Most processes found in nature are recurrent, repeating more than once. Recurring processes which repeat at a constant rate are considered periodic (Krippendorff, 2004).

Outputs or Outcomes: any change produced in the surroundings by a system; a variable at the boundary through which information exits; the products, results, or observable parts (subsystems) of a system's behavior; the medium through which a system may exogenously control others. An output could conceivably include all of a system's behaviors, but it becomes an informative concept only if some of its variables remain inaccessible to an observer or have no effect (Krippendorff, 2004).

Summary

This chapter reviewed the previously published research that relates to the primary variables of the proposed study. It discussed the history and the process of developing taxonomies, standardized languages, and minimum datasets in nursing and healthcare around the world. It also explained the value of minimum datasets in nursing practice, education, and research. This chapter included an explanation for adopting the Delphi method in this research and described the different types of Delphi and the advantages of using the real-time electronic Delphi. The definition and purpose of multidimensional scaling, a history of its development, and an explanation of how to use multidimensional scaling were also discussed briefly. A description of the
conceptual framework, the systems approach, that was adapted for the study was included. Definitions of terms such as taxonomy, ontology, minimum dataset, systems model, input, process, and output were provided. Chapter Three will present the study design and the methodology for building a NEMDS.
Chapter Three: Design and Methodology

In this chapter the study design and methodology are reviewed; the review includes a thorough description of the sample, inclusion criteria, data collection instrument, data collection procedure, and data analysis plan. The research design proposed is a prospective descriptive study that identifies a generic methodology that can be used to develop a taxonomic schema for engineering the essential terms and data elements commonly used in nursing education to build a NEMDS.

Criteria for Sample Selection of Experts

The sample can be selected via a purposive (judgmental) sampling technique from any nursing membership database, but preferably from nursing education membership databases. The population of nursing educators needs to be screened by expert criteria to validate that the minimum dataset is built on the responses and conclusions of respondents nominated for their expertise in nursing education. Participants can be identified as nursing education experts if they meet the following inclusion criteria: 1) 10 or more years of full-time academic teaching experience in a Bachelor of Science in Nursing (BSN), Master of Science in Nursing (MSN), PhD, or other doctoral program in nursing; 2) recognition as a leader in nursing education, as evidenced by five or more nursing education research publications in refereed nursing journals. An additional inclusion criterion can be
that the subjects must have an expressed commitment to participate in and complete all stages of the study, including the focus group discussion and the Delphi rounds. Nursing educator-experts must currently be teaching in a university-based undergraduate and/or graduate nursing program, because the BSN is the current entry level required for nursing service and education in many countries around the globe. The study to create a NEMDS can also include nursing educator-experts who are researchers or who currently hold administrative positions as deans and program directors, because of the knowledge they possess regarding nursing education.

Exclusion criteria include educators teaching in nursing programs located outside the country's boundaries, as well as nursing educators who cannot speak fluently the language in which the NEMDS is going to be built (due to the linguistic barriers and difficulties involved in reaching consensus on exact terms, meanings, definitions, and terminologies).

Population and Sample Size

Because the methodology for developing a NEMDS uses the Delphi method, the assumption is that the bigger the sample size, the better the statistical power regarding consensus on the taxonomy of essential data elements of nursing education. Several points need to be explained. The precision of the sample size depends on the following:

1) The population size of nurse educators within the databases.

2) The diversity and variation in the population characteristics. A larger sample size is needed for high variation, and if the level of variation is not known in advance, one can take a
conservative approach, assuming maximal population diversity with a dichotomous (50/50) split.

3) The subgroups within the sample for which estimates are needed, for example BSN, MSN, and PhD nursing programs.

4) The sampling error: a tolerated sampling error of plus or minus 3 percent at the 95% confidence level for the whole sample (after taking out the ineligibles and nonrespondents).

A nationally representative sample can be selected through a non-probability purposive sampling technique (Dillman, 2000). According to this survey methodologist, the sample size for a Delphi study varies depending on the desired margin of error. The sample size in published Delphi studies has varied widely, from fewer than 20 (Duffield, 1993) to more than 200 (Broomfield, 2001). To estimate the starting sample size, first estimate the number of questionnaires needed in the final sample, and then work backward, assuming 90% of email addresses will be usable, 80% of the remaining subjects will respond, and 10% of the returned questionnaires will be illegible or incomplete:

starting sample size = final sample size / (0.9 x 0.8 x 0.9) (Salant & Dillman, 1994)

For example, to end up with about 100 usable questionnaires, one would start with roughly 100 / 0.648, or about 155 subjects. It is important to have an updated, complete, and accurate nursing educator membership database with regard to addresses and emails.

Methods and Steps for Building the NEMDS

By adapting the definitions of minimum datasets by Werley and others, the Nursing Education Minimum Dataset (NEMDS) can be defined as follows: a minimum
dataset of educational items of information, with uniform definitions and categories concerning nursing education, which meets the information needs of multiple educational data users in the nursing education system. The NEMDS includes those specific items of information that are used on a regular basis by the majority of nurse educators and the nursing education community across all types of nursing programs and schools. Thus the NEMDS would add specific information to the existing nursing education data and statistics. The initial steps in building the NEMDS are to:

1) Identify educational concepts and data elements. Keeping the above definition in mind, taxonomy and minimum dataset development is a theoretical operation in which groups, classes, or sets of terms or data elements are systematically organized and linked according to some criterion. Therefore the first step is to identify essential terms and relevant nursing education data elements and to conduct a concept synthesis. This first step may proceed inductively or deductively to formulate categories, classifications, taxonomies, and a coding system (Wagner, 1999). The use of focus group proceedings and an extensive literature review to design the online Delphi survey questionnaire is the first suggested step in constructing the NEMDS. The focus group can be used either before constructing the survey instrument, to identify what data elements should be included in the questionnaire, or after consensus is reached by the Delphi survey, for further classifications, labeling, coding, and ontology formations.
As discussed previously, the literature review revealed four major nursing education categories: Student, Organization, Faculty, and Curriculum. Arranging these categories in the three key domains of the systems model (input, process, and outcome) can serve as the blueprint for identifying and classifying the essential data elements for building the NEMDS. Presenting these categories to approximately six focus groups of expert nurse educator participants from the three major levels of nursing programs (BSN, MSN, and the different doctoral programs), with two discussions per group, can lead to saturation of the information obtained. It is preferable to have 6-8 members in each discussion group (Kruger, 1999). The same inclusion criteria can be used to select focus group subjects as are used to select the Delphi subjects, to maintain a consistent pool of expert nurse educators. Structured focus group social interaction and discussion is an initial attempt to identify inductively, and to include, as many terms and educational data elements as possible, in order to avoid eliminating any terms prematurely. Another purpose served by the focus group is clustering, or identifying data elements that seem closely related, within each category and each domain (Walker & Avant, 1995). The audio- or video-taped focus group proceedings can be analyzed by four readers. Key educational words, phrases, metaphors, and topics can be identified through multiple readings. Furthermore, patterns of connections between different data elements can be identified through discussion among all readers during serial meetings at which the coding of each transcript is compared among readers.

2) Define the data elements as nursing education terminologies. The rigorous qualitative method of focus groups helps in developing common vocabularies, nomenclatures, and
classifications of the educational data elements. Carefully and accurately defining the nursing terminology, labeling or naming the elements, and grouping and clustering the identified elements by consensus into major distinctive themes or categories helps to build the classifications and sub-classifications of data elements that create the taxonomy for building the NEMDS. The conceptual framework presented in Figure 1 can be used as a guideline to formulate the questions for the focus group discussions.

3) Coding. Once there is consensus on the definition and a specific unified terminology, label, or name for each data element and term, the next step is coding the data elements for automated documentation, which supports the collection, storage, and retrieval of each data element consistently throughout the different domains and major categories. This is essential, although there is no one specific method recommended to accomplish it; each organization, field, or discipline can choose different methods or codes accessible to its information technology. However, consistency in coding all variables is crucial, because the same data element can be represented in two different categories, which requires two different codes for the same data element. One example is the data element of student demographics, which is labeled the same but repeated in two domains, input and outcome. Each occurrence gives a different meaning to the information when the data are collected; therefore their codes have to be consistent yet different. The coding step may be accomplished through the use of the same focus groups as in steps 1 and 2, or could be achieved using newly formed focus groups.

4) Building a taxonomy. Based on the results of the focus groups, the taxonomy can be constructed. First, the ontology can be designed, meaning a graphical representation of the domains, categories, classifications, and sub-classifications of the essential terms and
data elements, as shown in Figure 1. Next, an instrument that lists all of the obtained essential nursing education data elements and terms can be designed with a Likert scale for a three-round Delphi survey to gain consensus on the elements (Henry et al., 1987; Polit & Beck, 2004). Consensus can also be gained on the constructed taxonomy, which can serve as the conceptual framework for building the NEMDS. Consensus on the taxonomy is best reflected in the NEMDS when the criterion of 70% agreement is reached on the majority of data elements in the questionnaire (Deshpande & Shiffman, 2005).

5) Empirical and theoretical validation. Once consensus is reached regarding the nursing education taxonomy, with its schematically and hierarchically arranged essential nursing education data elements, evaluating this generic taxonomy empirically and theoretically is an important step in validating the NEMDS. To evaluate the taxonomy empirically, the developers can pilot test the taxonomy and the NEMDS by verifying that similar essential nursing education data elements and terms appear in more than one study despite different samples and data collection methods. Empirical validation can occur even if the same educational terms and data elements have been conceptualized differently. For example, the faculty tenure item in the educational system of the United States may not exist in the nursing education systems of other countries, such as Saudi Arabia, but there may be a similar concept under different terminology. In addition, the final taxonomy may have different codes, names, or sub-classifications, but it should have all the data elements that represent the field of nursing education comprehensively (Ryan-Wenger, 1992; Goossen et al., 1998; Griens, Goossen, & Kloot, 2001). Both global reliability (the extent to which educators can consistently use the entire taxonomy
across all categories) and the category-by-category reliability need to be identified by the panel of expert raters when developing this taxonomy. The taxonomy can also be validated theoretically. An important issue in evaluating or validating any taxonomy is its theoretical structure and the consistency of the concepts used in building it, so that the purpose of the MDS, to generate comparable data (Reynolds, 1971; Turner, 1986; Ryan-Wenger, 1992; Walker & Avant, 1995; Goossen, 1998), is guaranteed.

6) Disseminate and aggregate the data: The real validation of an NEMDS will not be clear until it is disseminated to the nursing education community, including students, educators, administrators, researchers, and other nursing education data users, and the NEMDS is actually used in their operational activities and practices within their organizations. Only after actual use will true data be available for validation and further refinement of the NEMDS. Aggregation of the NEMDS in national and international databases is essential for the data collected to have a global meaning for guiding the advancement of nursing education practice and research (Goossen et al., 1998). Figure 2 presents a graphical representation of the essential steps in building an NEMDS.
Figure 2. Steps for Building a Nursing Education Minimum Dataset (created by adapting the steps to build an MDS from Goossen et al., 1998):
1) Identifying educational concepts and data elements
2) Defining the data elements as nursing education terminologies
3) Coding the nursing education data elements for automated documentation
4) Building the nursing education taxonomy (the conceptual framework to build the NEMDS)
5) Empirical and theoretical validation of the nursing education taxonomy and NEMDS
6) Dissemination of the NEMDS to the nursing education community and aggregation of the NEMDS into national databases
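The coding principle in step 3 above, that the same data element may appear under two domains and therefore needs two distinct but consistently applied codes, can be sketched as follows. This is a purely hypothetical illustration: the code values, element names, and code format are assumptions, not part of the proposed dataset.

```python
# Hypothetical coding scheme: each (domain, element) pair receives its
# own code, so "student demographics" under the input domain is coded
# differently from the identically labeled element under the outcome
# domain, while each code stays consistent within its domain.
ELEMENT_CODES = {
    ("input",   "student_demographics"): "IN-STU-001",
    ("outcome", "student_demographics"): "OUT-STU-001",
    ("input",   "faculty_rank"):         "IN-FAC-002",
}

def code_for(domain: str, element: str) -> str:
    """Return the code assigned to a data element within a domain."""
    return ELEMENT_CODES[(domain, element)]
```

Because the lookup key is the (domain, element) pair rather than the element label alone, codes remain "consistent yet different" across domains, as the methodology requires.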
Data Collection Instrument

With the advantages of computer technology in the 21st century, it is more convenient, economical, and faster to use electronic real-time Delphi methods to reach consensus in a speedy yet accurate fashion. Online surveys reduce attrition rates, increase response rates, and reduce costs, which makes the inclusion of a large sample size a possibility. Survey Monkey ( http://www.surveymk.com/home.asp ) is a web-based tool that enables researchers to create professional online surveys and survey questionnaires quickly and easily, with an unlimited number of questions spanning an unlimited number of pages. The main features of this software are that it helps in:

1) Designing the survey using just a web browser and an intuitive survey editor. Researchers are able to select from over a dozen types of questions (single choice, multiple choice, rating scales, drop-down menus, and more). These options allow researchers to require answers to any question, control the flow with custom skip logic, and even randomize answer choices to eliminate bias. In addition, researchers have complete control over the colors and layout of the survey.

2) Collecting responses automatically by simply cutting and pasting a link to the survey. Researchers are also able to use a popup invitation generator to maximize response rates, and an automated email notification and list management tool to track respondents.

3) Analyzing results as they are collected in real time. Researchers are able to watch live graphs and charts, then drill down to individual responses, and securely share survey results with others. Powerful filtering allows
displaying only the responses researchers are interested in. They are even able to download the raw data into Excel or SPSS. Survey Monkey can create hundreds of questions and can reach an endless number of participants, and it has many advantages over similar software. It can create skip logic (conditional logic), customize the path a respondent takes through the survey, eliminate unnecessary confusion by skipping non-applicable questions, and reduce "order bias" by randomizing answer choices. It can reduce "dropouts" and overall frustration; it helps improve the quality of the data by requiring an answer for every question; it can give the survey a professional feel by placing a logo of up to 50K in size at the top of every page; and it can apply custom themes to every element of the survey, including fonts, sizes, and colors. It can generate custom popup invitations for each website: by simply cutting and pasting the code into any webpage, the site will start generating invitations to increase response rates, and to minimize annoyance to visitors, invitations pop up only once. It also supports a custom redirect: once the survey is completed, respondents are redirected to the page of choice. Finally, it helps to filter results, a powerful feature for finding patterns in the results, answering questions such as "Show me only those respondents who answered choice x in question y." It is possible to filter any question in the survey (even open-ended ones), and the entire results section reflects the researchers' filter choices. Results can be shared so that others can view them without being given access to the researchers' account, and the researcher controls which results are visible and how they may be used. Results can also be downloaded automatically, in numerical form as well as text form, to a local computer for further analysis, and summary
results can be taken into Excel to create graphs. Detailed results can be saved to the hard drive for safekeeping, so researchers remain in complete control. These features make this software well suited to the methodology proposed to develop an NEMDS.

A survey questionnaire containing a list of several terms and data elements commonly used in nursing education (as in the proposed systems model) can be constructed using information obtained from the literature search and focus group discussion results. The items can be organized in three domains based on the conceptual framework (input, process, and output) under four major categories: student, organization, faculty, and curriculum. The list of items on the questionnaire can be designed on a 7-point Likert-type scale. Participants need to be asked to read each item and rate their level of agreement about whether to include that item in the dataset, with a rating of one being strongly disagree and seven being strongly agree, based on the following criteria: i) does the item add important information about the school, faculty, student, and curriculum; ii) is the item measurable; iii) is the item essential for the NEMDS; and iv) is it feasible to measure the item. Participants can also be encouraged to express their opinions and points of view and to write comments about each item in the space provided on the questionnaire. Questions regarding demographic information such as age, sex, race, ethnicity, number of years teaching in nursing programs, number of publications, nursing specialty, and others have to be constructed on the first round of the Delphi questionnaire along with the screening questions. The logic can be built into the questionnaire in such a way that if the nurse educators do not answer or meet the
inclusion criteria, the survey will automatically end and the participant will not proceed. The estimated time for responding to the questionnaire should not exceed 35 minutes (Dillman, 2000). The survey logic requires all items on the questionnaire to be completed before moving to the next question; thus the expectation is that there will be no missing data. A complete description of the responding method and a business-hours telephone number (preferably a toll-free number), as well as the email address of the researcher, need to be provided in a cover letter along with the instrument. Participants should be encouraged to ask any questions they may have regarding participation, the questionnaire, or the research itself.

Two or more experts with extensive experience in survey construction should review the questionnaire for face and content validity. The experts will revise the questionnaire for greater clarity and ease of completion. Content validity, which evaluates the relevance of the elements and determines content representativeness, can be assessed through the use of the content validity index (CVI). In the CVI, relevance ratings of the data elements are done using a seven-point ordinal rating scale, with 1 representing an irrelevant data element and 7 representing an extremely relevant data element. The proportion of the experts who rate an element as content-valid determines the CVI for that data element. Cohen's kappa technique can be used to assess the experts' interrater reliability on items. A pilot test of the first round of the questionnaire needs to be conducted using a sample of 10 or more nursing education experts. The same expert inclusion criteria used to nominate the study subjects can be used in selecting respondents for the pilot testing. Questionnaire revision and modification based on the experts' responses during the pilot testing is helpful.
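The item-level CVI computation described above can be sketched in Python. This is a hypothetical illustration: the cutoff of 5 on the seven-point scale for counting a rating as "relevant" is an assumption, since the text does not specify where on the scale an element becomes content-valid.

```python
def item_cvi(ratings, relevant_cutoff=5):
    """Item-level content validity index: the proportion of expert
    raters whose relevance rating meets the (assumed) cutoff on the
    seven-point scale, where 1 = irrelevant and 7 = extremely relevant."""
    relevant = sum(1 for r in ratings if r >= relevant_cutoff)
    return relevant / len(ratings)
```

For example, if one of two experts rates an element 7 and the other rates it 3, the item CVI under this assumed cutoff is 0.5.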
Institutional Review Board

Approval from the appropriate Institutional Review Board must be obtained prior to data collection. Because this is an online electronic Delphi survey conducted with volunteer healthy adults, informed consent waivers may be granted. The first round of the questionnaire can be accompanied by a cover letter that provides complete information and detailed explanations about the nature of the research and the researcher's expectations of the participants. Subjects' completion and submission of the first round of the questionnaire will be considered their consent to participate in the study. All subjects' identifying information can be kept confidential and private. Full anonymity is maintained in this type of study.

Data Collection Procedures

Initially, an online invitation needs to be sent to all nursing educators selected for the sample to participate in the online Delphi study to develop an NEMDS. In that invitation, a brief orientation to the study can be presented. A few days later, the first round of the Delphi survey questionnaire, a unique Uniform Resource Locator (URL) link to the website where the questionnaire is located, an ID number, and a password to access the site can be sent along with a cover letter. This letter should explain in detail the nature of the study, the aims of the research, how and why the subjects have been nominated to participate in this study, the importance of their participation, the benefits and risks of participating, and the developers' expectations of the participants. Instructions regarding the method of responding, a suggested return date and time, and a complete address, business telephone numbers, and an email address of the researcher
need to be provided. This facilitates answering any questions that respondents may have regarding their participation, the questionnaire, or the research itself. Because this cover letter will be used as an informed consent form, participants who agree to participate and who respond to and submit the first round of the questionnaire are considered to have provided their informed consent.

Each participant needs to be asked to rate and comment on all items on the questionnaire. Using a voting system, responses can be made known immediately to the assembled panel. Responses to each round of questionnaires are analyzed and summarized. Draft feedback with graphic summaries can be returned to the experts for suggestions and revision, along with the revised and modified questionnaire based on the results from previous rounds. The respondents then can reformulate their opinions with the knowledge of the group's viewpoint in mind. This process of response-analysis-feedback can be repeated in rounds two and three, using the same software, Survey Monkey, until a general consensus of 70% agreement among participants is obtained. Anonymity of data and respondents can be maintained only if the online Delphi is used without the focus group method; however, confidentiality can be maintained throughout the study. Data can be held on personal rather than network computers, and data handling can be limited to a few people. A ten-day window to submit the survey electronically can be given for each round. Because the online Delphi can be fully anonymous, an email reminder has to be sent to all participants rather than only to non-respondents. Each round can be directly downloaded into Excel in a numerical data format and transferred to SPSS for analysis.
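The round-by-round 70% consensus check described above can be sketched as follows. This is a hypothetical illustration: treating ratings of 5 or higher on the seven-point Likert scale as "agreement" is an assumption, and the item names are invented for the example.

```python
def item_agreement(ratings, agree_min=5):
    """Proportion of panelists whose rating counts as agreement
    (assumed to be >= 5 on the seven-point scale)."""
    return sum(1 for r in ratings if r >= agree_min) / len(ratings)

def items_below_consensus(responses, threshold=0.70):
    """Return the items that have not yet reached the consensus
    threshold and therefore carry forward to the next Delphi round."""
    return [item for item, ratings in responses.items()
            if item_agreement(ratings) < threshold]
```

For a panel of five rating two items, `items_below_consensus({"faculty_rank": [7, 6, 6, 5, 2], "class_size": [2, 3, 7, 1, 4]})` flags only `class_size` (20% agreement) for another round, while `faculty_rank` (80% agreement) meets the 70% criterion.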
Data Analysis

Four types of statistical methods can be used to analyze the data.

First, a descriptive statistical analysis of the participant responses to each item on the Likert scale of the survey, using the mean, median, and standard deviation, can suggest consensus (Werley, 1986). Some researchers confirm that the degree of agreement can be assessed by variance: the lower the variance (or disagreement index), the greater the consensus (Deshpande & Shiffman, 2005). The literature indicates that it is important to know at what level of agreement/disagreement consensus was reached. McKenna (1994) and Williams and Webb (1994) indicated that acceptable levels of agreement using a Likert-type scale are reached at 51% and 55% levels of consensus. A consensus level of 70% will be acceptable for building the NEMDS. The mean, a measure of central tendency, and the standard deviation, a measure of spread, represent the amount of agreement or disagreement within a panel on an item. All data can be analyzed using statistical analysis software (e.g., SPSS).

Second, content analysis using the content validity index (CVI) for each item added by the Delphi experts will evaluate the relevance of the element and determine its content representativeness (Miles & Huberman, 1994).

Third, the multiple-rater Cohen's kappa technique can be used to assess the experts' interrater reliability on items in the instrument. Several examples exist in the literature (Colling, 2000; Siegel, 1988; Fleiss, 1981). A kappa of 0 indicates that observed agreement among raters is equal to agreement caused by chance alone. A kappa of 1.0 indicates perfect agreement among raters, beyond what would be expected by chance.
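The kappa interpretation above can be illustrated with a minimal two-rater sketch in pure Python (for the multiple-rater case the study cites Fleiss's extension). The rating labels are hypothetical; this is an illustration of the statistic, not the study's analysis code.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters: 0 means observed agreement equals
    chance agreement; 1.0 means perfect agreement beyond chance."""
    n = len(rater_a)
    # Observed proportion of items on which the two raters agree.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over categories of the product of each
    # rater's marginal proportions for that category.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_chance = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_obs - p_chance) / (1 - p_chance)
```

Two raters who always agree score 1.0; two raters whose agreement is exactly what their marginal frequencies predict score 0.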
Kappa ratings need to be calculated for the results of each round of the Delphi survey, as well as an overall multiple-rater kappa.

Fourth, exploratory data analysis using Multidimensional Scaling, a nonmetric approach, can be conducted. Multidimensional Scaling can give a more meaningful and interpretable solution for the instrument under study by obtaining measures of similarities and differences between several items among the three different levels of nursing programs. The computational strategy is to find spatial arrangements of low dimensionality in which the rank order of the distances in the space corresponds to the rank order of the similarity measures between items with minimal error. The computer program ALSCAL can be used to: 1) find a low-dimensional space in which the points represent the items being studied and the original similarities and differences between the items among various groups of nursing educators; and 2) represent the relationships among items as a geometric model or picture. The data matrix can be used to calculate a matrix of proximity scores between all pairs of items in the three different nursing programs. Proximity scores will reflect the degree of similarity or dissimilarity among a set of nursing education items being compared across different nursing education programs. Multidimensional Scaling will help identify the relationships between those various nursing education data elements and group them under a few groups or dimensions. Based on the responses of the Delphi panel, which will be subjected to several iterations to reach the lowest stress level possible, the researchers will be able to reach conclusions regarding the nursing education classification system (conceptual framework), or the taxonomy, to develop the minimum dataset. The Multidimensional Scaling results either will confirm the proposed
conceptual model in the study or will suggest other classifications for nursing education data elements at the lowest dimensionality.

Summary

This chapter described the possible research design, population, sample size, and sampling techniques. It explained two methods for data collection during different stages of the NEMDS development, the focus group and Delphi methods. It discussed systematically the essential steps for developing the NEMDS. The chapter also described the statistical techniques that can be used for data analysis and explained the advantages of using multidimensional analysis to build a taxonomy of nursing education data elements for developing the NEMDS.
Chapter Four: Discussion and Conclusion

The purpose of this study was to develop a deeper understanding of minimum datasets and to create a generic methodology for building an NEMDS. The study attempted to fulfill three objectives: 1) identify the essential domains and commonly used data elements and essential terms in nursing education; 2) adapt the systems model to serve as a conceptual framework and a taxonomic schema for organizing the essential data elements in nursing education; and 3) describe the steps and the methodological process of developing an NEMDS.

The steps formulated in this study, based on the literature review of previous experiences in developing minimum datasets in general, and in nursing specifically, represent a generic methodology for building the NEMDS. Due to the gap in concept synthesis of specific nursing education terminology, the study focused first on developing a hierarchic, schematic taxonomy of essential educational data elements. Because taxonomy development includes systematically organizing concepts and criterion links, its construction was considered a conceptual framework (Ryan-Wenger, 1992; Rasch, 1987). The taxonomy presented in this study used several attributes to group and classify data that are increasingly inclusive, and it suggested ways to expand the classification further by adding more specific data elements particular to any educational organization. The four major categories of student, organization, faculty, and curriculum were kept consistent throughout the three domains of the systems model: input, process, and
outcome. Clustering nursing education data elements according to input, process, and outcome helps to link and/or distinguish cause from effect. Each cluster or grouping directs the nursing education researcher toward uncovering the hypothesized causal elements for a specific outcome. Furthermore, distinctions identified by these simple clusters or groupings can render nursing education practice more intelligible to all observers.

Using two different methods of data collection, such as the focus group and online Delphi methods, to build the NEMDS, although each serves a different purpose, can lead to a more valid NEMDS through findings from both quantitative and qualitative data gathered from an expert nursing educator panel. Consensus and empirical and theoretical validation of the ontology and the taxonomy of the NEMDS will ensure the production of comparable data that can help in the evaluation and development of either the entire nursing education practice or some specific components of it. The consistency of the four categories used to cluster the data elements (student, organization, faculty, and curriculum) can make aggregation of data and comparison possible, and research questions can be addressed using these differentiating categories within the NEMDS. For example, researchers can compare variables of student outcomes from the output domain with teaching methodologies from the process domain to give meaning to the data collected and allow predictions of outputs associated with various teaching methods. The NEMDS can allow data to be collected once but used many times, by different people, at different times, in different settings, to make various inferences and conclusions regarding nursing education practices, nationally and internationally, if we have a consistent taxonomy and nursing education language (Fayyad, 1996; Epping et al., 2000).
The proposed methodology for developing the NEMDS meets the criteria of an accurate dataset, because the categories and data elements in the taxonomy (the conceptual framework used to develop the NEMDS) are mutually exclusive, exhaustive, and consistent with the concepts that help nursing education researchers describe, explain, and predict outcomes in the field of nursing education. The NEMDS can help transform simple information into meaningful knowledge that can be used by a school, state, or country to advance nursing education, research, and practice.

Summary

This chapter discussed the characteristics of the proposed generic methodology for developing an NEMDS, including the issues of identifying, defining, and unifying the nursing education terminologies; the benefits of using various data collection methods and data analysis techniques; and the possibilities of upgrading, expanding, and adding further sub-classifications to the proposed nursing education taxonomy to build the NEMDS.
Chapter Five: Limitations and Recommendations for Further Research

The literature has revealed several limitations to the process of developing minimum datasets, as well as to their uses in various disciplines. The process of developing this methodology confirmed the limitations identified in previous research and also revealed the following limitations:

The most important limitation is the lack of clear definitions of variables and unified terminologies that constitute the universe of values for each variable. Sometimes the concepts of MDS data elements (terms) match the names of the vocabularies used, but the definitions of the MDS data elements (terms) differ (Wheeller, 1992). A unified and standardized nursing education vocabulary, with definitions and defined relationships between nursing education terms and data elements, is crucial for building an NEMDS (Delaney & Moorhead, 1995). The reliability and validity of the database often are confused with the validity and reliability of the classification system (Ryan & Delaney, 1995). Updating existing MDS systems is expensive because it requires upgrading the existing data collection methods, changing classifications and instruments, and educating new users (PITAC, 2004).

Goossen and others (1998) confirmed that the NMDS that have been developed and applied in many countries have some common similarities, but there are also
differences in purpose, content, sampling techniques, research designs, data collection approaches, analysis and dissemination processes, and development stages. Consistency in those approaches and processes, and in the information technologies used in developing an NEMDS, can lead to a more universally accepted NEMDS and more meaningful data regarding nursing education practices.

Although an NEMDS clearly offers several advantages in assessing and improving nursing educational structures, processes, and outcomes, national and international comparisons of nursing education data will not be possible unless we have a unified nursing education language. Only a unified international nursing education taxonomy, with common nursing education terminologies and definitions, will allow the aggregation and comparison of nursing education data across the globe. If there is a difference in the level of consensus on the standardized incidence and prevalence estimates of specific and important data elements to be included in the NEMDS, or if the dataset is nationally and internationally incompatible with the items needed to construct the NEMDS, then the selection of research questions that can be answered by the dataset will be limited, and there will be nothing to compare.

Most MDS intend to meet the data needs of users at all levels: administrators, researchers, educators, and providers. A growing body of evidence supports the assertion that an MDS provides a discipline with substantial benefits for budgeting, financing, allocating resources, assessing and evaluating services, and research. However, the literature in general, and the nursing literature specifically, lacks empirical research that demonstrates the advantages of minimum
datasets in nursing. We need sound scientific, practical research to answer the question of whether MDS are worth the effort and costs spent to develop them. As Goossen stated, we need to balance the benefits and costs of creating minimum datasets against the results of using them.

At the present time, there is no universally uniform MDS in nursing in general, or in nursing education specifically, that is used with consistency worldwide. To build one, however, international effort and coordination are needed. Collaboration between nursing education MDS developers around the world is important during the early planning and building stages, as well as during the MDS validation process, to ensure the production of comparable data across geographical settings and time. Last but not least, understanding human nature and the variations among existing cultures, and understanding the differences and similarities in the needs, difficulties, and resources of different populations, nursing education programs, and countries around the globe, can help MDS developers better identify the terms and data elements needed for building a more universally accepted MDS.

Summary

This chapter discussed the main obstacles commonly faced by MDS developers, based on a review of different previous experiences and the process of developing an MDS. It listed specific limitations of developing an MDS and suggested recommendations for building one in nursing education. It also included directions for future research related to the NEMDS.
References

American Association of Colleges of Nursing. (2003, April). Strategic plan. Retrieved July 2004, from http://www.aacn.nch.edu/contactus/strtplan.htm

American Nurses Association (ANA). (2001). American Nurses Association recognized languages for nursing. Retrieved February 2, 2004, from http://www.nursingworld.org/nidsec/nilang.htm

Anderson, B., & Hannah, K. J. (1993). A Canadian NMDS: A major priority. Canadian Journal of Nursing Administration, 6(2), 7-13.

Archer, T. M. (2003). Web based survey. Journal of Extension, 41(4), 1-5.

Averill, C. B., Marek, K. D., Zielstorff, R., Kneedler, J., Delaney, C., & Milholland, D. K. (1998). ANA standards for nursing datasets in information systems. Computers in Nursing, 16(3), 157-161.

Baernholdt, M. L. (2003). Why an ICPN? Links among quality, information and policy. International Nursing Review, 50, 73-78.

Bean, J. P., & Eaton, S. B. (2000). A psychological model of college student retention. In J. Braxton (Ed.), Reworking the student departure puzzle (p. 57). Nashville, TN: Vanderbilt University Press.
Belisle, P. (1998). Digital recording of qualitative interviews. Quirk's Marketing Research Review, 12(11), 18, 60-61.

Bellenger, D. N., Bernhardt, K. L., & Goldstrucker, J. L. (1976). Qualitative research techniques: Focus group interviews. In T. J. Hayes & C. B. Tathum (Eds.), Qualitative research in marketing (pp. 10-25). Chicago: American Marketing Association.

Beretta, R. (1996). Issues in research: A critical review of the Delphi technique. Nurse Researcher, 3(4), 79-89.

Biolchini, J., & Patel, V. L. (2004). From thesauri to ontology: Knowledge acquisition and organization. MEDINFO (CD), 1525.

Blackburn, S. (1999). Think. Oxford: Oxford University Press.

Blegen, M. A., & Tripp-Reimer, T. (1997). Implications of nursing taxonomies for middle-range theory development. Advances in Nursing Science, 19(3), 37-49.

Bowles, K., & Naylor, M. (1996). Nursing classification systems. Image: The Journal of Nursing Scholarship, 28(4), 303-308.

Bowles, N. (1999). The Delphi technique. Nursing Standard, 13(45), 32-36.
Broomfield, D., & Humphris, G. M. (2001). Using the Delphi technique to identify the cancer education requirements of general practitioners. Medical Education, 35(10), 928-937.

Burgun, A., & Bodenreider, O. (2001). Mapping the UMLS Semantic Network into general ontologies. Proceedings of the AMIA Symposium, 81-85.

Burkhart, L., Konicek, R., Moorhead, S., & Androwich, I. (2005). Mapping parish nurse documentation into the nursing interventions classification: A research method. CIN: Computers, Informatics, Nursing, 23(4), 220.

Carlton, K. H., Ryan, M. E., & Siktberg, L. L. (1998). Designing courses for the Internet: A conceptual approach. Nurse Educator, 23, 45-50.

Chaffin, A., & Maddux, C. D. (2004). Internet teaching methods for use in baccalaureate nursing education. CIN: Computers, Informatics, Nursing, 22(3), 132-142.

Clark, L. A., Watson, D., & Reynolds, S. (1995). Diagnosis and classification of psychopathology: Challenges to the current system and future directions. Annual Review of Psychology, 46, 121-153.

Clark, J., & Lang, N. M. (1992). Nursing's next advance: An international classification for nursing practice. International Nursing Review, 39, 109-112.

Cloninger, C. R. (2005). Character strengths and virtues: A handbook and classification [Book forum: Ethics, values, and religion]. The American Journal of Psychiatry, 162(4), 820-821.

Coenen, A., McNeil, B., Bakken, S., Bickford, C., & Warren, J. J. (2001). Toward comparable nursing data: American Nurses Association criteria for datasets, classification systems, and nomenclatures. Computers in Nursing, 19(6), 240-248.
Coenen, A., Ryan, P., Sutton, J., Devine, E. C., Werley, H. H., & Kelber, S. (1995). Use of the NMDS to describe nursing interventions for select nursing diagnoses and related factors in an acute care setting. Nursing Diagnosis, 6(3), 108-114.

Coenen, A., & Schoneman, D. (1995). The NMDS: Use in the quality process. Journal of Nursing Care Quality, 10(1), 9-15.

Cohen, L., Manion, L., & Morrison, K. (2000). Research methods in education (5th ed.). London: Routledge Falmer.

Colling, K. B. (2000). A taxonomy of passive behaviors in people with Alzheimer's disease. Journal of Nursing Scholarship, 32(3), 239-244.

Coury, B. G. (1987). Multidimensional scaling as a method of assessing internal conceptual models of inspection tasks. Ergonomics, 30, 959-973.

Creason, N. S., Pouge, N. J., Nelson, A. A., & Hoyt, C. A. (1985). Validating the nursing diagnosis of impaired physical mobility. Nursing Clinics of North America, 20, 669-683.

Crisp, J., Pelletier, D., Duffield, C., & Adams, A. (1997). The Delphi method. Nursing Research, 46, 116-118.

Dalkey, N., & Helmer, O. (1963). An experimental application of the Delphi method to the use of experts. Management Science, 9, 458-467.

De Clercq, P. A., Blom, J. A., Hasman, A., & Korsten, H. H. (2004). An ontology-driven approach for the acquisition and execution of clinical guidelines. Studies in Health Technology and Informatics, 77, 714-719.

Debus, M. (1990). Handbook for excellence in focus group research. Washington, DC: Academy for Educational Development.
76 Delaney, C., & Moorhead, A. (1995). The NMDS, standardized language, and health care quality. Journal of Nursing Care Quality, 10 (1), 16-30. Delaney, C., Mehmert, P., Prophet, C., Be llinger, S., Huber, D., & Ellerbe, S. (1992). Standarized nursing language for health care information systems. Journal of Medical Systems,16 (4), 145-159. Delaney, C., Reed, D., & Clark, M. ( 2000). Describing patient problems and nursing treatment patterns using nursing mi nimum datasets (NMDS & NMMDS) and UHDDS repositories. Proceedings of the AMIA Symposium 176-179. Delphi Group Report. (2002). Taxonomy & Content Classification, Delphi Group Report. Denehy, J., & Poulton, S. (1999). Inform ation management and the computer: The use of standardized language in individualized health care plans. Journal of School Nursing, 15(1), 38-45. Deshpande, A.M., Shiffman, R.N. (2003). Delphi rating on th e internet. AMIA Annual Symposium Proceedings. 828. Dillman, D.A. (2000). Mail and internet surveys: the tailored design method (2 nd ed.). New York: John Wiley. Donabedian, A. (1980). Exploration in quality Assessment and monitoring, Vol. 1: Definition of quality and approaches of its assessment Ann Arbor, Michigan: Health administration Press. Duffield, C. (1993). The Delphi techni que: A comparison of results obtained using two expert panels. International Journal of Nursing Studies, 30 (3), 227-237.
77 Edols, L. (2001). Taxonomies are what? Free Print (97) Retrieved July 2005, from elements. Computers in Nursing, 15(2), 23-32. Ellemers, N., Spears, R., & Doosje, B. ( 2002). Self and social identity. Annual Reviews of Psychology Inc., Palo Alto, CA. 53, 161-186. Everett, A. (1993). Piercing the veil of th e future: a review of the Delphi method of research. Professional Nurse, 9 (3), 181-185. Fahrenkrug, M. A. (2003) Development of a nursing data set for school nursing. Journal of School Nursing 19(4), 238-248. Fayyad, U.M., Piatetsky-Shapiro, G., Smyth, P., & Uthurusamy, R. (1996). Advances in knowledge discovery and data mining (ed.). American Association of Artificial Intelligence Menlo Park, CA. Fiander, M., & Burns, T.A. (2000). A De lphi approach to describing service models of community mental health practice. Journal of Psychiatric Services, 51 (5), pp 656-658. Fitzsimmons, J.A. and Fitzsimmons, M.J. (2001). Service management: operations, strategies and information technology (4 th ed.), Boston: McGraw Hill. Fleiss, J.L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76 (5), 378-382. Fleiss, J.L. (1981). Statistical me thods for rates and proportions (2 nd ed.). New York: Wiley. Fry, M., & Burr, G. (2001). Using the Del phi technique to design a self-reporting triage survey tool. Journal of Accident and Emergency Nursing, 9 (4), 235-241.
78 Gassert, C.A. (1998). The challenge of meeting patients needs with a national nursing informatics agenda. Journal of the American Medica l Informatics Association, 5(3), 263-268. Gergen, K.J. (1995). Social construction and the educational process. In L. Steffe, & J. Gale, Constructivism in education (pp. 17-40). Hove, UK: Lawrence Erlbaum Associates. Goldman, A.E., & Mcdonald, S.S. (1987). The Group Depth Interview Englewood Cliffs, NJ: Prentice Hall. Goodman, C. M. (1987). The De lphi technique: A critique. Journal of Advanced Nursing, 12(6), 729-734. Goossen, W.T.F., Epping, P.J.M., Feuth, T., Dassen, T.W.N., Hasman, A., & van den Heuvel, W. J. A. (1998). A co mparison of nursing minimal datasets. Journal of the American Medical Informatics Association, 5 (2), 152-163. Goossen, W.T.F., Epping, P.J.M., Feuth, T ., van den Heuvel, W.J.A., Hasman, A., & Dassen, T.W.N. (2001). Using the NMDS for the Netherlands (NMD SN) to illustrate differences in patient populations and variations in nursing activities. International Journal of Nursing Studies, 38 (3), 243-257. Goossen, W.T.F., Epping, P.J.M., Van den Heuvel, W.J.A, Feuth T., Frederiks, C.M.A., Hasman, A. (2000). Development of nursing minimum data set for the Netherlands (NMDS): identification of categories and items. Journal of Advanced Nursing 31, 536-547. Graham, B., Regehr, G., Wright, J.G. ( 2003). Delphi as a method to establish consensus for diagnostic criteria. Journal of Clinical Epidemiology 56(12), 1150-1156.
79 Greenbaum, T.L. (1998). TheHhandbook for Focus Group Research Thousand Oaks, CA: Sage. Griens, A.M.G.F., Goossen, W.T.F., & Van der Kloot, W.A. (2001). Exploring the NMDS for The Netherlands using multidimensional scaling techniques. Journal of Advanced Nursing, 36(1), 89-95. Gudmundsdottir, E., Delaney, C., Thoroddsen, A., & Karlsson, T. (2004). Nursing and health care management and policy. Tr anslation and validation of the Nursing Outcomes Classification, labels and defi nitions for acute care nursing in Iceland. Journal of Advanced Nursing, 46 (3), 292-303. Gupta, U.A. & Clarke, R.E. (1996). Th eory and application of the Delphi technique: A bib liography (1975-1994). Technological Forecas ting and Social Changes Hall, A.D. & Fagen, R.E. (1950). Forward. In L. Nicoll (Ed.). Perspectives on nursing theory, (3 rd ed.). New York: Lippincot. Halsted, J. & Coudret, N., (2000). Impl ementing Web-Based Instruction in a School of Nursing: Implicati ons for Faculty and Students. Journal of Professional Nursing, 16(5), 273-281. Hanafine, S., (2004). Review of literature on the Delphi technique. Oxford, UK: Blackwell. Hannah, K. (2001). Canada. In V.K. Saba, et al., (Eds), Essentials of computers for nurses: Informatics for the new millennium (pp. 479-488). McGraw Hill. Hannah, K., Duggleby, W., & Anderson, B. (1 995). The development of essential data elements in Canada. MEDINFO. Paper presented at the 8 th World Congress on Medical Informatics.
80 Hardiker, N.R.B.S. (2003). Logical ontology for mediating between nursing intervention terminology systems. Methods Informatics Medicine, 42 (3), 265-270 Hardiker, N.R.B.S. (2001). Mediating between nursing inte rvention terminology systems. Proceedings of the AMIA Symposium: pp 239-243. Hardiker, N.R.B.S. (2004) Requirement of tools and techniques to support the entry of structured nursing data. Paper presented at MEDINFO 2004, San Francisco, CA. Hasson, F., Keeney, S., & McKenna, H. (2000). Research guidelines for the Delphi survey technique. Journal of Advanced Nursing, 32 (4), 1008-1015. Henry B., Moody, L.E., Pendergast J., ODonnell, J., Hutchinson SA, & Scully G. (1987). Delineation of nursing administration research priorities. Journal of Nursing Research, 36 (5), 309-314. Hitch, P. J., & Murgatroyd, J. D. (1983). Professional communications in cancer care: a Delphi survey of hospital nurses. Journal of Advanced Nursing, 8 (5), 413-422. Houfek, J.F. (1992). Nurses perceptions of the dimensions of nursing care episodes. Nursing research 41(5), 280-285. Huber, D., Schumacher, L., & Delane y, C. (1997). Nursing Management Minimum Dataset (NMMDS). Journal of Nursing Administration, 27(4), 42-48. I.C.O.N.S. (2002). Interagency colla borative on nursing statistics. Nurses, Nursing Education and Nursing Workforce: Definitions. AACN. Retrieved May 2004, from http://www.iconsdata.org/educationrelated.htm Iowa Intervention Project. IIP. (1993). The NIC taxonomy structure. Image: Journal of Nursing Scholarship, 25 187-192.
81 International Medical Informatics Asso ciation IMIA. (1999) Recommendations of the international Medical Informatics Asso ciation on education in health and medical informatics. Retrieved February 4, 2004, from http://www.imia.org Jackson, V.A., Palepu, A., Szalacha, L., Caswell, C., Carr, P.L., & Inui, T. (2003). Having the right chemistry: A qualitative study of mentoring in academic medicine. Journal of Academic Medicine, 78 (3), 328-334. Jairath, N., & Weinstein, J. (1994a). The Delphi methodology (part one): a useful Jairath, N., & Weinstein, J. (1994b). The Delphi methodology (par t two): a useful administrative approach. Canadian Journal of Nursing Administration, 7 (4), 7-20. Jeppson, L. Nursing Education Minimum Datase : Adding Quality to Nursing. Paper presented at the Nursing Info rmatics 96:263, University of Iowa. Johnson, M., Gardner, D. Kelly K., et al. (1994). The Iowa model: a proposed model for nursing administration. Journal of Nurse Economy. 9 (4), 255-262. Johnson, M., Mass, M. (1997) Nursing Outcomes Classification Iowa Outcomes project. St. Louis, Mo: Mosby. Jones, J. & Hunter, D (1995). Consensus Methods for Medical and Health Services Research. British Medical Journal, 311 376-380. Jones, J. Sandeson, C., & Black, N. (1992). What will happen to the quality of care with fewer junior doctors? A Delphi study of consultant physicians views. Journal of the Royal College of Physicians of London. 26(1), 36-40. Jones, P.E., (1982). Developing terminology: University of Toronto experience (1978). In M.J. Kim & D.A. Moritz (Eds.), Classification of Nursing Diagnosis
82 Proceedings of the 3rd and 4th National C onference, (pp. 138-145). New York: McGrawHill. Junger, A., Berthou, A., & Delaney, C. (2004 ). Modeling, the essential step to consolidate and integrate a national NMDS Amsterdam: IOS Press. Karpiuk, K.L., Delaney, C., & Ryan, P. (1997). South Dakota st ate wide Nursing Minimum Dataset Project. Journal of Professional Nursing, 13, 76-83. Keenan, G., & Aquiline, M.L. (1998). St andardized nomenclatures: Keys to continuity of care, nursing accountability and nursing effectiveness. Journal of Outcomes Management of Nursing Practice 2 (2):81-6. Keeney, S., Hasson, F., & McKenna, H. P. ( 2001). A critical revi ew of the Delphi technique as a research methodology for nursing. International Journal of Nursing Studies, 38 (2), 195-200. Kiernan, N.E., Oyler, M.A., Kiernan, M., & Gilles, C., (2005). Is a web survey as effective as a mail survey? A field experiment among computer users. American Journal of Evaluation. 26 (2), 245-252. Kim, M.J. et al., (1984). Clinical valida tion of cardiovascul ar nursing diagnosis, (ed), Classification of Nursing Diagnosis Proceeding of the 5th national conference. St. Louis: Mosby, 128-137. King, I. (1998). Nursing informatic s: a universal nursing language. Florida Nurse, 46(1), 1-3, 5, 9. Kravitz, R. L., Bell, R.A., & Frans, C.E. (1999). A taxonomy of requests by patients (TORP): A new system for understandin g clinical negotiation in office practice. The Journal of Family Practice, 48 (11), 872-878.
83 Krippendorff, K. (2004). Web Dictionary of Cybernetics and Systems Definitions. Retrieved July 2004, from http://www.asccybernetics Kritek, P.B. (1984). Report of the group on taxonomy, from Classification of Nursing Diagnosis. Proceedings of the 5th national conference. St.Louis: Mosby, pp 4658. Krueger, R. A. & Casey, M.A. ( 2000). Focus Groups: A Practical Guide for Applied Research (3 rd ed.). Thousand Oaks, CA.: Sage. Krueger, R. A., & King, J.A. (1998). Involving Community Members in Focus Groups Thousand Oaks, CA: Sage. Kruskal, J.B., & Wish, M. (1978). Multidimensional Scaling Beverly Hills, CA: Sage. Lang, N. M., Hudgings, C., Jacox, A., Lancour, J., McClure, M. L., McCormick, K., et al. (1995). Toward a national database for nursing practice. In An emerging framework: Data system advances for clinical nursing practice. American Nurses' Association American Nu rses Publishing, # NP-94. Lanterman, E.D., & Feger, H. (1980). Similarities and Choice Lang Druck AG, Bern-liebefield, 54-70. Lawson, W.J., & Ogg, P.J. (1989). Analys is of phonetic relationships among populations of the avian genus Batis (Platyst eirinae) by means of cluster analysis and multidimensional scaling. Biometrics Journal, 31 243-254. Leahy, C.W. (2004). The Birdwatchers Companion to North American Birdlife. Princeton, NJ: Princeton University Press.
84 Lehman, J. (2003). Taxonomies for pract ical information management. NIE Enterprise Search. Retrieved from http://www.searchtools.com/info/classifiers.html Lindeman, C.A. (1975). Delphi Survey of priorities in clini cal nursing research. Nursing research 249(6), 434-441. Linstone, H.A., & Turoff, M. (2002) eds. The Delphi Method Techniques, And Applications Portland State University. Marek, K., Kneedler, J., Zielstorff, R., De laney, C., Marr, P., Averill, C., et al. (1997). Nursing information and dataset evaluation center. In Nursing Informatics: The Impack of Nursing Knowledge on Health Care Informatics Proceedings of 1997 6 th Triennial International Congress of IMIA-NI, Nursing Informatics of International Medical Informatics Associa tion (Gerdin, U., et al.). IOS Press 46 (257-62). Marshal, I.C. & Rossman, G.B. (1995). Designing Qualitative Research 2 nd Edition, London: Sage. Mass, M., & Delaney, C. (2004). Nursi ng process outcomes linkage research: Issues, current status, and health policy implications Journal of Medical Care, American Public Health, 4 (2), 40-48. Mass, M.L., Johnson, M., & Moorhea d, S. (1996). Classifying Nursing Sensitive Patient Outcomes. Image-The Journal of Nursing Scholarship, Sigma Theta Tau International, 28(4), 295-301. McCloskey, B.A., & Diers, D.K. (2005). Effects of New Zelands health reengineering on nursing and patient outcomes. Journal of Medical Care, 43 (11), 11401146.
85 McCloskey, J. C., & Bulecheck, G. (1995). Nursing Intervention Classification Iowa Intervention Project, ( 2nd ed.), St. Louis, MO: Mosby. McCloskey, J. C., Gamma, & Bulecheck, G. (1995). Iowa Intervention Project (IIP) Validation and coding of the NIC Taxonomy structure. Image-The Journal of Nursing Scholarship, 27 (1), 43-49 McCormick, K. A., & Zielstorff, R. (1995). Building a Unified Nursing Language System (UNLS). In An emerging framework: data system advances for clinical nursing practice American Nurses Publishing, #NP-94. McCormick, K. A., Lang, N., Zielstorff, R., Milholland, K., Saba, V., & Jacox, A. (1994). Toward standard classification sc hemes for nursing language: Recommendations of the American Nurses Association Steering Committee on Databases to Support Clinical Nursing Practice. Journal of the American Medical Informatics Association, 1(6), 421-427. McCormick, K., Renner, A. L., Mayes, R., Regan, J., & Greenberg, M. (1997). The Federal and private sector roles in the development of minimum data sets and core health data elements. Computers in Nursing, 15 (2), 23-32. McHugh, P.R. (2005). Striving for cohe rence: psychiatrys efforts over classification. Journal of the American Medical Association, 293 (20), 25262528. McKenna, H. P. (1994). The Delphi technique: a worthwhile research approach for nursing? Journal of Advanced Nursing, 19 (6), 1221-1225. Miles, M.B., & Huberman, M.A. (1994). Qualitative Data Analysis (2 nd ed). Thousand Oaks, CA: Sage.
86 Monti, E.J., & Tingen, M.S. (1999). Multip le paradigms of nursing sciences. Journal of Advances in Nursing Sciences, 21 (4), 64-80. Moody, L. (2004). Taskforce to Develop a Nursing Education Minimum Dataset (NEMDS). NLN Nursing Education Research Advisory Council (NERAC). Moorhead, S., Head, B., Johnson, M., & Maas, M. (1998). The nursing outcomes taxonomy: Development and coding. Journal of Nursing Care Quality, 12 (6), 56-63. Morrel-Samuels, P. (2003). Web surveys hidden hazards: Companies replacing paper surveys with web-based version. Harvard Business Review, 81 (7), 16-17. Mortensen, R. (1996). International classification for nursing practice (ICPN) with telenurse introduction Copenhagen, Denmark:Danish Institute for Health and Nursing Research Murphy, M.K., Black, N.A., Lamping, D.L., McKee, C.M., Sanderson, C.F.B., Askham, J., et al. (1998). Consensus devel opment methods and their use in clinical guideline development. Health Technology Assessment, 2 (3), 2p. Nathens, A.B., Rivara, F.P., Jurkovich, G.J., Maier, R.V., Johansen, J.M., & Thompson, D.C. (2003). Management of the in jured patient: Identification of research topics for systematic review using the Delphi technique. Journal of Trauma, Injury, Infection, and Critical Care, 54 (3), 595-601. National Health Services. (2005). Health and Social Care Information Center, Dataset Development Program: Definition of terms. Retrieved July 2005, from http://www.icservices.nhs.uk/dataset/pages/dataset_definitions.asp
87 National Health Services. (2005). Populat ion Health and Services Management Information. The National Health Development Programme. SNOMED CT Spectre Workshop. National League for Nursing (2002). Advisory councils. Retrieved from http://www.nln.org/abou tnln/nlncouncils.htm National League for Nursing (2004). Priori ties for research in nursing education. Retrieved from http://www.nln.org/aboutnln/research.htm National League for Nursing. (2002). Na tional League for Nursing GoalsOur Ends. Retrieved from http://www.nln.org/aboutnln/ourmission.htm National League for Nursing. (2003). Nurs ing Education Minimum Dataset Task Force Report. Retreived November 2004, from http://www.nln.org/abou tnln/research.htm Oppenheimer, G.M. (2001). Paradigm lost: race, ethnicity, and the search for new population taxonomy. American Journal of Public Health, 91 (7), 1049-1055. Osoba, D. (2002). A taxonomy of the uses of health-related quality-of-life instruments in cancer care and the clin ical meaningfulness of the results. Journal of Medical Care, 40 (6), III-31-III-38. P.I.T.A.C. (2004). Revolutionizing hea lth care through information technology. Presidents Information Technology Adviso ry Committee, June 2004, Report to the President. National Coordination Office for Information Technology Research and Development. Retrieved November 2004, from http://www.nitrd.gov Patience, N., & Chalmers, R. (2002). Unstructured data management: The elephant in the corner. The 451 Report Retrieved July 2005, from http://www.searchtools.com/info/classifiers.html
88 Pew. (1998). Recreating health professi onal practice for a new century. The 4th Report of the Pew Health Professions Commission. Phill, J. (1971). The Delphi method: S ubstance, context, a critique, and an annotated bibliography. Socio-Economic Planning and Sciences, 5 57-71. Polit, D., & Hungler, C. (1995). Nursing Research: Methods, Appraisal and Utilization (3 rd ed.). Philadelphia: Lippincott. Polit, D., & Hungler, C. (2004). Nursing Research Principles and Methods (7 th ed.). Philadelphia: Lippincott Williams Wilkins. Polit, DF, & Beck, CT. (2004). Nursing Research Principles and Methods (7 th ed.). Philadelphia: Lippincott Williams Wilkins. Powell, C. (2003). The Delphi te chnique: Myths and realities. Journal of Advanced Nursing, 41(4), 376-382. Prophet, C. M., & Delaney, C. W. (1 998). Nursing outcomes classification: Implications for nursing information system s and the computer-based patient record. Journal of Nursing Care Quality, 12 (5), 21-29. Rantz, M.J. (1995). Quality measurem ent in nursing: where are we now? Journal of nursing care quality, 9 (2), 1-7. Rapoport, A. (1968) Forward. In L. Nicoll (Ed.). Perspectives on nursing theory (3 rd ed.). Lippincott. 385. Rapoza, J. (2002). Standard target categorization. E Week. Retrieved July 2005, from http://www.searchtools.co m/info/classifiers.html Rasch, R.F.R, (1987). The nature of taxonomy Image: Journal of Nursing Scholarship, 19 (3), 147-149.
89 Reed, J. & Roskell,V. (1997). Focus group: Issues of analysis and interpretation. Journal of Advanced Nursing (26) 765-771. Reid, N. (1988). The Delphi technique: It s contribution to the evaluation practice. In Professional Competency and Quality Assurance in the Caring Professions (pp. 230254). London: Chapman & Hall. Renner, A. L., & Swart, J. C. (1997). Patient core dataset: Standard for a longitudinal health/medical record. Computers in Nursing, 15(2), Supplement S7-13. Renwick, M., National minimum datase ts: The Australian experience. Information Technology, 4 (3), 49-53. Reynolds, P.D. (1971). A Primer in Theory Construction New York: BobbsMerill. Roberts-Witt, S.L. (1999). Practical ta xonomies: Hard-won wisdom for creating a workable knowledge classification system. Knowledge Management: Built to Order. Retreived July 2005, from http://www.searchtools.com/info/classifiers.html Robins, L.S., Braddock, C.H., Fryer-Edw ards, & Kelly, A. (2002). Using the American Board of Internal Medicines Elements of Professionalism for undergraduate ethics education. Journal of Academic Medicine, 77 (6), 523-531. Robson, C., (1993). Real World Research Oxford: Blackwell. Rogers, M.R., and Lopez, E.C., (2002). Id entifying Critical Cross Cultural School Psychology Competencies. Journal of School of Psychology, 40 (2), 115-141. Ryan, P., & Delaney, C. (1995). Nursing minimum dataset. Annual Review of Nursing Research, 13 169-194.
90 Ryan-Wegner, N.M. (1992) A taxonomy of childrens coping strategies: A step toward theory development. Journal of Orthopsychiatry, 62 (2), 256-263. Saba, V. K. (1992a). The classification of home health care nursing: diagnoses and interventions. Caring, 11(3), 50-57. Saba, V. K. (1992b). Home hea lth care classification, part 2. Caring, 11(5), 5860. Saba, V. K., & Zuckerman, A.E. (1992) A new home health classification method. Caring, 11(10), 27-34. Saba, V., & Zuckerman, A.E. (1992). A home health care classification system. In Lun, K.C., et al. (Eds.). MEDINFO, 92(3), 34-38. Savant, P., & Dillman, D.A. (1994). How to Conduct Your Own Survey New York: John Wiley & Sons. Schiffman, S.S. Reynolds M.L., & Young, F.W. (1981) Introduction to MDS: Theory, methods, and application. United Kingdom: Academic Press. Schwandt, T.A. (2000). Three epistemologi cal stances for qualitative inquiry. In N.K. Denzin, & Y.S. Lincoln (Eds.), Handbook of Qualitative Research (2 nd ed.). London: 189-213. Sermeus, W., & Delesie, L. (1994). The re gistration of a NMDS in Belgium. In Nursing Informatics: An International O verview for Nursing in a Technological Era (pp. 144-149). Amsterdam, The Netherlands: Elsevier North Holland. Sharkey, S.B., and Sharples, A.Y., (2001) An approach to consensus building using the Delphi technique: developing a learni ng resource in mental health. Journal of nursing education today, 21: 398-408.
91 Shepard, R.N. (1972). Introduction to Volume 1. In R.N. Shepard, A.K. Romney, & S.B. Nerlove (Eds.), Multidimensional scaling: Theory and applications in the behavioral Sciences, Vol 1, pp. 1-20. New York: Seminar Press. Skews, G., Meehan, T., Hunt, G., Hoot, S., & Armitage, P. (2000). Development and validation of clinical indicators for mental health nursing practice. Australian and New Zealand Journal of Me ntal Health Nursing, 9(1), 11-18. Smith, N.J., & Iles, K. (1988). A graphical depiction of multivariate similarities among sample plots. Canadian Journal of Forestry Research, 18 467-472. Snyder-Harpen, R. (2002). Indicators of organizational readiness for clinical information technology/systems innovation: A Delphi study, intern ational journal of medical informatics(63) 179-204. Snyder-Harpen, R., Thompson, C. & Schaffer, J. (2003). Comparison of mailed vs. internet applications of the Delphi t echnique in clinical informatics research. Retrieved November 2004 from http ://www.amia.org/pubs/symposia/ D200120.PDF. Stagger, N., Gassert, C.A., & Curran, C. (2002). A Delphi study to determine informatics competencies for nurses at four levels of practice. Journal of Nursing Research 12 (51), 383-389. Stevanovic, R.,Tiljak, H., Stanic, A ., Varga, T., Jovanovic, A. (2005). International classification of primary car e and its application in Croatian health. Journal of ActaMed Croatica, 59 (3), 267-271. Stevens, K.R., (1999). Advancing evidence -based teaching. In K.R. Stevens, & V.R.Cassidy (Eds.), Evidence-Based Teaching: Current Research in Nursing Studies, 34(1)63-71.
92 Stokes, F. (1997). Using the Delphi Techni que in Planning of a Research project. On the Occupational Therapists Role in Enabling People to Make Vocational Choices Following Injury. British journal of Occupational therapy, p. 263-267 Stucki, G. (2005). Interna tional Classification of F unctioning, Disability, and Health (ICF): A promising framework and classification for rehabilitation medicine. American Journal of Physical Medicine & Rehabilitation, 84 (10), 733-740. Sullivan, E. J., (1997). A changing higher education environment. Journal of professional nursing, 13, 143-148. Tinto, V. (1988). Student Integration M odel. Leaving College Chicago: University of Chicago Press. Tong, S.T.Y. (1989). On nonmetric multid imensional scaling, ordination and interpretation of matorral vegetation in law land Murcia. Vegetatir, 79 65-74. Torgerson, W.S. (1958). Theory and Method of Scaling New York: Wiley. Turner, J.H. (1986). The Structure of Sociological Theory (4th ed.). Chicago: Dorsey Press. Van Gele, P. (1996). Standardization et la classification des soins infirmiers: essentielles? (Is eine standardizatisie rung und klassifiziner ung der pflegtatigkeit notwendig?) PCS Ne ws, (23):30-36 Volrathongchi, K., Delaney, C.,& Rhuphaibul R. (2003). Nursing and health care management issues: NMDS development and implementation in Thailand. Journal of Advanced Medicine, 43 (6), 588-596.
93 Wagner, P. S., (1999). The trend to ward regional governance in nursing management in Canada In J.M. Hibberd & D.L. Smith (Eds), (2 nd ed.) pp.109-133. W.B. Saunders, Toronto, Ontario. Walker, L.O., & Avant, K.C. (1995). Strategies for Theory Construction in Nursing (3 rd ed.). Norwalk, CT: Appleton & Lang. Wang, S.J.B., Middleton, L.A., Prossere, C.G., Bardon, C.D., Spurr, P.J., Carchidi, A.F., Kittler, R.C., Goldszr, D.G., Fairchild, A.J., Sussman, G.J., Kuperman, D.W., Bates, A., (2003). A cost benefits analysis of electroni c medical record in primary care. American journal of medicine. 114(5), 397-403. Warner, A.J. (2005) A taxonomy primer. Lexonomy Retrieved July 2005, from: http://www.lexonomy.com/publications/a taxonomy.primer.html Weber, P. Les donnes minimales de soins infirmiers in soins infirmiers Krankenpflege Soins Infirmiers, 4: 13-7. Werley, H. H., Ryan, P., & Zorn, C. R. (1995). The NMDS: A framework for the organization of nursing language. In An Emerging Framework: Data System Advances for Clinical Nursing Practice American Nurses' Association. American Nurses Publishing, #NP-94. Werley, H., & Lang, N. (1988). Identification of the NMDS. New York: Springer. Werley, H.H., Devine, E.C., Zorn, C.R., Ryan, P., Westra, B. (1991). The nursing minimum datset: Abstraction tool for standardized, comparab le, essential data. American journal of public health 81(4),421-42. Wheeler, Q. D. (2004). Taxonomic tria ge and the poverty of Phylogene. Philosophical Transcript of the Royal Society, London. Biology 359 : 571-583.
94 Wheeller, M. (1992). Terminology and minimum datasets for the nursing profession. Information Technology in Nursing, 5 (3), S1-5. White, D. (2005). National Health Servi ces. Population Health and Services Management Information. The National Health Development Programme, The National Dataset Development Programme. A SNOMED CT Spectre Workshop Retrieved July 2005, from http://www.icservices.nhs.uk/dataset/pag es/presentations/s pectre/default.asp Williams, P., & Webb, C. (1994). The Delphi Technique: A methodological discussion. Journal of Advanced Nursing, 19, 180-186. Wilson, J.M., & Retsas, A.P. (1997). Pe rsonal construct of nursing practice: Comparative analysis of three groups of Australian nurses. International Journal of NursingStudies, 34(1): 63-71. Wilson, M., Engelharde, G., & Draney K. (1997). Objective measurement theory into practice, Vol 4 (113-157). London: Ablex Corporation. Yonge, O.J., Anderson, M., Profetto-McGrat h, J., et al. (2005). An inventory of nursing education research. International Journal of Nu rsing Education Scholarship, 13 : 77-79. Young, F.W. & Hamer, R. (1987). Multidimensional Scaling: History, Theory, and Application Hillsdale, NJ: Erlbaum. Young, F.W. & Harris D.F. (1994). Multidimensional scaling. SPSS Professional Statistics, 6.1. Chicago: SPSS Inc. Zielestorff, R.D., Lang, N.M., Saba,V .K., McCormick, K.A, & Milholland D.K.(1995) Towards a unified language for nur sing in the U.S.: Work of the American
95 Nurses Association Steering Committee on Da tabases to Support Clinical Practice. Medinfo, 8, Part 2: 1362-6.
Appendix A: Nursing Education-Related Terminologies

1. Practical/Vocational Nursing Program*. A program of instruction, usually 12 to 18 months in length, generally within a high school, vocational/technical school, or community/junior college setting, the completion of which results in a diploma or certificate of completion and eligibility to apply for licensure as an LPN/VN.

2. Basic (or Entry or Generic Level) Program*. A program of instruction that prepares individuals for entry into registered nurse practice and eligibility to apply for licensure as an RN.

3. LPN/VN to Associate Degree in Nursing Program*. A program of instruction to prepare registered nurses that is specifically designed to admit individuals licensed as practical/vocational nurses and, at completion, awards an associate degree in nursing and eligibility to apply for licensure as an RN.

4. Diploma Nursing Program*. A program of instruction, usually two to three years in length, within a hospital-based structural unit, the completion of which results in a diploma or certificate of completion and eligibility to apply for licensure as an RN.

5. Associate Degree Nursing Program*. A program of instruction, usually two years in length, generally within a junior or community college, the completion of which results in an associate degree (e.g., AS, AA, AAS, ADN) with a major in nursing and eligibility to apply for licensure as an RN.

6. Baccalaureate Nursing Program*. A program of instruction, usually four years in length, within a senior college or university, the completion of which results in a baccalaureate degree (e.g., BA, BS, BSN) with a major in nursing, if not already licensed as an RN, and eligibility to apply for licensure as an RN.

7. Master's Nursing Program.
A program of instruction within a senior college or university that builds on baccalaureate competencies and focuses on an area of specialization, the completion of which results in a master's degree (e.g., MSN, MS, MA) with a major in nursing and, if not already licensed as an RN, eligibility to apply for licensure as an RN.

8. Doctoral Nursing Program. A program of instruction within a senior college or university that prepares a clinical, educational, or research scholar, the completion of which results in a doctoral degree in nursing (e.g., Ph.D., DNSc, Ed.D.).

9. Nurse Doctorate or Doctor of Nursing Program (ND). A program of instruction in a senior college or university that prepares clinical practitioner/scholars to assume advanced practice clinical and leadership roles. Generally, ND programs are designed as generic (basic or entry-level) programs for individuals with bachelor's degrees in a discipline other than nursing. Upon completion, graduates are awarded a doctor of
nursing (ND) degree. In general, ND students are eligible to apply for RN licensure after the first two years of the program. (Note: This program is different from a Doctoral Program in Nursing.)

10. Post-Master's Certificate. A formal, post-graduate program that admits RNs with master's degrees in nursing and, upon completion of a specialized area of study, awards either a certificate or other evidence of completion. (Note: This program is different from short-term continuing education programs.)

11. Post-Doctoral Program in Nursing. A program environment for research training designed to attract highly qualified candidates. Postdoctoral fellows must hold a doctoral degree in nursing and are expected to remain active in research upon completion of the program.

12. Basic (or Entry Level) Program. A program of instruction that prepares individuals for entry into registered nurse practice and eligibility to apply for licensure as an RN.

13. Continuing Education Program. An educational offering designed to help nurses maintain or expand their competence in their role. Such offerings may include workshops, institutes, self-study, clinical conferences, staff development courses, individual study, or other options. They do not include study for an academic degree or academic certificate (e.g., post-master's).

14. Program Articulation. A process through which two or more nursing programs cooperate to accommodate the learning needs and career goals of students as they progress from one level of preparation to another, with minimal repetition and duplication of learning experiences.

15. Academic Year. A designated period of time institutions use to measure a quantity of academic work to be accomplished by a student, or to define the period of time in which an academic year-based appointee renders services. Generally, an institution defines its own academic year, for example, from the beginning of the fall term through the end of the spring term.

16.
Academic Health Center. An academic health center consists of an allopathic or osteopathic medical school, at least one other health pr ofessions school or program, and at least one affiliated or owned teaching hospital. 17. Chief Executive Officer Nursing Education Unit. The individual who has primary and ultimate responsibility for a nursing academic unit. This may be the Dean, Director, Department Head, Chairperson, or other institutionally-determined title. 18. Non-Nurse Faculty. Individuals who teach nursing students selected courses (e.g., pharmacology, nutrition, statistics), but who, themselves, are not nurses. These individuals may hold full or part-time f aculty appointments in the nursing academic unit.
99 19. Full-time Faculty. Those members of the instructional, administrative, or research staff of the nursing academic unit who ar e employed full-time as defined by the institution, hold academic rank, carry the full scope of faculty responsibility (e.g., teaching, advisement, committee work), and receive the rights and privileges associated with full time employment. This faculty may be tenure d, tenure-track, or non-tenure track (given that there is a tenure system in the institution). 20. Part-Time Faculty. Those members of the instructiona l, administrative, or research staff of the nursing academic unit who ar e employed part-time as defined by the institution, may or may not hold academic rank, carry responsibility for a specific area (e.g., teaching a single course), and ma y carry any number of titles (e.g., adjunct, clinical instructor). This faculty is typically not eligible for tenure. 21. Tenure A system designed to protect faculty members' academic freedom and to provide enough financial security to attract able individuals to the profession. It is an affirmative commitment by an institution to a faculty member, generally offered after a probationary period of employment, as a right to continuing employment. 22. Tenured Faculty. Full-time faculties who have met the teaching, scholarship, service, and other criteria and requirements for tenur e, as established by the institution, and have been awarded permanent or con tinuous employment at that institution. 23. Tenure-Track Faculty. Full-time faculty in a proba tionary period of employment preliminary to consideration for tenure. Tenur e-track faculty are expected to meet the teaching, scholarship, service, and/or other criteria established by the institution for reappointment and eventual awarding of tenure, but do not claim any right to permanent or continuous employment at that institution. 24. Non-Tenure-Track Faculty. 
Full-time faculty employed in institutions with tenure that are not expected to meet a ll the teaching, scholarship, se rvice, or other criteria associated with tenure at that institution. Non-tenure-track faculty, for example, may not be required to engage in scholarly ac tivities or may have an increased teaching responsibility. In addition, they do not claim any right to permanent or continuous employment at the institution. 25. Enrollments The number of students who are offi cially recognized by a school and program as being enrolled in that program, as of a given date. (Note: This includes transfer students and re-admissions.) 26. First-Time Enrollments. All students enrolled in a nursing program who have never before been enrolled in any nursing program. 27. Basic (or Entry Level or Generic) RN Enrollments. The number of students enrolled in a program preparing them for RN licensure eligibili ty, as of a given date. 28. R.N.-to-Baccalaureate Enrollments. The number of already-licensed RNs enrolled in a baccalaureate nursing program, as of a given date.
100 29. Headcount The total number of individuals enrolled in a nursing program (i.e., LPN/VN, diploma, associate degree, generic/basic baccalaureate, RN baccalaureate, masters, etc.) on a specified date. It incl udes (1) all nursing st udents (students who have been formally accepted into the nursing program whether or not they have taken any nursing courses) and (2) admissions and transfer students. Excluded are (1) prenursing students (students who have not been formally accepted into the nursing program), (2) leave of absence students, a nd (3) continuing educa tion students, unless they are degree-seeking. 30. Full-Time Undergraduate Student. A student enrolled in an associate degree, diploma, or baccalaureate program who is registered for 12 or more semester hour credits (or their equivalent) in a particular semester and who is eligible for awards, scholarships, appointments, etc. that are limited to students enrolled on a full-time basis. 31. Part-Time Undergraduate Student. A student enrolled in an associate degree, diploma, or baccalaureate program who is registered for less than 12 semester hour credits (or their equivalent) in a particul ar semester and who is not eligible for awards, scholarships, appointments, etc. that are limited to students enrolled on a fulltime basis. 32. Full-Time Graduate Student. A student enrolled in a ma ster's or doctoral program who is registered for 9 or more semester hour credits (or thei r equivalent) in a particular semester and who is eligible fo r awards, scholarships, appointments, etc. that are limited to students enrolled on a full-time basis. 33. Part-Time Graduate Student. A student enrolled in a ma ster's or doctoral program who is registered for less than 9 semest er hour credits (or th eir equivalent) in a particular semester and who is not eligib le for awards, scholarships, appointments, etc. that are limited to students enrolled on a full-time basis. 34. 
Graduations The total number of individuals who have completed and been graduated from a nursing program within a specified time period. 35. Graduate from Post-RN Program. An individual already li censed as an RN who has completed an academic program of study beyond the initial nursing education, leading to an associate, baccalaureate or higher degree. 36. Graduate from Basic (or Entry-level or Generic) Program. An individual who has graduated from a state-approved program and is eligible to apply for initial licensure as an RN. Retrieved from the Interagency Coll aborative on Nursing Statistics 10 URL: ICONS: Nurses, Nursing Education, and Nursing Work force: Definitions. Re trieved on June 2004 from http://www.iconsdata.org/educationrelated.htm Copyright 2003
About the Author

The author received her BSN and MSN in nursing administration in Saudi Arabia. Her clinical experience is in adult critical care, emergency and trauma care, and nursing leadership. She has worked in military, educational/university, and Arabian American Oil Company hospitals. She has taught nursing in various health sectors, including the Ministry of Health, and has been a faculty member at the College of Nursing since 1989, teaching BSN students a wide range of nursing courses. She is considered one of the pioneers in contributing to the nursing profession in the Kingdom, both clinically and academically.

During her studies for the Ph.D., she joined the Honor Society of Nursing, Sigma Theta Tau International, and the National League for Nursing (NLN). Her two-year service on the NLN task force to develop a National Nursing Education Minimum Data Set continues today. She has attended and participated in several nursing conferences nationally and internationally.