Educational policy analysis archives

Material Information

Title: Educational policy analysis archives
Publisher: Arizona State University; University of South Florida
Place of Publication: Tempe, Ariz.; Tampa, Fla.
Publication Date: August 19, 1998

Subjects / Keywords:
Education -- Research -- Periodicals (lcsh)
non-fiction (marcgt)
serial (sobekcm)

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
E11-00106 (USFLDC DOI)
e11.106 (USFLDC Handle)

Full Text


Education Policy Analysis Archives
Volume 6 Number 15, August 19, 1998, ISSN 1068-2341

A peer-reviewed scholarly electronic journal. Editor: Gene V Glass, Glass@ASU.EDU, College of Education, Arizona State University, Tempe AZ 85287-2411

Copyright 1998, the EDUCATION POLICY ANALYSIS ARCHIVES. Permission is hereby granted to copy any article provided that EDUCATION POLICY ANALYSIS ARCHIVES is credited and copies are not sold.

A Note on the Empirical Futility of Labor-Intensive Scoring Permutations for Assessing Scholarly Productivity: Implications for Research, Promotion/Tenure, and Mentoring

Christine Hanish
John J. Horan
Bethanne Keen
Ginger Clark
Arizona State University

Abstract

The measurement of scholarly productivity is embroiled in a controversy concerning the differential crediting of coauthors. Some researchers assign equivalent shares to each coauthor; others employ weighting systems based on authorship order. Horan and his colleagues use simple publication totals, arguing that the psychometric properties of labor-intensive alternatives are unknown, and relevant ethical guidelines for including coauthors are neither widely understood nor consistently followed. The PsycLIT and SSCI databases provided exhaustive publication and citation frequencies for 323 counseling psychology faculty. All PsycLIT scoring permutations yielded essentially identical information; inter-correlations ranged from .96 to unity. Moreover, all PsycLIT methods correlated highly with SSCI within a very narrow band. Since attention to the number and/or ordinal position of coauthors yields no useful information, productivity should be defined parsimoniously in terms of simple publication counts. Implications for research, promotion/tenure, and the mentoring of graduate students are discussed.

Publishing behavior is perhaps the most revered and reviled variable in education and psychology. The bipolar affect it generates undoubtedly derives from the fact that although the act of publishing is inextricably entwined with status and the reward system of a scientific discipline (e.g., promotion, tenure, merit pay, and the like), the criteria for evaluating what an individual publishes are much less clear (Merton, 1973).

The concepts of productivity, impact, and quality are often used interchangeably as descriptors, yet there are important methodological and psychometric differences. Productivity refers to the quantity of publications attributable to a given scholar, expressed as a lifetime total or a yearly rate when divided by the scholar's professional age. Impact generally means how frequently that individual's work is cited by other authors, which likewise can be expressed as a lifetime total or a yearly rate. Quality is almost never assessed directly; productivity and impact, though, frequently pose in its place (see Keen, Horan, Hanish, Copperstone, & Tribbensee, 1998).

Since vita entries provide no assurance that a document really exists, the assessment of productivity is usually confined to the number of publications by an individual that appear in large databases such as ERIC or PsycLIT (Horan & Erickson, 1991). The gate-keeper functions in these databases, however, infuse raw counts of productivity with elements of quality. For example, PsycLIT only lists articles that appear in refereed journals recognized by APA as relevant to the discipline of psychology. The assessment of impact is likewise usually restricted to full citation histories contained in large holdings such as SSCI, though smaller segments of that database and/or fewer numbers of outlets have been used (albeit unreliably; see Horan, Hanish, & Beasley, 1995). SSCI is more often associated with quality than is PsycLIT, but that kudo may not be warranted. Hanish, Horan, Keen, St. Peter, Ceperich, and Beasley (1995) reported high relationships between PsycLIT and SSCI; moreover, other limitations of SSCI are less well known and understood. For example, SSCI scores may be inflated by hidden self-citations, citations by prolific colleagues, advisees, or significant others, the notoriety of a study rather than its importance, and so forth (see Horan, Hanish, Keen, Saberi, & Hird, 1993a).

The measurement of productivity has become embroiled in a controversy concerning the differential crediting of coauthors. Some researchers (such as Bohn, 1966; Goodstein, 1963; Goodyear, Abadie, & Walsh, 1983; Katz & Brophy, 1975; Tinsley & Tinsley, 1979; Walsh, Feeney, & Resnick, 1969) give each coauthor equal partial credit (e.g., a third of a point to each of three coauthors of a given article); others (such as Delgado & Howard, 1994; Ellis, Haase, Skowron, & Kaminsky, 1993; Howard, 1983; Howard, Cole, & Maxwell, 1987; Osipow, 1985; Skovholt, Stone, & Hill, 1984) apply various weighting formulas based on the ordinal positions of coauthors (e.g., the first author receives half of the credit, the second author 30% of the credit, and the last author the final 20%). In contrast, Horan and his colleagues (e.g., Hanish et al., 1995; Horan & Erickson, 1991; Horan, Weber, Fitzsimmons, Maglio, & Hanish, 1993b) have always used simple raw PsycLIT totals for each author, arguing that the psychometric properties of the foregoing schema are unknown, and APA's ethical guidelines for assigning authorship are neither widely understood nor consistently followed (e.g., see Fine & Kurdek, 1993; Goodyear, Crego, & Johnston, 1992).

The present study, therefore, attempted to clarify the relationships between the various scoring permutations of PsycLIT with each other and with SSCI. Although the same scoring controversy could apply to coauthorships listed in ERIC or in other databases, we chose PsycLIT because its refereed holdings are obtained independent of author consent, and thus provide a more meaningful basis for comparison with other


indices of scholarly merit.

Method

Subjects

Hanish et al. (1995) identified the entire population of academic counseling psychology faculty (n = 323) who were members of Division 17 and who had governance responsibilities in any active doctoral training program; for each individual, they secured complete PsycLIT data from 1974 to 1991 and SSCI data from 1971 to 1991. In the present study we updated all PsycLIT and SSCI data on these individuals to be current to 1996.

Measures

The PsycLIT database includes all Psychological Abstracts references attributable to individual authors published from 1974 to present. A search by author name yielded a full bibliographical citation list for that author including coauthors and abstracts. These data were scored according to six different methods described as follows:

Method 1, used by Horan and his associates (e.g., Horan & Erickson, 1991; Hanish et al., 1995), awards a single point to each author for each publication regardless of the number of coauthors or their ordinal position. If an individual has 13 sole or coauthored publications in the PsycLIT database, his or her score will be 13.

Method 2 is relatively popular (e.g., Bohn, 1966; Goodstein, 1963; Goodyear, Abadie, & Walsh, 1983; Katz & Brophy, 1975; Tinsley & Tinsley, 1979; Walsh, Feeney, & Resnick, 1969); coauthors receive equal partial credit (e.g., a third of a point to each of three coauthors of a given article). First and last authors are treated alike. Method 2 and all methods that follow are increasingly labor intensive in that they require the computation and summing of various amounts of credit for each bibliographic entry on a given author's publication record.

Method 3 (Delgado & Howard, 1994; Howard, 1983) awards one point to sole authors. The first and second authors of a coauthored publication receive .67 and .33 points, respectively. If three coauthors are involved, the differential credit allocations are .50, .30, and .20. Additional coauthors result in decreasing credit for all.

Method 4 (Howard, Cole, & Maxwell, 1987) uses a very complex formula to compute the differential allocation of credit. As with Method 3, authors and coauthors receive declining amounts of credit as their numbers increase and their ordinal positions descend.

Method 5 (Osipow, 1985; Skovholt, Stone, & Hill, 1984) awards sole authors and first authors 5 points, second authors 4, third authors 3, and fourth authors 2; all subsequent coauthors receive a score of 1. Points for a given ordinal position are thus constant regardless of the number of coauthors.


Method 6 was devised by Ellis, Haase, Skowron, and Kaminsky (1993). Weights depend on the number of authors, the order of authorship, and the value of the article using the method of Skovholt, Stone, and Hill (1984). For example, an article with three coauthors has a value of 12, which is derived by adding five points for the first author, four points for the second author, and three points for the third author. The first author's credit then is 5/12 or .417; the second author's credit is 4/12 or .333, and so on. For articles with more than four coauthors, the fifth and subsequent authors receive equal shares of .067 such that, for example, the fifth and sixth authors would each receive .034.

The credit consequences of the six different productivity scoring methods on the coauthors of a given article can be seen in Table 1.

Table 1
Template for Productivity Scoring Methods Indicating Comparative Credit by Number and Ordinal Position of Coauthors 1/

Author/    Method 1  Method 2  Method 3  Method 4  Method 5  Method 6
Coauthors  Horan     Walsh     Howard 1  Howard 2  Skovholt  Ellis
1/1        1.000     1.000     1.000     1.000     5.000     1.000
1/2        1.000     .500      .670      .600      5.000     .556
2/2        1.000     .500      .330      .400      4.000     .444
1/3        1.000     .333      .500      .474      5.000     .417
2/3        1.000     .333      .300      .316      4.000     .333
3/3        1.000     .333      .200      .210      3.000     .250
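The per-article credit rules above are mechanical enough to script. The following is a minimal sketch of our own (the study's coding was done by hand by doctoral students, not with this code): Methods 1, 2, 5, and 6 follow directly from the text; Method 3's published allocations for one to three authors are hard-coded; and Method 4's formula, which the text does not reproduce, is omitted.

```python
# Sketch of the per-article credit rules described above (illustrative,
# not the authors' procedure). pos is the 1-based ordinal position of an
# author; n is the total number of authors on the article.

def method1(pos, n):
    """Horan: one point per listed author, regardless of position."""
    return 1.0

def method2(pos, n):
    """Walsh et al.: equal partial credit among all coauthors."""
    return 1.0 / n

# Method 3 (Howard): published allocations for 1-3 authors; values for
# larger teams are not reproduced in the text, so they are omitted here.
METHOD3 = {1: [1.0], 2: [0.67, 0.33], 3: [0.50, 0.30, 0.20]}

def method3(pos, n):
    return METHOD3[n][pos - 1]

def skovholt_weight(pos):
    """Method 5 weights: 5, 4, 3, 2 points for positions 1-4; 1 point
    for every subsequent coauthor. Sole authors also receive 5."""
    return {1: 5, 2: 4, 3: 3, 4: 2}.get(pos, 1)

def method5(pos, n):
    return float(skovholt_weight(pos))

def method6(pos, n):
    """Ellis et al.: Skovholt weights normalized by the article's total
    weight, so each article distributes one unit of credit."""
    total = sum(skovholt_weight(p) for p in range(1, n + 1))
    return skovholt_weight(pos) / total

def score_author(positions, method):
    """Total credit for one author: positions is a list of (pos, n)
    pairs, one per publication on that author's PsycLIT record."""
    return sum(method(pos, n) for pos, n in positions)
```

As a check against Table 1, Method 6 gives a sole author 5/5 = 1.000 and the first of three authors 5/12 = .417.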


1/ The names are those of researchers most closely associated with the various scoring methods. Under Author/Coauthors, 1/1 = sole author, 1/2 = first author of an article by two authors, 2/3 = second author of an article by three authors, etc.

SSCI is a compilation of citations to a given sole or first author by that same author and other scholars from 26 disciplines in the social and behavioral sciences. Cited authors are arranged alphabetically in bound volumes covering the years 1966 to present. Our search was confined to the SSCI volumes paralleling our PsycLIT database. Below each cited author's work in SSCI is a list of individuals who referenced that work along with abbreviated outlet information. We used two SSCI scoring methods, namely, the grand total and the grand total minus obvious self-citations. An obvious self-citation occurred when a first author cited himself or herself in a first-authored reference. SSCI makes no provision for detecting "hidden" self-citations, for example, second authors citing their first-authored works.

Procedures

Procedures for faculty identification, biographical information, reliability analyses, and so forth are described in Hanish et al. (1995). The new PsycLIT and SSCI raw data obtained for the present study were secured in the same fashion. Each of the 323 faculty publication histories was then coded according to the methods described above by doctoral students working independently. This, of course, was an extremely time-consuming process. A random sample of 1752 publications was rechecked by additional students; disagreements between coders were trivial (1.9%). To facilitate further work in this area, a priori scoring templates are presented in Table 1. For example, if an individual is listed as third of four authors on a particular publication, the columns contain the precalculated author-position scores for each of the six methods.
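Once every faculty record has been scored under each method, the analysis reported below reduces to pairwise Pearson correlations among the per-faculty scores. A minimal sketch with synthetic data (the toy scores and variable names are illustrative assumptions, not the study's data, which used six PsycLIT methods and two SSCI counts for 323 faculty):

```python
import numpy as np

# Toy per-faculty scores under three hypothetical scoring methods
# (rows = methods, columns = faculty members).
rng = np.random.default_rng(0)
base = rng.gamma(shape=2.0, scale=6.0, size=50)     # skewed, like pub counts
scores = np.vstack([
    base,                                           # e.g., raw totals
    base * rng.uniform(0.8, 1.2, size=50),          # a weighted variant
    base + rng.normal(0.0, 2.0, size=50),           # a noisier variant
])

# np.corrcoef returns the full Pearson correlation matrix over the rows;
# entry [i, j] is r between scoring method i and scoring method j.
r = np.corrcoef(scores)
print(np.round(r, 3))
```

Because every weighting scheme is a monotone-ish transformation of the same underlying publication record, near-unity inter-correlations of the kind reported in Table 2 are perhaps unsurprising.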


Results

The actual raw data on which all analyses are based are being made available to the reader. From this point, the data files can be accessed in EXCEL, SPSS, or ASCII format. Of 323 individual faculty, only 10 had no evidence of publishing history in the PsycLIT and SSCI databases. A similar number exceeded 65 publications and 650 citations. The median faculty member in our study had 13 publications in PsycLIT and was cited in SSCI 50 times, including an average of 3 obvious self-citations. Table 2 depicts the correlations involving PsycLIT scoring permutations with each other and with SSCI.

Table 2
Correlations between PsycLIT and SSCI scoring permutations

Variable              M2      M3       M4       M5        M6     SSCI   SSCI Minus
                      Walsh   Howard1  Howard2  Skovholt  Ellis  Total  SelfCites
PsycLIT M1 Horan      .961    .963     .965     .998      .966   .711   .669
PsycLIT M2 Walsh              .997     .998     .971      .999   .703   .659
PsycLIT M3 Howard1                     1.00     .975      .999   .701   .654
PsycLIT M4 Howard2                              .976      1.00   .703   .657
PsycLIT M5 Skovholt                                       .976   .712   .669
PsycLIT M6 Ellis                                                 .704   .659
SSCI Total                                                              .995

Note: The names are those of researchers most closely associated with the various scoring methods.

The relationships among the six scoring methods for assessing productivity are remarkably high. No individual pairwise correlation was lower than .96; several r's reached unity. Similarly, the Pearson r between SSCI total and SSCI minus obvious


self-citations also approached unity (.995).

More importantly, however, despite the fact that productivity and impact reflect different concepts and derive from disparate assessment methodologies, the relationships between these variables, regardless of scoring method, were strong and consistent. All six PsycLIT scoring permutations correlated with SSCI total inside a very narrow band of .701 to .712; and the band remained high and narrow (.654 to .669) when obvious self-citations were deleted.

Discussion

Our data reflect the lifetime publishing behavior of an entire population of academic faculty affiliated with doctoral training programs in counseling psychology. Although we have not established that the foregoing relationships hold true in other sectors of science, there are no a priori reasons to think otherwise. Essentially, the controversy involving the comparative merits of various methods for assessing scholarly productivity has been settled. All PsycLIT scoring permutations yield essentially identical information; inter-correlations range from .96 to unity. Moreover, all of these PsycLIT methods also correlate with SSCI data at a fairly high level and within a very narrow band.

Several implications are apparent. For example, future researchers are now informed that labor-intensive scoring permutations are not cost beneficial in comparison to the use of simple raw scores to assess an individual's scholarly productivity. The law of parsimony demands that a scholar's productivity be defined in terms of the number of articles carrying his or her name; attention to the number and/or ordinal position of coauthors yields no useful information.

It would be interesting to observe if the behavior of promotion and tenure committees will change as a result of increased awareness of the relationships reported in this study. Such committees can exhibit highly variable judgment even within the same institution. Collaborative research, for example, is sometimes valued ("has good collegial relationships"), sometimes denigrated ("needs to demonstrate more independent scholarship"); our findings suggest that the phenomenon of coauthoring is simply a facet of academic life, not a basis for evaluation.

Finally, we hope that our data eliminate a thorny disincentive to the formation of good mentoring relationships. Scoring methods 2 through 6 clearly advantage those in differential power relationships who choose self-interest over propriety while still staying within the letter of relevant ethical codes. Reptilian supervision modes are predictable, though no less abhorrent, in the context of promotion, tenure, and merit pay systems that, for example, heavily weight sole authorships. Half of the publications by our institution's counseling psychology faculty in the PsycLIT database involve students as coauthors, a percentage possibly comparable to that displayed in many other graduate programs. In contrast to labor-intensive and empirically unwarranted alternatives, the use of simple raw scores to assess productivity contributes to the class-action benefit of everyone at no cost to anyone.

References

Bohn, M. J. (1966). Institutional sources of articles in this journal of counseling psychology--Four years later. Journal of Counseling Psychology, 13, 489-490.


Delgado, E. A., & Howard, G. S. (1994). Changes in research productivity in counseling psychology: Revisiting Howard (1983) a decade later. Journal of Counseling Psychology, 41, 69-73.

Ellis, M. V., Haase, R. F., Skowron, E. A., & Kaminsky, L. (1993, August). Institutional affiliations of contributors to scholarly and professional activities in counseling psychology: 1987-1990. Paper presented at the Annual Meeting of the American Psychological Association, Toronto, Canada.

Fine, M. A., & Kurdek, L. A. (1993). Reflections on determining authorship credit and authorship order on faculty-student collaborations. American Psychologist, 48, 1141-1147.

Goodstein, L. D. (1963). The institutional sources of articles in the Journal of Counseling Psychology. Journal of Counseling Psychology, 10, 94-95.

Goodyear, R. K., Abadie, P. D., & Walsh, W. B. (1983). Graduate school origins of Journal of Counseling Psychology authors: Volumes 15-28. Journal of Counseling Psychology, 30, 283-286.

Goodyear, R. K., Crego, C. A., & Johnston, M. W. (1992). Ethical issues in the supervision of student research: A study of critical incidents. Professional Psychology: Research and Practice, 23, 203-210.

Hanish, C., Horan, J. J., Keen, B., St. Peter, C. C., Ceperich, S. D., & Beasley, J. F. (1995). The scientific stature of counseling psychology training programs: A still picture of a shifting scene. The Counseling Psychologist, 23, 82-101.

Horan, J. J., & Erickson, C. D. (1991). Fellowship behavior in Division 17 and the MOMM cartel. The Counseling Psychologist, 19, 253-259.

Horan, J. J., Hanish, C., & Beasley, J. F. (1995). A methodological reply to a motivational charge. The Counseling Psychologist, 23, 125-128.

Horan, J. J., Hanish, C., Keen, B., Saberi, D., & Hird, J. S. (1993a). When examining the cerebral functioning of Division 17, which organ should we dissect? The Counseling Psychologist, 21, 307-315.

Horan, J. J., Weber, W. L., Fitzsimmons, P., Maglio, C. J., & Hanish, C. (1993b). Further manifestations of the MOMM phenomenon: Relevant data on editorial board appointments and membership composition. The Counseling Psychologist, 21, 278-287.

Howard, G. S. (1983). Research productivity in counseling psychology: An update and generalization study. Journal of Counseling Psychology, 30, 600-602.

Howard, G. S., Cole, D. A., & Maxwell, S. E. (1987). Research productivity in psychology based on publication in the journals of the American Psychological Association. American Psychologist, 42, 975-986.

Katz, G. M., & Brophy, A. L. (1975). Institutional sources of articles in the Journal of Counseling Psychology, 1962-1973. Journal of Counseling Psychology, 22, 160-163.


Keen, B., Horan, J. J., Hanish, C., Copperstone, J., & Tribbensee, N. (1998, August). Publication frequency, citation frequency, and quality of counseling psychology research. Paper presented at the annual meeting of the American Psychological Association, San Francisco.

Merton, R. K. (1973). The sociology of science: Theoretical and empirical investigations. (N. W. Storer, Ed.). Chicago: The University of Chicago Press.

Osipow, S. H. (1985). Skovholt, Stone, and Hill's (1984) "Institutional affiliations of contributors to scholarly and professional activities in counseling psychology: 1980-1983"--A critique. Journal of Counseling Psychology, 32, 466-468.

Skovholt, T. M., Stone, G. L., & Hill, C. E. (1984). Institutional affiliations of contributors to scholarly and professional activities in counseling psychology: 1980-1983. Journal of Counseling Psychology, 31, 394-397.

Tinsley, D. J., & Tinsley, H. E. A. (1979). Trends in institutional contributions to the Journal of Counseling Psychology. Journal of Counseling Psychology, 26, 152-158.

Walsh, W. B., Feeney, D., & Resnick, H. (1969). Graduate school origins of Journal of Counseling Psychology authors. Journal of Counseling Psychology, 16, 375-376.

About the Authors

Christine Hanish

Christine Hanish is a doctoral student in counseling psychology at Arizona State University. She works for ASU's Preventive Intervention Research Center, which specializes in the development and validation of programs for children, adolescents, and families. She is currently immersed in a research project attempting to establish the norms of scholarly behavior for academic counseling psychologists.

John J. Horan

I am a professor of counseling psychology at Arizona State University. I graduated from Michigan State University and taught at Penn State before moving to ASU in 1985. Most of my writing has focused on the evaluation of cognitive-behavioral intervention strategies. For more than a decade I have been examining the experimental construct validity of these interventions. For example, do they produce changes on measures of high theoretical relevance while simultaneously failing to effect changes on measures of low theoretical relevance? Lately, I have concentrated on adapting and evaluating computer and Internet interventions for a variety of counseling problems.

For a quick look at how I squandered my youth, click on my web-based vita. My most important accomplishments, however, are not listed there. I have had many extraordinary students in my career, including those who share this masthead. I feel privileged to have contributed to their professional development; they surely have enhanced my own.


Bethanne Keen, Ph.D.

Bethanne Keen received a Ph.D. in counseling psychology from Arizona State University in December 1997. She is currently completing a postdoctoral residency in psychology with a large group practice in Phoenix, Arizona. She also serves as chair of the Legislative Affairs Committee for the Arizona Psychological Association. Her dissertation, currently being prepared for publication, explores the relationships between publication frequency, citation frequency, and quality of research conducted by counseling psychologists in academe. She is currently involved in a research project designed to illuminate the challenges faced by new Ph.D.s in psychology in achieving employment and licensure in Arizona. Her other research interests include collection and analysis of clinical outcomes data.

Ginger Clark

Ginger Clark is a doctoral student in counseling psychology at Arizona State University. She has conducted or contributed to studies in human sexual styles, parent education, parent education in career development, health habits, and quality of life for mid-life women. She has also written book reviews in the area of family therapy. Clark received her Bachelor's and Master's degrees in psychology at California State University, Long Beach. She is currently in her fourth year of doctoral study, and is working toward an academic position in counseling psychology.

Correspondence concerning this article should be addressed to John J. Horan, Division of Psychology in Education, Arizona State University, Box 870611, Tempe, AZ 85287-0611.

Copyright 1998 by the Education Policy Analysis Archives

General questions about appropriateness of topics or particular articles may be addressed to the Editor, Gene V Glass, or reach him at College of Education, Arizona State University, Tempe, AZ 85287-2411 (602-965-2692). The Book Review Editor is Walter E. Shepherd. The Commentary Editor is Casey D. Cobb.

EPAA Editorial Board

Michael W. Apple, University of Wisconsin
Greg Camilli, Rutgers University
John Covaleskie, Northern Michigan University
Andrew Coulson
Alan Davis, University of Colorado, Denver
Sherman Dorn, University of South Florida


Mark E. Fetler, California Commission on Teacher Credentialing
Richard Garlikov
Thomas F. Green, Syracuse University
Alison I. Griffith, York University
Arlen Gullickson, Western Michigan University
Ernest R. House, University of Colorado
Aimee Howley, Marshall University
Craig B. Howley, Appalachia Educational Laboratory
William Hunter, University of Calgary
Richard M. Jaeger, University of North Carolina--Greensboro
Daniel Kallós, Umeå University
Benjamin Levin, University of Manitoba
Thomas Mauhs-Pugh, Rocky Mountain College
Dewayne Matthews, Western Interstate Commission for Higher Education
William McInerney, Purdue University
Mary P. McKeown, Arizona Board of Regents
Les McLean, University of Toronto
Susan Bobbitt Nolen, University of Washington
Anne L. Pemberton
Hugh G. Petrie, SUNY Buffalo
Richard C. Richardson, Arizona State University
Anthony G. Rud Jr., Purdue University
Dennis Sayers, Ann Leavenworth Center for Accelerated Learning
Jay D. Scribner, University of Texas at Austin
Michael Scriven
Robert E. Stake, University of Illinois--UC
Robert Stonehill, U.S. Department of Education
Robert T. Stout, Arizona State University

