
Applied evaluative informetrics PDF

316 Pages·2017·9.979 MB·English
by Moed, H. F.

Preview Applied evaluative informetrics

Qualitative and Quantitative Analysis of Scientific and Scholarly Communication

Henk F. Moed

Applied Evaluative Informetrics

Series editors: Wolfgang Glänzel, Katholieke Universiteit Leuven, Leuven, Belgium; Andras Schubert, Hungarian Academy of Sciences, Budapest, Hungary

More information about this series at http://www.springer.com/series/13902

Henk F. Moed, Amsterdam, The Netherlands

ISSN 2365-8371  ISSN 2365-838X (electronic)
Qualitative and Quantitative Analysis of Scientific and Scholarly Communication
ISBN 978-3-319-60521-0  ISBN 978-3-319-60522-7 (eBook)
DOI 10.1007/978-3-319-60522-7
Library of Congress Control Number: 2017950268

© Springer International Publishing AG 2017

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Printed on acid-free paper. This Springer imprint is published by Springer Nature. The registered company is Springer International Publishing AG; the registered company address is Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

In 1976, Francis Narin, founder and for many years president of the information company CHI Research, published a seminal report to the US National Science Foundation entitled Evaluative Bibliometrics: The Use of Publication and Citation Analysis in the Evaluation of Scientific Activity. The current book represents a continuation of his work. It is also an update of an earlier book published by the current author in 2005, Citation Analysis in Research Evaluation. In the past 15 years, many new developments have taken place in the field of quantitative research assessment, and the current book aims to describe these, and to reflect upon the way forward.

Research assessment has become more and more important in research policy, management and funding, and also more complex. The role of quantitative information has grown substantially, and research performance is more and more conceived as a multi-dimensional concept. Currently not only the classical indicators based on publication and citation counts are used, but new generations of indicators are also being explored, denoted with terms such as altmetrics, webometrics, and usage-based metrics, and derived from multiple multi-disciplinary citation indexes, electronic full-text databases, information systems' user log files, social media platforms and other sources. These sources are manifestations of the computerization of the research process and the digitization of scientific-scholarly communication. This is why the current book uses the term informetrics rather than bibliometrics to indicate its subject.
Informetrics and quantitative science, technology, and innovation (STI) studies have reinforced their position as an academic discipline, so that STI indicator development is determined at least partially by internal dynamics, although external factors play an important role as well, not least the business interests of large information companies. As its title indicates, the current book deals with the application of informetric tools. It dedicates a major part of its attention to how indicators are used in practice and to the benefits and problems related to this use. It also discusses the relationships between the informetric domain and the research policy and management sphere, and launches proposals for new approaches in research assessment and for the development of new informetric tools.

Following Francis Narin's publication from 1976, the term evaluative in the book's title reflects its focus on research assessment. But this term refers to the application domain and delineates the context in which informetric tools are being used. It does not mean that informetrics is by itself evaluative. On the contrary, this book defends the position that informetricians should maintain in their informetric work a neutral position towards evaluative criteria or political values.

Target Audience

This book presents an introduction to the field of applied evaluative informetrics. It sketches the field's history, recent achievements, and its potential and limits. It also discusses the way forward. It is written for interested scholars from all domains of science and scholarship, and especially for the following categories of readers:

- All those subjected to research assessment;
- Research students at advanced master and Ph.D. levels;
- Research managers and science policy officials;
- Research funders;
- Practitioners and students in informetrics and research assessment.
Structure

The book consists of six parts, as follows:

- Part I presents an introduction to the use of informetric indicators in research assessment. It provides a historical background of the field and presents the book's basic assumptions, main topics, structure, and terminology. In addition, it includes a synopsis summarizing the book's main conclusions; readers who are interested in the main topics and conclusions of this book but who do not have the time to read it all could focus on this part.
- Part II presents an overview of the various types of informetric indicators for the measurement of research performance. It highlights the multi-dimensional nature of research performance and presents a list of 28 often used indicators, summarizing their potential and limits. It also clarifies common misunderstandings in the interpretation of some often used statistics.
- Part III discusses the application context of quantitative research assessment. It describes research assessment as an evaluation science and distinguishes various assessment models. It is in this part of the book that the domain of informetrics and the policy sphere are disentangled analytically. It illustrates how external, non-informetric factors influence indicator development and how the policy context impacts the setup of an assessment process.
- Part IV presents the way forward. It expresses the current author's views on a series of problems in the use of informetric indicators in research assessment. Next, it presents a list of new features that could be implemented in an assessment process. It highlights the potential of informetric techniques and illustrates that current practices in the use of informetric indicators could be changed. It sketches a perspective on altmetrics and proposes new lines in longer-term, strategic indicator research.
- Part V presents five lectures with historical overviews of the field of bibliometrics and informetrics, starting from three of the field's founding fathers: Derek de Solla Price, Eugene Garfield, and Francis Narin. It presents 135 slides and is based on a doctoral course presented by the author at the Sapienza University of Rome in 2015, and on lectures presented at the European Summer School of Scientometrics (ESSS) during 2010–2016, and in the CWTS Graduate Courses during 2006–2009.
- Finally, Part VI presents two full articles published recently by the current author in collaboration with his co-authors on hot topics of general interest in which the use of informetric indicators plays a key role. These topics are a critical comparison of five world university rankings, and a comparison of usage indicators based on the number of full-text downloads with citation-based measures.

Acknowledgements

The author wishes to thank the following colleagues for their valuable contributions:

- Dr. Gali Halevi at The Levy Library of the Icahn School of Medicine at Mount Sinai, New York, USA, for her contribution as a co-author of four articles presented in this book, on the multi-dimensional assessment of scientific research (Chap. 8); international scientific collaboration in Asia (Chap. 12); the comparison between Google Scholar and Scopus (Chap. 14); and on a comparative analysis of usage and citations (Chap. 19).
- Prof. Cinzia Daraio at the Department of Computer, Control and Management Engineering in the Sapienza University of Rome for her contribution to the text on ontology-based database management in Sect. 12.3, and for her valuable comments on a draft version of Chap. 6.
- Prof. Judit Bar-Ilan at the Department of Information Science in Bar-Ilan University, Tel Aviv, Israel, for her contribution as a co-author to the paper on Google Scholar and Scopus (Chap. 14).
- The members of the Nucleo di Valutazione of the Sapienza University of Rome for stimulating discussions about the interpretation and the policy significance of world university rankings discussed in Sect. 10.6.

Amsterdam, The Netherlands
Henk F. Moed

Executive Summary

This book presents an introduction to the field of applied evaluative informetrics. Its main topic is the application of informetric indicators in the assessment of research performance. It gives an overview of the field's history and recent achievements, and its potential and limits. It also discusses the way forward, proposes informetric options for future research assessment processes, and new lines for indicator development.

It is written for interested scholars from all domains of science and scholarship, especially those subjected to quantitative research assessment, research students at advanced master and Ph.D. levels, and researchers in informetrics and research assessment, and for research managers, science policy officials, research funders, and other users of informetric tools.

The use of the term informetrics reflects that the book deals not only with bibliometric indicators based on publication and citation counts, but also with altmetrics, webometrics, and usage-based metrics derived from a variety of data sources, and does not only consider research output and impact, but also research input and process.

Research performance is conceived as a multi-dimensional concept. Key distinctions are made between publications and other forms of output, and between scientific-scholarly and societal impact. The pros and cons of 28 often used indicators are discussed.

An analytical distinction is made between four domains of intellectual activity in an assessment process, comprising the following activities:

- Policy and management: The formulation of a policy issue and assessment objectives; making decisions on the assessment's organizational aspects and budget. Its main outcome is a policy decision based on the outcomes from the evaluation domain.
- Evaluation: The specification of an evaluative framework, i.e., a set of evaluation criteria, in agreement with the policy issue and assessment objectives. The main outcome is a judgment on the basis of the evaluative framework and the empirical evidence collected.
- Analytics: Collecting, analyzing and reporting empirical knowledge on the subjects of assessment; the specification of an assessment model or strategy, and the operationalization of the criteria in the evaluative framework. Its main outcome is an analytical report as input for the evaluative domain.
- Data collection: The collection of relevant data for analytical purposes, as specified in an analytical model. Data can be either quantitative or qualitative. Its main outcome is a dataset for the calculation of all indicators specified in the analytical model.

Three basic assumptions of this book are the following:

- Informetric analysis is positioned in the analytics domain. A basic notion holds that from what is cannot be inferred what ought to be. Evaluation criteria and policy objectives are not informetrically demonstrable values. Of course, empirical informetric research may study quality perceptions, user satisfaction, the acceptability of policy objectives, or effects of particular policies, but it cannot provide a foundation for the validity of the quality criteria or the appropriateness of policy objectives. Informetricians should maintain in their informetric work a neutral position towards these values.
- If the tendency to replace reality with symbols and to conceive these symbols as an even higher form of reality, jointly with the belief in being able to change reality by acting upon the symbol, are typical characteristics of magical thinking, one could rightly argue that the unreflected, unconditional belief in indicators shows rather strong similarities with magical thinking.
- The future of research assessment lies in the intelligent combination of indicators and peer review. Since their emergence, and in reaction to a perceived lack of transparency in peer review processes, bibliometric indicators were used to break open peer review processes, and to stimulate peers to make the foundation and justification of their judgments more explicit. The notion of informetric indicators as a support tool in peer review processes, rather than as a replacement of such processes, still has great potential.

Five strong points of the use of informetric indicators in research assessment are highlighted: it provides tools to demonstrate performance; shapes one's communication strategies; offers standardized approaches and independent yardsticks; delivers comprehensive insights that reach beyond the perceptions of individual participants; and provides tools for enlightening policy assumptions.

But severe criticisms were raised as well against these indicators. Indicators may be imperfect and biased; they may suggest a façade of exactness; most studies adopt a limited time horizon; indicators can be manipulated, and may have constitutive effects; measuring societal impact is problematic; and when they are applied, an evaluative framework and assessment model are often lacking. The following views are expressed, partly supportive, and partly as a counter-critique towards these criticisms.
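To make the "classical indicators based on publication and citation counts" concrete, here is a minimal sketch (a generic illustration, not taken from the book) of one such indicator, Hirsch's h-index: a scholar has index h if h of their publications have each received at least h citations.

```python
def h_index(citation_counts):
    """Compute the h-index from a list of per-publication citation counts.

    Generic illustration of a citation-based indicator: sort counts in
    descending order and find the largest rank h such that the h-th
    publication still has at least h citations.
    """
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this publication still supports a higher h
        else:
            break  # remaining publications have fewer citations than their rank
    return h
```

For example, a scholar whose five papers received 10, 8, 5, 4 and 3 citations has an h-index of 4: four papers have at least four citations each, but not five papers with at least five. The sketch also illustrates why such indicators invite the criticisms listed above, e.g. the measure is insensitive to the time horizon over which citations were accumulated.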
