
Experimental IR Meets Multilinguality, Multimodality, and Interaction: 9th International Conference of the CLEF Association, CLEF 2018, Avignon, France, September 10-14, 2018, Proceedings PDF

402 Pages·2018·23.799 MB·English

Patrice Bellot · Chiraz Trabelsi · Josiane Mothe · Fionn Murtagh · Jian Yun Nie · Laure Soulier · Eric SanJuan · Linda Cappellato · Nicola Ferro (Eds.)

Experimental IR Meets Multilinguality, Multimodality, and Interaction
9th International Conference of the CLEF Association, CLEF 2018
Avignon, France, September 10–14, 2018
Proceedings

Lecture Notes in Computer Science 11018
Commenced Publication in 1973
Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board
David Hutchison, Lancaster University, Lancaster, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Friedemann Mattern, ETH Zurich, Zurich, Switzerland
John C. Mitchell, Stanford University, Stanford, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
C. Pandu Rangan, Indian Institute of Technology Madras, Chennai, India
Bernhard Steffen, TU Dortmund University, Dortmund, Germany
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Gerhard Weikum, Max Planck Institute for Informatics, Saarbrücken, Germany

More information about this series at http://www.springer.com/series/7409
Editors
Patrice Bellot, Aix-Marseille University, Marseille Cedex 20, France
Chiraz Trabelsi, Virtual University of Tunis, Tunis, Tunisia
Josiane Mothe, Systèmes d'informations, Big Data et Rec, Institut de Recherche en Informatique de Toulouse, Toulouse Cedex 04, France
Fionn Murtagh, Department of Computer Science, University of Huddersfield, Huddersfield, UK
Jian Yun Nie, DIRO, Université de Montréal, Montreal, QC, Canada
Laure Soulier, Pierre and Marie Curie University, Paris Cedex 05, France
Eric SanJuan, Université d'Avignon et des Pays de Vaucluse, Avignon, France
Linda Cappellato, Department of Information Engineering, University of Padua, Padua, Italy
Nicola Ferro, University of Padua, Padua, Italy

ISSN 0302-9743 / ISSN 1611-3349 (electronic)
Lecture Notes in Computer Science
ISBN 978-3-319-98931-0 / ISBN 978-3-319-98932-7 (eBook)
https://doi.org/10.1007/978-3-319-98932-7
Library of Congress Control Number: 2018950767
LNCS Sublibrary: SL3 – Information Systems and Applications, incl. Internet/Web, and HCI

© Springer Nature Switzerland AG 2018
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

Since 2000, the Conference and Labs of the Evaluation Forum (CLEF) has played a leading role in stimulating research and innovation in the domain of multimodal and multilingual information access. Initially founded as the Cross-Language Evaluation Forum and running in conjunction with the European Conference on Digital Libraries (ECDL/TPDL), CLEF became a standalone event in 2010, combining a peer-reviewed conference with a multi-track evaluation forum. The combination of the scientific program and the track-based evaluations at the CLEF conference creates a unique platform to explore information access from different perspectives, in any modality and language. The CLEF conference has a clear focus on experimental information retrieval (IR) as seen in evaluation forums (CLEF Labs, TREC, NTCIR, FIRE, MediaEval, RomIP, TAC, etc.) with special attention to the challenges of multimodality, multilinguality, and interactive search, ranging from unstructured to semi-structured and structured data. CLEF invites submissions on significant new insights demonstrated by the use of innovative IR evaluation tasks or in the analysis of IR test collections and evaluation measures, as well as on concrete proposals to push the boundaries of the Cranfield/TREC/CLEF paradigm.

CLEF 2018 [1] was jointly organized by the Avignon, Marseille, and Toulon Universities and was hosted by the University of Avignon, France, during September 10–14, 2018.
The conference format consisted of keynotes, contributed papers, lab sessions, and poster sessions, including reports from other benchmarking initiatives from around the world.

The following scholars were invited to give a keynote talk at CLEF 2018: Gabriella Pasi (University of Milano-Bicocca, Italy), Nicholas Belkin (Rutgers University, NJ, USA), and Julio Gonzalo (UNED, Spain).

CLEF 2018 received a total of 39 submissions, of which 13 papers (nine long, four short) were accepted. Each submission was reviewed by three Program Committee (PC) members, and the program chairs oversaw the reviewing and follow-up discussions. In all, 13 different countries are represented in the accepted papers. Many contributions this year tackle the medical e-Health and e-Health multimedia retrieval challenges in different ways: from medical image analysis to query suggestion. However, there are many other topics of research in the accepted papers, such as document clustering, social biases in IR, social book search, and personality profiling, to cite a few. As in previous editions since 2015, CLEF 2018 continued inviting CLEF lab organizers to nominate a "best of the labs" paper that was reviewed as a full paper submission to the CLEF 2018 conference according to the same review criteria and PC. Among the nine invited papers, six were accepted as long and three as short. Finally, eight posters were also accepted. Although they are not included in the LNCS volume, posters give the opportunity to their authors to discuss their research during the conference and are accessible through the Web pages of the conference.

[1] http://clef2018.clef-initiative.eu/

The conference integrated a series of workshops presenting the results of lab-based comparative evaluations. CLEF 2018 was the ninth year of the CLEF Conference and the 19th year of the CLEF initiative as a forum for IR Evaluation.
The labs were selected in peer review based on their innovation potential and the quality of the resources created. The labs represented scientific challenges based on new data sets and real-world problems in multimodal and multilingual information access. These data sets provide unique opportunities for scientists to explore collections, to develop solutions for these problems, to receive feedback on the performance of their solutions, and to discuss the issues with peers at the workshops. In addition to these workshops, the ten benchmarking labs reported results of their year-long activities in overview talks and lab sessions. Overview papers describing each of these labs are provided in this volume. The full details for each lab are contained in a separate publication, the Working Notes, which are available online [2].

The ten labs running as part of CLEF 2018 were as follows:

CENTRE@CLEF 2018 – CLEF/NTCIR/TREC Reproducibility [3] aims to run a joint CLEF/NTCIR/TREC task challenging participants: (1) to reproduce the best results of the best/most interesting systems in previous editions of CLEF/NTCIR/TREC by using standard open source IR systems; (2) to contribute back to the community the additional components and resources developed to reproduce the results, in order to improve existing open source systems.

CheckThat! [4] aims to foster the development of technology capable of both spotting and verifying check-worthy claims in political debates in English and Arabic.

Dynamic Search for Complex Tasks [5]: The lab strives to answer one key question: how can we evaluate, and consequently build, dynamic search algorithms? The 2018 lab focuses on the development of an evaluation framework, where participants submit "querying agents" that generate queries to be submitted to a static retrieval system. Effective "querying agents" can then simulate users toward developing dynamic search systems.

CLEFeHealth [6] provides scenarios that aim to ease patients' and nurses' understanding and accessing of e-Health information.
The goals of the lab are to develop processing methods and resources in a multilingual setting to enrich difficult-to-understand e-Health texts, and to provide valuable documentation. The tasks are: multilingual information extraction; technologically assisted reviews in empirical medicine; and patient-centered information retrieval.

ImageCLEF [7] organizes three main tasks and a pilot task: (1) a caption prediction task that aims at predicting the caption of a figure from the biomedical literature based only on the figure image; (2) a tuberculosis task that aims at detecting the tuberculosis type, severity, and drug resistance from CT (computed tomography) volumes of the lung; (3) a lifelog task (videos, images, and other sources) about daily activities understanding and moment retrieval; and (4) a pilot task on visual question answering where systems are tasked with answering medical questions.

LifeCLEF [8] aims at boosting research on the identification of living organisms and on the production of biodiversity data in general. Through its biodiversity informatics-related challenges, LifeCLEF is intended to push the boundaries of the state of the art in several research directions at the frontier of multimedia information retrieval, machine learning, and knowledge engineering.

MC2 [9] mainly focuses on developing processing methods and resources to mine the social media (SM) sphere surrounding cultural events such as festivals, music, books, movies, and museums. Following previous editions (CMC 2016 and MC2 2017), the 2018 edition focused on argumentative mining and multilingual cross-SM search.

[2] http://ceur-ws.org/Vol-2125/
[3] http://www.centre-eval.org/clef2018/
[4] http://alt.qcri.org/clef2018-factcheck/
[5] https://ekanou.github.io/dynamicsearch/
[6] https://sites.google.com/view/clef-ehealth-2018/
[7] http://www.imageclef.org/2018
PAN [10] is a networking initiative for digital text forensics, where researchers and practitioners study technologies that analyze texts with regard to originality, authorship, and trustworthiness. PAN offered three tasks at CLEF 2018 with new evaluation resources consisting of large-scale corpora, performance measures, and Web services that allow for meaningful evaluations. The main goal is to provide for sustainable and reproducible evaluations, to get a clear view of the capabilities of state-of-the-art algorithms. The tasks are: author identification; author profiling; and author obfuscation.

Early Risk Prediction on the Internet (eRisk) [11] explores issues of evaluation methodology, effectiveness metrics, and other processes related to early risk detection. Early detection technologies can be employed in different areas, particularly those related to health and safety. For instance, early alerts could be sent when a predator starts interacting with a child for sexual purposes, or when a potential offender starts publishing antisocial threats on a blog, forum, or social network. Our main goal is to pioneer a new interdisciplinary research area that would be potentially applicable to a wide variety of situations and to many different personal profiles. eRisk 2018 had two campaign-style tasks: early detection of signs of depression and early detection of signs of anorexia.

Personalized Information Retrieval at CLEF (PIR-CLEF) [12] provides a framework for the evaluation of personalized information retrieval (PIR). Current approaches to the evaluation of PIR are user-centric, mostly based on user studies, i.e., they rely on experiments that involve real users in a supervised environment. PIR-CLEF aims to develop and demonstrate a methodology for the evaluation of personalized search that enables repeatable experiments. The main aim is to enable research groups working on PIR to both experiment with and provide feedback on the proposed PIR evaluation methodology.

[8] http://www.lifeclef.org/
[9] https://mc2.talne.eu/
[10] http://pan.webis.de/
[11] http://early.irlab.org/
[12] http://www.ir.disco.unimib.it/pir-clef2018/

Avignon is famous for its medieval architecture and its international theater festival. The social program of CLEF 2018 set up a Science and Music Festival in the medieval downtown at the Théâtre des Halles [13] and surrounding gardens from Tuesday to Thursday. Music is a very popular hobby among members of the scientific community. Evenings were a mix of music and participatory science around PlantNet, OpenStreetMaps, and Wikipedia. Tuesday was especially devoted to welcoming students at CLEF. On Wednesday the focus was on IR scientific societies around the world, mixing all CLEF languages in one evening. Finally, science outreach activities were carried out on Thursday; local musicians and students looking for a good time were invited to come and meet the participants of the CLEF conference.

The success of CLEF 2018 would not have been possible without the huge effort of several people and organizations, including the CLEF Association [14], the PC, the Lab Organizing Committee, the local organization committee in Avignon, the reviewers, and the many students and volunteers who contributed.

July 2018

Patrice Bellot
Chiraz Trabelsi
Josiane Mothe
Fionn Murtagh
Jian Yun Nie
Laure Soulier
Eric SanJuan
Linda Cappellato
Nicola Ferro

[13] http://www.theatredeshalles.com/
[14] http://www.clef-initiative.eu/association

Organization

CLEF 2018, Conference and Labs of the Evaluation Forum – Experimental IR meets Multilinguality, Multimodality, and Interaction, was hosted by the University of Avignon and jointly organized by the Avignon, Marseille, and Toulon Universities, France.
General Chairs
Patrice Bellot, Aix-Marseille Université – CNRS LSIS, France
Chiraz Trabelsi, University of Tunis El Manar, Tunisia

Program Chairs
Josiane Mothe, SIG, IRIT, France
Fionn Murtagh, University of Huddersfield, UK

Lab Chairs
Jian Yun Nie, DIRO, Université de Montréal, Canada
Laure Soulier, LIP6, UPMC, France

Proceedings Chairs
Linda Cappellato, University of Padua, Italy
Nicola Ferro, University of Padua, Italy

Publicity Chair
Adrian Chifu, Aix-Marseille Université – CNRS LSIS, France

Science Outreach Program Chairs
Aurelia Barriere, UAPV, France
Mathieu Feryn, UAPV, France

Sponsoring Chair
Malek Hajjem, UAPV, France
