RESEARCH ARTICLE

Are 6-month-old human infants able to transfer emotional information (happy or angry) from voices to faces? An eye-tracking study

Amaya Palama1, Jennifer Malsert1,2, Edouard Gentaz1,2,3*

1 SensoriMotor, Affective and Social Development Laboratory, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland, 2 Swiss Center for Affective Sciences, Campus Biotech, University of Geneva, Geneva, Switzerland, 3 CNRS, Grenoble, France

* [email protected]

Abstract

The present study examined whether 6-month-old infants could transfer amodal information (i.e. independently of sensory modalities) from emotional voices to emotional faces. Thus, sequences of successive emotional stimuli (voice or face, from one sensory modality, auditory, to another sensory modality, visual), corresponding to a cross-modal transfer, were displayed to 24 infants. Each sequence presented an emotional (angry or happy) or neutral voice, uniquely, followed by the simultaneous presentation of two static emotional faces (angry or happy, congruent or incongruent with the emotional voice). Eye movements in response to the visual stimuli were recorded with an eye-tracker. First, results suggested no difference in infants' looking time to the happy or angry face after listening to the neutral voice or the angry voice. Nevertheless, after listening to the happy voice, infants looked longer at the incongruent angry face (the mouth area in particular) than the congruent happy face. These results revealed that a cross-modal transfer (from the auditory to the visual modality) is possible for 6-month-old infants only after the presentation of a happy voice, suggesting that they recognize this emotion amodally.

Citation: Palama A, Malsert J, Gentaz E (2018) Are 6-month-old human infants able to transfer emotional information (happy or angry) from voices to faces? An eye-tracking study. PLoS ONE 13(4): e0194579. https://doi.org/10.1371/journal.pone.0194579

Editor: Jordy Kaufman, Swinburne University of Technology, AUSTRALIA

Received: July 12, 2017; Accepted: March 6, 2018; Published: April 11, 2018

Copyright: © 2018 Palama et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability Statement: All relevant data are within the paper and its Supporting Information files.

Funding: This research was supported by the Swiss National Fund for the research grant 100019-156073 awarded to E.G.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Expressing emotions via facial expressions, voices or even body movements helps to transmit one's internal state and intentions to others [1]. Human infants are able to recognize emotions expressed by the people in their environment (parents, brothers and sisters, etc.); this adaptive ability is essential for infants to interact with these people [2]. However, perceiving emotional expressions is not trivial for infants, and the development of this ability depends on the type of emotions expressed and their mode of presentation [3-5].

The spontaneous visual preference for happy faces, observed in specific conditions in newborns [6,7], generally persists until 5 months of age and seems to decline after that.
More particularly, at 3 months old, the amount of time infants look at a happy face is greater than the amount of time they spend looking at a neutral one [8]. Additionally, at 4 months old, infants' first fixations are more often directed toward happy faces than neutral faces [9]. Nevertheless, results show that this visual preference is influenced by other facial dimensions; for example, the preference for happy faces is limited to female faces in 3.5-month-olds [10]. This difference may be explained by the different experiences with male and female faces acquired over the first few days of life [11]. Although this preference for happiness is not reported after 5 months [12], it may still be observed in some cases at 7 months [13,14].

The visual discrimination between happiness and other expressions is demonstrated from 2 to 5 months [3]. Discrimination between surprised [15,16] and angry (frowning) faces [17] occurs at 3 months, and between happy and sad faces at 3-5 months [18]. At 5 months, infants are able to discriminate between happy and neutral faces [19], as well as between happy and fearful faces [20]. Studies have shown a categorical discrimination between happiness and several other emotions (surprise, sadness, fear) for 6- to 7-month-old children (for reviews [3-5]), as demonstrated by identity-invariant categorization (i.e. infants can categorize the emotion presented by a different identity as the same emotion), e.g. [5,15,18,21,22] for audio-visual stimuli, and by categorical boundary effects (i.e. in emotional morphing, the point in the continuum of emotional expression when the infant perceived the face as a specific emotional category) (e.g. [23]). However, no evidence for a valence-based categorization of expressions (i.e. categorization by shared emotional valence, positive or negative) was found in 7-month-old infants [24]. Overall, positive discrimination for specific contrasts appears earlier in paradigms involving just one or a limited number of face identities (e.g. [17]), and later in paradigms involving the extraction of expressions across multiple identities [15].

A review of the studies regarding the perceptual development of emotional expressions suggests that the sensory mode in which a stimulus is presented, whether it be unimodal or multimodal, plays a significant role in an infant's ability to discriminate emotions [3]. For example, Flom and Bahrick [25] showed that infants can discriminate among happiness, anger and sadness as of 5 months with unimodal auditory stimuli and as of 7 months with unimodal visual stimuli. Furthermore, at 4 months, infants are able to discriminate among happiness, anger and sadness with multimodal dynamic (audio-visual) stimuli, i.e. when the sounds and the emotional faces are shown simultaneously and synchronized. More evidence of multimodal matching has been reported in 3- to 4-month-old infants for audio-visual matching of happiness versus sadness (concordant > discordant) and happiness versus anger (discordant > concordant) expressions of the mother [26], as well as for visual-olfactory matching of happy versus disgusted expressions [27]. However, the visual-olfactory matching appears limited to the happy expression and is not present at 5 months. Evidence of audio-visual matching also exists for happy and angry (concordant > discordant) expressions at 6-7 months [13,28].
It should be noted that most of the previous behavioral studies used videos in individual testing sessions. Generally, the experimenters manually coded the infant's gaze as being either to the left side, the right side, or outside of the screen, generating raw looking data. This analysis procedure is not very accurate and does not allow for an examination of the specific face areas (eyes and mouth) explored by each infant as a function of the conditions. For the aim of this study, we recorded the eye movements that occurred in response to the visual stimuli in each of the 6 test phases using an eye-tracker. There have been few studies that have examined ocular movement with an eye-tracking device in infants. However, eye-tracking, which precisely calculates the time and direction of the gaze, allows for spatial and temporal precision and accuracy. Besides fixations and saccades, the eye-tracker allows one to examine specific areas of interest (AOIs) on the stimulus presented, such as the eyes and mouth. Depending on the type of emotion, some regions of the face may be more useful than others in helping an infant to determine an emotion. Schurgin et al.'s study [29] shows that by observing an adult's eye movements on a picture, one can predict the emotion that is presented on it.

A recent study [30] using eye-tracking and dynamic emotional faces with infants aged from 3 to 12 months showed that younger infants focused their attention on the eyes and the external features of emotional faces. However, the visual attention of older infants (7- and 12-month-olds) depended on the emotion that was displayed. In this study, the mouth drew the most attention for smiling faces, the eyes and eyebrows drew the most attention for fearful and angry faces, and the upper nose area drew attention for disgusted faces. Another study by the same authors [31] demonstrated that 7-month-old infants looked longer at areas of interest of a neutral face according to the valence of odors smelled beforehand. With a pleasant (strawberry) scent, the infants looked more at the neutral face, particularly the eyes, eyebrows, nose and mouth areas, whereas with an unpleasant (strong cheese) odor, they looked more at the upper nose area. As a function of the internal states provoked by the smell, the infants searched for reaction cues on the faces presented. Amso et al. [32] found a positive correlation between the time spent looking at the eye area and the ability to discriminate between happy and fearful expressions after having been habituated to fearful expressions at 6 to 11 months old. In another study, Hunnius et al. [33] showed that 4- and 7-month-old infants looked less at the inner feature area (mouth, eyes and nose) of threat-related expressions (anger or fear) compared to non-threat-related expressions (happy, sad or neutral).

Nonetheless, the existence of the ability to discriminate emotional expressions in unimodal or multimodal conditions does not allow us to determine whether it results from an amodal representation of the emotion or from a sensitivity to specific perceptual features, whether they be visual and/or auditory. Some studies showed that infants use cues such as the salience of teeth at 4 months, rather than emotions, when comparing two emotional faces [34]. The findings of the behavioral studies which used experimental paradigms involving just one or a limited number of face identities do not prove that infants are unable to form emotional representations. However, they confirm that sensitivity to perceptual variables contributes to infants' performances in many experiments designed to assess sensitivity to emotion.
A relevant way to rule out this difficulty is to study the recognition of emotional expression in a cross-modal task (for review [35]). These data provide evidence that infants can code information in an auditory or tactile mode and then perceive this information in a visual mode, despite several differences in size, volume, texture, shape, etc. (such as number [36] or object unity [37]). In this perspective, a similar way to address the question of an amodal representation of emotions would be to investigate cross-modal emotional correspondence from auditory to visual emotional stimuli.

The aim of the present study is to evaluate whether the ability to discriminate emotional expressions is founded on the nature of the emotion per se, amodally (i.e. independently of sensory modalities), or on specific physical characteristics of the stimuli (faces or voices). To help answer this question, we chose to use a paradigm with a successive cross-modal transfer from emotional voices to emotional faces. To our knowledge, no such experiment has been conducted on infants. This cross-modal transfer of emotional information from audition to vision consists of two successive phases: an auditory familiarization phase and then a visual test phase. This task is very difficult because it involves a serial mapping process in which emotional information is extracted in an audio format and transformed into a visual format. Thus, if infants are able to successfully transfer the emotional information, it would support the hypothesis that they are able to recognize the emotions amodally, not simply via physical features (pictorial or acoustic). The studies showing the categorical discrimination of emotions (i.e. the extraction of expressions across multiple identities) also support the hypothesis that infants are able to form amodal emotional representations.

Our experiment consists of six sequences of cross-modal transfers that were individually shown to each infant. The study began with a baseline condition in which a neutral voice was presented for 20 seconds during 2 trials, followed by the two emotional faces (happy and angry) presented simultaneously for 10 seconds. The goal was to obtain a baseline of any spontaneous preference in looking time between happy and angry faces. This continued with the experimental conditions, in which infants received four different sequences corresponding to an emotional voice (happy or angry) presented for 20 seconds (auditory familiarization phase), followed by the two emotional faces presented simultaneously (one familiar and the other novel vis-à-vis the emotional voice) for 10 seconds (visual test phase without any sound).

We hypothesized that if infants had an amodal representation of emotion, they would be able to detect the correspondence between an emotional voice and a visual face containing the same emotion. In this case, a reaction to novelty was expected, i.e. a longer looking time for the non-matching face. Thus, we expected that infants would prefer the novel face. Furthermore, because happiness is the first emotion infants are able to discriminate, we expected the happy expression to be better transferred than the angry one.

Additionally, we examined whether visual preference is dependent on specific areas, such as the eyes and/or mouth of each of the faces, after the auditory familiarization. Interestingly, two different infant studies regarding the face areas looked at as a function of the emotion presented provided contradictory results. One study [30] showed that infants looked longer at the mouth area for happy faces and at the eyes for angry faces. However, the other study [33] showed that infants look longer at the mouth for angry faces. Evidently, infants seem to be drawn to these two areas when presented with these emotional faces.
Therefore, we examined the mouth and eye areas for both of the emotional faces. In addition, to explore the gaze further, we also examined the first fixations of each visual test phase for each infant, as well as peak looks at the face, the mouth and the eyes for each emotional face.

Finally, we decided to investigate the rarely analyzed gender effect, because contradictory effects have been reported in previous experiments. Of the studies that examined this effect, two did not observe any differences between males and females [38,39], while one observed a significant difference between females and males in emotion recognition, demonstrating that 5-month-old girls recognized emotions similarly to 6-month-old boys [15].

Method

Participants

Twenty-four full-term (at least 37 weeks of gestation) 6-month-old infants (13 females; mean age = 6.03 months ± 0.32, range = 5.2-6.5 months) were included in the final sample of the study. Because of the difficulty of applying the eye-tracking technique to infants, a substantial amount of data could not be recorded. Thirty-one additional infants were observed but excluded from the final sample due to technical failure of the eye-tracker being unable to find the pupil (seventeen), excessive movement resulting in loss of gaze data (two), noisy eye-tracking data due to unsuccessful calibration (three), defined as more than 2° of deviation in the x and y axes, inattentiveness to stimuli (looking at the screen less than a third of the entire time) (one), crying (four) or fussiness (four). The descriptive characteristics of the final sample are as follows: the mean age of the mothers was 33.01 (± 4.6) years and 35.56 (± 5.9) years for the fathers. The majority of the parents that participated in the study were married (N = 14) or cohabiting (N = 9), while one parent was a single mother raising her child alone (N = 1). The families' socioeconomic status (SES) was calculated using the Largo scale based on paternal occupation and maternal education, ranging from 2 (the highest SES) to 12 (the lowest SES) [40]. The mean SES of the families in the sample was 3.42 ± 1.47, range = 2-8. Approval for the study was given by the Ethics Committees of the Faculty of Psychology and Educational Sciences of Geneva, and all parents gave written informed consent for their children's participation in the experiment. The experiment was performed in accordance with the relevant guidelines and regulations.

Stimuli

The emotional nonverbal auditory stimuli of happiness, anger and neutrality come from the "Montreal Affective Voices" database [41]. They are expressive onomatopoeic stimuli based on the emission of the vowel /a/. Each auditory stimulus was a loop of a one-second voice with a break of 1 second between each repetition, for a total clip of 20 seconds. Note that these are the vocal productions of only one woman (ref: SF60). The volume of the auditory stimuli presented to babies did not exceed 60 dBA.

The visual stimuli used were emotional (happy and angry) faces of a woman taken from the database "The Karolinska Directed Emotional Faces" (KDEF) [42]. These pictures are 9.1 x 9.1 cm, in black and white, and are presented on a medium gray background (RGB 100, 100, 100). The hair is not visible on the stimuli, to avoid potential biases of attention toward the external elements of the face [43]. Because studies showed that 4-month-old infants discriminate female faces more easily than male faces [10], we tested emotional faces represented by the same woman (ref: SF4). Faces are presented in pairs, pseudo-randomized for left and right presentation (Fig 1).
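For illustration only, the 20-second familiarization clips described above (a 1 s /a/ vocalization alternating with 1 s of silence) could be assembled as in the following minimal sketch. This is not the authors' stimulus-preparation code; the file names are hypothetical and a mono WAV clip is assumed.

```python
import numpy as np
import soundfile as sf  # assumed audio I/O library; any WAV reader would do

# Hypothetical file name for the 1 s /a/ vocalization (e.g. from the
# Montreal Affective Voices set); assumed to be a mono recording.
clip, sr = sf.read("happy_a.wav")

# 1 s of silence between repetitions, matching the clip's sample rate.
silence = np.zeros(int(sr * 1.0))

# One voice + one break = a 2 s cycle; 10 cycles give the 20 s clip.
cycle = np.concatenate([clip, silence])
loop = np.tile(cycle, 10)

sf.write("happy_loop_20s.wav", loop, sr)
```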
Experimental procedure

Each infant was comfortably installed in a suitable seat, placed in an experimental cubicle in Geneva's BabyLab. The stimulus display screen measured 47.5 cm x 30 cm, with a spatial resolution of 1680 x 1050 pixels. The baby was placed at a distance of 60 cm from the screen; at this distance, the visual stimuli subtended 8.7° of visual angle. To focus the infant's attention on the screen, just before starting the experiment, we presented a cartoon extracted from "Le Monde des petits". The gaze on the visual stimuli was recorded with an SMI RED250 eye-tracker (SensoMotoric Instruments GmbH, Teltow, Germany).

Fig 1. Visual stimuli. The angry face (right) and the happy face (left), with faces from The Karolinska Directed Emotional Faces (KDEF). https://doi.org/10.1371/journal.pone.0194579.g001

The experiment started with a 5-point calibration phase with the eye-tracker: an animated image appeared at 5 different locations covering the whole surface of the screen. This phase was repeated until a satisfactory calibration (less than 2° of deviation in the x and y axes) was achieved.

In this experiment, each trial consisted of exposure to a voice (neutral, happy or angry prosody) for 20 seconds, accompanied by a black display screen, as an auditory familiarization phase. Afterwards, a pair of emotional faces (happy and angry) was presented for 10 seconds during the visual test phase. The side of presentation of the happy and angry faces was counterbalanced for each voice.

The experiment was composed of 6 trials in this order: first, in order to obtain the baseline of spontaneous preferences for infants, a neutral voice was presented during the first 2 trials, followed by the 2 emotional faces, which were laterally counterbalanced. The next 4 trials, the test trials, consisted of the presentation of the 2 emotional voices, each followed by the 2 emotional faces, laterally counterbalanced for each emotional voice, in succession. The happy voice was presented first, to avoid triggering a negative reaction with the negative stimulus [44]. The presentation of the 6 trials (sequences of audio-visual transfer) lasted 3 minutes for each infant (Fig 2).

Fig 2. Schematic representation of the successive presentation of all stimuli. https://doi.org/10.1371/journal.pone.0194579.g002

Data analysis

All the data were extracted using SMI's BeGaze analysis software. The total looking time in seconds at the whole face and at the Areas of Interest (AOIs) was calculated as the net dwell time (the length of time spent looking at the AOIs). We defined one general AOI for the whole face (Fig 3) and two specific ones for the eyes and the mouth (Fig 4), for each type of emotional expression (data in S1 Dataset). Peak look duration was calculated in milliseconds as the longest unbroken look at the screen for the same 3 AOIs in each emotional face (data in S2 Dataset). We performed repeated-measures analyses of variance (ANOVA) on the whole-face and specific-AOI looking times and peak looks. The proportions of first fixations toward the faces in each trial (24 infants x 2 trials by voice = maximum 48 first looks for each voice) were also analyzed with t-tests (data in S3 Dataset). Statistical analyses were conducted using Statistica 13. The significance threshold was .05, and Bonferroni tests were performed to determine significant differences; effect sizes are given as partial eta-squared (ηp²) for ANOVAs.

Fig 3. Area of interest representing the whole face. The angry face (right) and the happy face (left). Faces from The Karolinska Directed Emotional Faces (KDEF). https://doi.org/10.1371/journal.pone.0194579.g003

Fig 4. Areas of interest representing the eyes and the mouth. The angry face (right) and the happy face (left). Faces from The Karolinska Directed Emotional Faces (KDEF). https://doi.org/10.1371/journal.pone.0194579.g004
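As a consistency check (ours, not part of the original paper), the reported 8.7° of visual angle follows from the 9.1 cm stimulus width and the 60 cm viewing distance via the standard formula:

$$\theta = 2\arctan\left(\frac{9.1/2}{60}\right) \approx 2\arctan(0.0758) \approx 8.7^{\circ}$$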
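The looking-time measures themselves were computed in BeGaze. Purely to illustrate the definitions above (net dwell time and peak look duration over an AOI), here is a minimal Python sketch; the function names, the rectangular-AOI simplification, and the assumption of evenly sampled gaze data (e.g. 250 Hz for the RED250) are ours, not the authors'.

```python
import numpy as np

def net_dwell_time(timestamps, x, y, aoi):
    """Total time (s) that gaze samples fall inside a rectangular AOI.

    timestamps: sample times in seconds; x, y: gaze coordinates in pixels;
    aoi: (left, top, right, bottom). Assumes an (approximately) constant
    sampling interval, e.g. 1/250 s for a 250 Hz tracker.
    """
    dt = np.median(np.diff(timestamps))
    left, top, right, bottom = aoi
    inside = (x >= left) & (x <= right) & (y >= top) & (y <= bottom)
    return inside.sum() * dt

def peak_look(timestamps, x, y, aoi):
    """Longest unbroken run of samples inside the AOI, in milliseconds."""
    dt = np.median(np.diff(timestamps))
    left, top, right, bottom = aoi
    inside = (x >= left) & (x <= right) & (y >= top) & (y <= bottom)
    longest = current = 0
    for hit in inside:
        current = current + 1 if hit else 0   # extend or reset the run
        longest = max(longest, current)
    return longest * dt * 1000.0
```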
Results

Baseline condition

Table 1 presents the results of the baseline condition for the looking time at the whole face and the AOIs (mouth and eyes), as well as the first fixations, for the happy or angry face presented after the neutral voice. We found no significant difference concerning the looking time at the emotional faces, F(1,23) = 3.135, p = .09, ηp² = 0.12, the first fixations at faces (t(47) = 0.58, p = .56; single Student's t-test) or the peak looks (F(1,23) = 2.02, p = .168, ηp² = 0.08). There was no difference concerning the looking time at the emotional AOIs of the mouth, F(1,23) = 2.89, p = .103, or the eyes, F(1,23) = 0.15, p = .701. However, for the peak looks, we found an emotional face effect (F(1,23) = 5.34, p < .05, ηp² = 0.18), suggesting that angry AOIs triggered longer fixations (420 ± 67 ms) than happy ones (307 ± 39 ms). Moreover, we found a significant interaction between emotional faces and AOIs (F(1,23) = 5.59, p < .05, ηp² = 0.19). A pre-planned comparison showed that the angry mouth (523 ± 125 ms) seemed to involve longer fixations than the happy mouth (277 ± 61 ms) (F(1,23) = 6.93, p < .05), while the angry and happy eyes were looked at equally (F(1,23) = 0.16, p = .688). These results are in accordance with the results after Bonferroni corrections; only the angry mouth seemed to trigger longer fixations than the happy mouth (p = .03).

Table 1. Results in the baseline condition of the visual test phase analyses.

Neutral voice                       | Both faces   | Angry face  | Happy face  | Test value       | p value
Looking time at faces (s)           | 6.75 ± 0.53  | 3.75 ± 0.29 | 3.00 ± 0.23 | F(1,23) = 3.135  | .09 NS
  (% of looking time)               | 68%          | 56%         | 44%         |                  |
Peak looks at faces (ms)            | 691 ± 81     | 389 ± 51    | 301 ± 30    | F(1,23) = 2.02   | .168 NS
Looking time at AOIs (s):           |              |             |             |                  |
  Mouth                             | 1.51 ± 0.36  | 0.94 ± 0.19 | 0.57 ± 0.17 | F(1,23) = 2.89   | .103 NS
  (% of looking time)               | 22%          | 62%         | 38%         |                  |
  Eyes                              | 1.43 ± 0.24  | 0.74 ± 0.12 | 0.69 ± 0.12 | F(1,23) = 0.15   | .701 NS
  (% of looking time)               | 21%          | 52%         | 48%         |                  |
Peak looks at AOIs (ms):            |              |             |             |                  |
  Mouth                             | 800 ± 186    | 523 ± 125   | 277 ± 61    | F(1,23) = 6.93   | .014*
  Eyes                              | 653 ± 125    | 317 ± 57    | 336 ± 68    | F(1,23) = 0.16   | .689 NS
First fixations at faces, N; %      | 46; 96%      | 21; 46%     | 25; 54%     | t(47) = 0.58     | .56 NS
  (Ntot = 48)                       |              |             |             |                  |

Infants' mean ± standard error and percentage of looking time (s), mean ± standard error of peak looks (ms) at faces and AOIs, and number and percentage of first fixations for the happy or angry face after the neutral voice. * p < .05, NS = non-significant result. https://doi.org/10.1371/journal.pone.0194579.t001

Preliminary analyses of the gender effect on looking times

A 2 (emotional voice familiarization condition: angry or happy) x 2 (gender: male or female) x 2 (emotional face: happy or angry) ANOVA was performed on the looking times, with the voice conditions and emotional faces as within-subjects factors and gender as a between-subjects factor. The gender effect was not significant (F(1,22) = .36, p = .56, ηp² = .02) and did not interact with other factors (all p > .05).

A 2 (emotional voice familiarization condition) x 2 (gender) x 2 (emotional face) x 2 (AOIs: mouth or eyes) ANOVA was performed on the looking times, with the emotional voice conditions, AOIs and emotional faces as within-subjects factors and gender as a between-subjects factor. The gender effect was not significant (F(1,22) = .47, p = .50, ηp² = .02) and did not interact with other factors (all p > .05). Consequently, results were collapsed across gender in the following analyses.
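The paper reports single Student's t-tests on the first fixations (e.g. t(47) = 0.58 in the baseline above). One plausible reading, and it is only our assumption about the analysis, is a one-sample t-test of the binary first-fixation side against the 50% chance level, sketched below with placeholder data.

```python
import numpy as np
from scipy import stats

# Hypothetical coding (the paper does not spell this out): 1 if the first
# fixation of a trial landed on the angry face, 0 if on the happy face,
# pooled over 24 infants x 2 trials = up to 48 first looks per voice.
first_on_angry = np.array([1, 0, 1, 1, 0, 1] * 8)  # placeholder data, N = 48

# One-sample t-test against chance (0.5); df = N - 1 = 47 here.
t, p = stats.ttest_1samp(first_on_angry, 0.5)
print(f"t({len(first_on_angry) - 1}) = {t:.2f}, p = {p:.3f}")
```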
Main analyses: Looking times, first fixations and peak looks at whole faces and AOIs

Table 2 presents the results of the visual test phase for the looking time at faces and AOIs, the visual preferences of the infants, and their first fixations for the happy or angry face presented after the emotional voices (angry or happy).

Table 2. Results of the visual test phase analyses.

Happy voice                         | Both faces   | Angry face  | Happy face  | Test value       | p value
Looking time at faces (s)           | 5.33 ± 0.52  | 3.04 ± 0.24 | 2.29 ± 0.28 | F(1,23) = 4.85   | .037*
  (% of looking time)               | 53%          | 56%         | 44%         |                  |
Peak looks at faces (ms)            | 796 ± 140    | 446 ± 79    | 350 ± 61    | F(1,23) = 1.30   | .265 NS
Looking time at AOIs (s):           |              |             |             |                  |
  Mouth                             | 1.21 ± 0.31  | 0.83 ± 0.20 | 0.38 ± 0.11 | F(1,23) = 8.32   | .008**
  (% of looking time)               | 12%          | 69%         | 31%         |                  |
  Eyes                              | 0.99 ± 0.25  | 0.44 ± 0.11 | 0.55 ± 0.13 | F(1,23) = 0.54   | .470 NS
  (% of looking time)               | 9%           | 45%         | 55%         |                  |
Peak looks at AOIs (ms):            |              |             |             |                  |
  Mouth                             | 589 ± 114    | 347 ± 51    | 242 ± 64    | F(1,23) = 2.43   | .132 NS
  Eyes                              | 546 ± 102    | 243 ± 37    | 303 ± 65    | F(1,23) = 1.09   | .307 NS
First fixations at faces, N; %      | 44; 92%      | 29; 66%     | 15; 34%     | t(47) = 2.19     | .033*
  (Ntot = 48)                       |              |             |             |                  |

Angry voice                         | Both faces   | Angry face  | Happy face  | Test value       | p value
Looking time at faces (s)           | 4.91 ± 0.53  | 2.48 ± 0.24 | 2.42 ± 0.29 | F(1,23) = 0.04   | .843 NS
  (% of looking time)               | 49%          | 56%         | 44%         |                  |
Peak looks at faces (ms)            | 667 ± 101    | 313 ± 38    | 354 ± 63    | F(1,23) = 0.36   | .553 NS
Looking time at AOIs (s):           |              |             |             |                  |
  Mouth                             | 1.19 ± 0.29  | 0.72 ± 0.13 | 0.47 ± 0.16 | F(1,23) = 2.24   | .148 NS
  (% of looking time)               | 12%          | 60%         | 40%         |                  |
  Eyes                              | 0.44 ± 0.10  | 0.46 ± 0.09 | 0.69 ± 0.12 | F(1,23) = 0.04   | .845 NS
  (% of looking time)               | 10%          | 49%         | 51%         |                  |
Peak looks at AOIs (ms):            |              |             |             |                  |
  Mouth                             | 740 ± 182    | 429 ± 74    | 311 ± 109   | F(1,23) = 1.21   | .285 NS
  Eyes                              | 492 ± 99     | 214 ± 46    | 278 ± 53    | F(1,23) = 1.06   | .313 NS
First fixations at faces, N; %      | 44; 92%      | 17; 36%     | 27; 64%     | t(47) = 1.85     | .069 NS
  (Ntot = 48)                       |              |             |             |                  |

Infants' mean ± standard error and percentage of looking time (s), mean ± standard error of peak looks (ms) at faces and AOIs, and number and percentage of first fixations for the happy or angry face after the happy or angry voice. * p < .05, ** p < .01, NS = non-significant result. https://doi.org/10.1371/journal.pone.0194579.t002

A 2 (emotional voice familiarization condition: angry or happy) x 2 (emotional face: happy or angry, Fig 5) ANOVA was performed on the looking times, with the voice conditions and emotional face as within-subjects factors. The emotional voice familiarization condition was not significant (F(1,23) = 1.51, p = .23, ηp² = .06). The effect of the emotional face was significant (F(1,23) = 7.42, p < .05, ηp² = .244), with a clear visual preference for the angry face (mean ± s.e.m.: 2.76 ± 0.19 s) compared to the happy face (2.35 ± 0.22 s). The interaction between the emotional voice familiarization condition and the emotional face was not significant (F(1,23) = 1.43, p = .24, ηp² = .058). Nevertheless, according to Iacobucci [45], it is possible to examine the effect of a non-significant interaction under certain conditions: he stated that if a simple effect is significant, we can explore its effect on the second, non-significant one. Under these circumstances, we can explore our a priori hypotheses. Therefore, pre-planned comparisons showed that infants looked at the happy and the angry face equally after hearing the angry voice (F(1,23) = .04, p = .843). By contrast, infants looked longer at the angry face than at the happy face after hearing the happy voice (F(1,23) = 4.85, p < .05) (Fig 5). In sum, the looking time for the happy face is not affected by either emotional voice. However, the looking time for the angry face increases after hearing the happy voice.
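The authors ran these analyses in Statistica 13. For readers wanting to reproduce the 2 (voice) x 2 (face) repeated-measures ANOVA and the pre-planned comparisons from the supporting datasets, an equivalent sketch in Python might look like the following; the long-format column names and the file name are hypothetical.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from scipy import stats

# One row per infant x voice x face cell, with hypothetical columns:
# 'infant', 'voice' (happy/angry), 'face' (happy/angry), 'lt' (looking time, s).
df = pd.read_csv("looking_times_long.csv")  # placeholder file name

# 2 (voice) x 2 (face) repeated-measures ANOVA on looking times.
res = AnovaRM(df, depvar="lt", subject="infant",
              within=["voice", "face"]).fit()
print(res.anova_table)

# Pre-planned comparison within the happy-voice condition:
# angry face vs. happy face, as a paired test across infants.
happy_voice = df[df["voice"] == "happy"].pivot(index="infant",
                                               columns="face", values="lt")
t, p = stats.ttest_rel(happy_voice["angry"], happy_voice["happy"])
print(f"happy voice, angry vs happy face: t = {t:.2f}, p = {p:.3f}")
```

For a two-level factor, the pre-planned contrast F(1,23) equals the square of the paired t statistic, so ttest_rel recovers the same comparison as the ANOVA contrast.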
A 2 (emotional voice familiarization condition) x 2 (emotional face) x 2 (AOIs: mouth or eyes, Fig 6) ANOVA was performed on the looking times, with the emotional voice conditions, AOIs and emotional faces as within-subjects factors. Fig 6 presents the means and standard errors of the looking times of the visual test phase for the AOIs (mouth and eyes) as a function of the emotional faces (angry or happy) after each emotional voice familiarization condition (angry or happy).

The effect of AOIs was not significant (F(1,23) = .56, p = .46, ηp² = .024). Infants seem to have looked at the mouth (mean ± s.e.m.: 0.60 ± 0.11 s) and eye (0.47 ± 0.08 s) areas for the same amount of time. The effect of the emotional voice familiarization condition was not