TOWARDS LINK CHARACTERIZATION FROM CONTENT

John Grothendieck, Rutgers University
Allen Gorin, U.S. Department of Defense

ABSTRACT

In processing large volumes of speech and language data, we are often interested in the distribution of languages, speakers, topics, etc. For large data sets, these distributions are typically estimated at a given point in time using pattern classification technology. Such estimates can be highly biased, especially for rare classes. While these biases have been addressed in some applications, they have thus far been ignored in the speech and language literature. This neglect causes significant error for low-frequency classes. Correcting this biased distribution involves exploiting uncertain knowledge of the classifier error patterns. The Metropolis-Hastings algorithm allows us to construct a Bayes estimator for the true class proportions. We experimentally evaluate this algorithm for a speaker recognition task. In this experiment, the Bayes estimator reduces maximum RMSE by a factor of five. Performance is furthermore more consistent, with range of RMSE reduced by a factor of 4.

Index Terms: knowledge acquisition, Monte Carlo methods, speech processing

1. INTRODUCTION

There is increasing interest in characterizing links in a communication network, not simply in terms of message count but by content. For example, what proportion of internet traffic is peer-to-peer? There may be little or no prior knowledge. For communication between humans, characterization can involve any of the standard tasks in language processing. We might assign a categorical label (e.g. language, speaker or topic) to linguistic content encoded in audio, text or document images, then focus on the distribution over these categories. Histograms of these distributions provide useful summary statistics to help humans cope with information overload [1].

Automated classifiers have many uses, but their output is typically biased due to classification errors. Proportional bias increases as the frequency of a class decreases. For example, consider some binary task with a 5% false alarm rate and negligible missed detections. If 20% of the data is truly from the target class, around 24% of the data will be hypothesized as such by the classifier due to false alarms. This is incorrect, but perhaps still useful. However, for a true value of 0.01%, the expected 5% hypothesized proportion is wrong by orders of magnitude. This large proportional bias is unsatisfactory, especially in applications where rare events are of interest.
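To make the arithmetic explicit, the hypothesized target proportion in this two-class example follows directly from the error rates (a worked restatement of the numbers above, with false alarm rate c12 = 0.05 and miss rate c21 ≈ 0):

w1 = (1 − c21)·v1 + c12·(1 − v1) ≈ 0.20 + 0.05 × 0.80 = 0.24

Applying the same formula to v1 = 0.0001 gives w1 ≈ 0.05, roughly 500 times the true value, which is the orders-of-magnitude error noted above.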
Given the classifier error rates, it is straightforward to estimate the most likely class proportions via the E-M algorithm. These can be estimated from some sample set with manual annotation. However, estimates based upon finite data have some degree of uncertainty. Optimal decisions can require understanding of variance: the most likely target class proportion may be 20%, but how plausible is 19%, or 10%? This is a well understood problem in statistics, given the assumption that the test data is drawn from the same population as the training data [2]. To provide variance information rather than a simple point estimate requires a different technical approach.

A hierarchical Bayes model for the true class proportions can incorporate error rate uncertainty. The Metropolis-Hastings (M-H) algorithm [3] allows us to construct the posterior distribution of true class proportions. The posterior mean provides a Bayes estimate of the class proportions, while the posterior variance provides confidence bounds on the estimated proportion.

2. RELATED RESEARCH

Issues of data summarization when using a classifier have not been a traditional focus of Human Language Technology (HLT) research. An appropriate model for classifier errors is presented in [2]; this work however does not address the issue of estimation based upon uncertain error rates. The bias inherent in hard classifier output has been ignored by the speech and language processing community (thus works such as [4] analyze output label proportions rather than true class proportions). The work [5] also seeks a methodology that is valid for all possible class proportions; it further provides an HLT engineer's sketch of Bayesian decision theory. Their interest however is in calibration of score likelihoods conditional on class, analogous to our confusion matrix, rather than updating the hypothesis prior distribution. Natural language processing uses complex classifiers and machine learning techniques, but corpus summary statistics have not been a primary concern.

Research areas involving high-speed, high-volume data streams (such as internet traffic) focus more on issues of speed and scalability. Recently there has been convergence with HLT. Content mining techniques are increasingly used to monitor networks [6], while there is ongoing research on fast language processing scalable to massive data streams. [7] describes one application in (text-based) language and topic identification. As high-volume data processing incorporates imperfect classifiers, classifier bias can seriously impact data analysis.

The medical literature recognizes the issue of classification bias; some authors use confusion matrix inversion, assuming known error rates [8]. A few works note that this is unrealistic [9]. In particular, the technical approach of [10] is very similar to ours. Their paper considers only two classes and relies on a Gibbs sampling scheme dependent on conjugate priors, but is readily extensible to more general classification problems. These results seem to be unknown outside of the epidemiology literature.

Our technical problem requires deducing true class proportions from the classifier's hypothesized proportions and estimated error patterns. From this perspective our solution simply adapts standard Bayesian techniques to a particular mixture problem. The justification for using Markov Chain Monte Carlo (MCMC) numerical estimation is well understood [3], but the practice involves some art [11][12].
3. ESTIMATING CLASS PROPORTIONS

3.1. Introduction

We measure estimator performance via mean squared error (MSE). In this section, we show that hypothesized class proportions act as a shrinkage estimator towards the fixed eigenvector of the classifier error rate matrix. This introduces uncontrolled bias and lack of predictability into the MSE.

In incorporating a model of error rates, the Bayes estimator described in this paper gains some desirable statistical properties. It is consistent in the sense that given unlimited data it must converge to the truth. It is admissible (no strictly lower risk estimator exists) since it is Bayesian for a particular prior [13]. By construction it has minimum expected squared error loss under explicit prior beliefs about the parameters.

3.2. The Distribution of Classifier Hypotheses

Denote by x the value of the true class label for some observation, and by y the hypothesized class label from the classifier output. Assume multinomial samples x and y with associated class probability vectors V and W respectively, where V ≡ {vi = P(x = i)} and W ≡ {wi = P(y = i)}. Improved classifier performance brings W closer to the true V, but accurate estimation of V is possible for imperfect classifiers given accurate knowledge of classifier error rates.

For a given data set and classifier we have a model with class-conditional error probabilities

c_ij = P(y = i | x = j)

The c_ij are independent of the (unknown) true distribution of x. Given probability vectors V for the true-class distribution and W for the hypothesized-class distribution, this leads to the multinomial parameter equation

W = CV    (1)

The matrix C has an eigenvalue 1, thus at least one 'fixed' eigenvector VF such that C·VF = VF. This VF is unique so long as the Markov process defined by transitions C is ergodic (irreducible with recurrent aperiodic states). A sufficient (though not necessary) condition is that no entries of C are zero. In such a case, VF is the unique attractor for all probability vectors V under the action of C: lim_{n→∞} C^n V = VF. This creates the bias in hypothesized versus true class proportions; other vectors V are drawn towards VF. Thus W = CV differs from V except at VF.

Given a set Y of observed classifier output, we denote by y(i) the classifier hypothesis for observation i, where y(i) ∈ {1, ..., K} for a classifier with K categories. Denote the number of observations in Y by NY. We abuse notation and let Y further denote the vector of hypothesized class counts, so Y ∼ Multi(NY, W). Given Y, we have Ŵ = Y/NY, the relative frequency estimator for W. Thus Ŵ is a random variable, while Ŵ(Y) is a fixed value. The expected MSE of Ŵ as an estimator of the true proportions V has the classic decomposition:

E[(Ŵ − V)²] = var(Ŵ) + [E(Ŵ − V)]²    (2)

Consider the 2-class case. When the number of observations NY is large, then var(ŵ1) is small, squared bias dominates the MSE, and the root mean squared error (RMSE) of w1 ≈ |E(w1) − v1|. For smaller NY, var(ŵ1) contributes to the RMSE. Figure 1 shows an example. Estimator ŵ1 suffers from uncontrolled bias due to the shrinkage of V towards VF. The shrinkage depends on both C and the unknown V, so RMSE cannot be predicted without an explicit model for C. Compensating for the bias by estimating C provides a more predictable RMSE.

Fig. 1. RMSE of ŵ1 for 10% EER classifier, high and low variance.
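To make the fixed-point behavior concrete, the following sketch is a minimal NumPy illustration (the 2-class confusion matrix and the 20%/80% split are assumed for the example, not taken from the paper). It applies Eq. (1) and then iterates C to show the pull towards VF:

import numpy as np

# Assumed 2-class confusion matrix, columns indexed by true class:
# C[i, j] = P(y = i | x = j); here 5% errors in each direction.
C = np.array([[0.95, 0.05],
              [0.05, 0.95]])

V = np.array([0.20, 0.80])          # illustrative true class proportions
W = C @ V                           # hypothesized proportions, Eq. (1)
print("W = CV:", W)                 # [0.23, 0.77], biased away from V

# The fixed eigenvector VF satisfies C VF = VF (eigenvalue 1).
vals, vecs = np.linalg.eig(C)
VF = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
VF = VF / VF.sum()
print("VF:", VF)                    # [0.5, 0.5] for this symmetric C

# Repeated application of C drags any probability vector towards VF.
Vn = V.copy()
for _ in range(50):
    Vn = C @ Vn
print("C^50 V:", Vn)                # close to VF

For small v1 the bias w1 − v1 is roughly c12, so the proportional error grows without bound as the class becomes rarer, which is exactly the rare-class regime this paper targets.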
3.3. Hierarchical Bayes Model

Estimation of the error rates C is typically done from some manually labeled corpus L, with l_ij the number of observations with true class j and hypothesized class i. The distribution of the parameter V depends on the distributions of W and C, which in turn depend on L and Y. A hierarchical Bayes model can exploit priors not only on the parameter of interest, but on the other parameters on which its distribution depends.

We model true class and class-conditional output labels as multinomial random variables. Flat priors allow us to model P(W|Y) as a Dirichlet and P(C|L) as a hyper-Dirichlet distribution. The joint distribution P(C,W|L,Y) is more complicated in that the domain of W depends on C. Changing coordinates to P(C,V|L,Y) eliminates that issue, but the data Y provide information on CV rather than directly on V. Thus we construct the posterior P(C,V|L,Y) via random sampling.

3.4. Metropolis-Hastings Estimation of V

Our goal is to estimate the distribution P(V|L,Y), where V is the vector of true class proportions given data L and Y. We have no analytic solution for P(V|L,Y), but do have

P(C, V | L, Y) ∝ P0(C, V) · P(Y | W = CV) · P(L | C)

for prior P0(C,V), by Bayes' rule. We generate random samples of C and V according to the probabilities P(C|V,L,Y) and P(V|C,Y). We recover P(V|L,Y) by projecting onto the marginal distribution.

The M-H algorithm provides a Monte Carlo method for generating samples that are provably convergent to a target distribution. Denote some parameter space by X and the (computable) probability distribution by q(x). M-H performs a random walk in X via a transition kernel π(x, x'). The transition kernel defines a Markov chain, which under suitable conditions (i.e. ergodicity) is guaranteed to converge in probability to the target distribution q(x). See [3] and [14] for more details. We generate a correlated sample of size T as follows:

1. Set initial C0 = Ĉ(L), V0 = C0^{-1} Ŵ(Y).
2. For t in 1 to T:
   (a) Select candidate C' via the independent transition kernel πC = P(C'|L).
       Define W_C' = C' V_{t-1} and W_{c,t-1} = C_{t-1} V_{t-1}.
       Thus αC = P(W_C'|Y,L) / P(W_{c,t-1}|Y,L).
       Accept Ct = C' with probability min(αC, 1).
   (b) Select candidate V' via the transition πV.
       Define W_V' = Ct V' and W_{v,t-1} = Ct V_{t-1}.
       Thus αV = P(W_V'|Y,L) / P(W_{v,t-1}|Y,L).
       Accept Vt = V' with probability min(αV, 1).

This random walk in (C,V) will converge to P(C,V|Y,L). The sequence Vt is guaranteed to converge to the marginal distribution of interest, P(V|Y,L). Given K classes this is O(K²T); the number of classes that can be considered in practice is limited by the amount of labeled data L needed to estimate C rather than by algorithmic complexity.
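As a concrete reading of this procedure, here is a minimal Python/NumPy sketch of the two-step sampler. It is an assumed implementation rather than the authors' code: the flat Dirichlet priors follow Section 3.3, the proposal for C is the per-column Dirichlet posterior given L, and the proposal for V is an independent flat Dirichlet draw, so both acceptance ratios become the likelihood ratios corresponding to αC and αV above.

import numpy as np

def mh_class_proportions(L, Y, T=20000, seed=0):
    # L: (K, K) labeled counts, L[i, j] = observations with true class j
    #    and hypothesized class i.  Y: length-K hypothesized class counts
    #    on the unlabeled data.  Returns the chain of sampled vectors V_t.
    rng = np.random.default_rng(seed)
    K = len(Y)

    def log_lik(W):
        # log P(Y | W), multinomial likelihood up to a constant.
        return np.sum(Y * np.log(W)) if np.all(W > 0) else -np.inf

    # Step 1: C_0 from the labeled counts, V_0 by inverting Eq. (1).
    C = (L + 1.0) / (L + 1.0).sum(axis=0, keepdims=True)
    V = np.linalg.solve(C, Y / Y.sum())
    V = np.clip(V, 1e-6, None)
    V = V / V.sum()

    chain = []
    for t in range(T):
        # (a) Candidate C' ~ P(C | L): one Dirichlet draw per true class.
        C_new = np.column_stack(
            [rng.dirichlet(L[:, j] + 1.0) for j in range(K)])
        if np.log(rng.uniform()) < log_lik(C_new @ V) - log_lik(C @ V):
            C = C_new
        # (b) Candidate V' from a flat Dirichlet; with a flat prior the
        #     acceptance ratio is again a multinomial likelihood ratio.
        V_new = rng.dirichlet(np.ones(K))
        if np.log(rng.uniform()) < log_lik(C @ V_new) - log_lik(C @ V):
            V = V_new
        chain.append(V.copy())
    return np.array(chain)

The posterior mean of the chain (after discarding burn-in) gives the Bayes estimate of V, and posterior quantiles give the confidence bounds mentioned in Section 1. The independence proposal for V keeps the sketch short; in practice a local proposal tuned along the lines of [11][12] mixes far better, especially for rare classes.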
Fig. 2. Operating point: c12 = c21 = 0.053.

4. EXPERIMENTAL EVALUATION ON SPEAKER ID

4.1. Introduction

In this section, we experimentally evaluate the M-H algorithm on a target/non-target speaker identification (SID) task derived from the Switchboard corpus [15]. We will construct this task by randomly selecting 100 speakers (out of nearly 500) to constitute a modeled target set, with the remaining open-set (unmodeled) speakers denoted as non-targets.

We evaluate the RMSE as a function of the true target proportion v1, comparing the RMSE curves for W* and V*. Randomized training (L) and test (Y) sets are generated for values of v1 in the interval [0,1]. Using the algorithm of the previous section, we estimate P(V|L,Y) for each data set and compute the RMSE as a function of v1. We then compare the RMSE of the hypothesized proportion (w1*) and of the Bayes estimated proportion (v1*), as functions of the unknown true v1.

4.2. Data and Experimental Set-up

Andrews and Hernandez [16] provided SID scores for a subset of Switchboard, using the algorithm from [17]. In particular, there are 4837 different voice cuts representing 483 different speakers. To create a target set, 100 speakers were selected at random. To provide a task with a non-negligible error rate, only two trained models were retained for each of the target speakers (i.e. 200 models). No individual models were retained from the open set (non-targets). We defined a simple binary classifier with parameter T as follows. For each voice cut:

1. Find the model scores {si} for the target speakers (200 scores),
2. If max(si) > T then classify the voice cut as "Target", else classify it as "Non-target."

The resulting classification task has an equal error rate around 5%.

We estimate RMSE(v1*) over various values of v1 as follows. Generate random partitions of the 5K voice cuts into training and test sets. Denote the training sets by Li, where the number of voice cuts NL is constant (4111). Denote the test sets by Yi, where the number of voice cuts NY is also constant (726).

The value of the 'true target proportion' vi1 is controlled by constrained generation of the Yi. Denote by Xi1 the number of true target cuts in the data set Yi, where the true proportion of target speakers in that data set is given by vi1 = Xi1/NY. Denote by wi1 = Yi1/NY the hypothesized target proportion in Yi.

For each partition we estimate P(V|Y,L) via M-H. Denote by Nv1 the number of partitions (Li, Yi) with common v1 (100 in our experiments). The true vi1 is known for each partition. This provides an empirical measure for the RMSE of an estimator at a fixed true target proportion:

RMSE(v1* | v1) = ( Σ_{vi1 = v1} (vi1* − v1)² / Nv1 )^{1/2}

and similarly for RMSE(w1*|v1).

We present RMSE(v1*|v1) and RMSE(w1*|v1), based upon 100 random partitions generated for every (approximate) percentile value of v1. Figure 2 shows the two curves at the EER operating point, c12 = c21 = 0.05. Observe that RMSE(w1*) is quite unpredictable, ranging between 0.01 and 0.05 depending on the true value of v1. RMSE(v1*) is both significantly lower and more predictable. The maximum of RMSE(v1*) is a factor of 5 smaller than the maximum of RMSE(w1*). Furthermore, measuring the predictability of the errors by range, 0.007 < RMSE(v1*) < 0.017 at the equal error operating point, while 0.006 < RMSE(w1*) < 0.053. This gives a range of 0.01 versus 0.047, or a 75% relative reduction in the range of v1*. Figure 3 shows the estimator RMSE curves when the false alarm rate (c12) is 2% and the missed detection rate (c21) is 9%.
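The empirical RMSE above reduces to a one-line computation once the per-partition estimates are available; the helper below is a hypothetical illustration (the argument names are not from the paper):

import numpy as np

def rmse_at_fixed_v1(estimates, v1):
    # RMSE(v1* | v1): root mean squared error of the per-partition
    # estimates (Bayes posterior means v_i1*, or raw w_i1) against a
    # common true target proportion v1.
    estimates = np.asarray(estimates, dtype=float)
    return float(np.sqrt(np.mean((estimates - v1) ** 2)))

# e.g. rmse_at_fixed_v1(v1_star_per_partition, 0.05) over the 100
# partitions sharing a given true proportion.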
Fig. 3. Operating point: c12 = 0.024, c21 = 0.088.

4.3. Value Estimation on Streams

One important problem is to identify which of several data streams has the greatest proportion of some target class. If all streams have the same classifier error, the best source of the target class is the one with the highest observed w1*. In practice however streams often differ, for example due to noise and channel effects. In these cases the hypothesized classes W alone can lead to consistently poor decisions.

We generate two data sets with different non-target distributions via biased sampling. Rather than random allocation, we assign a fixed proportion of those points on which the classifier fails to subsets of the data. In particular, we divide the Switchboard data into two halves S1 and S2, but allocate exactly one-third of the classifier errors to S1. This creates overall error rates of 3.6% and 7.1% on S1 and S2 respectively.

We partition each Si into equal pieces Li and Yi. Classifier performance C is modeled independently on each Li to allow for changes. We examine the results of estimation on 1000 partitions (L1, L2, Y1, Y2) for various fixed values of the true target proportion v1 on Y1 and Y2.

If we set target proportions v1 = 0.03 in Y1 and v1 = 0.01 in Y2, the mean value of w1* is 0.064 in Y1 and 0.080 in Y2. The mean value of v1* is 0.031 in Y1 and 0.015 in Y2. Bayes estimation decides Y1 is the richer source of target voice cuts 87.7% of the time; hypothesized classes select it only 0.6% of the time.

With target proportions v1 = 0.047 in Y1 and v1 = 0.01 in Y2, the mean value of w1* is 0.079 in each; the true target difference exactly matches the difference in bias. Means for v1* are 0.047 and 0.015 respectively. Bayes estimation selects the richer source 99.2% of the time. Hypothesized classes have essentially random performance (correct 47.2% of the time). Only as the difference in true class proportions increases beyond 3.5% do hypothesized classes detect the difference. We see that given rare target classes, a difference in false alarm rate can overwhelm the difference in true target proportion.
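One way to mechanize this comparison is to run the Section 3.4 sampler separately on each stream (each with its own labeled set Li and hence its own confusion matrix) and compare the resulting Bayes estimates. The sketch below assumes two such chains and is illustrative rather than the authors' exact procedure:

import numpy as np

def pick_richer_stream(chain_y1, chain_y2, burn_in=1000):
    # chain_y1, chain_y2: arrays of sampled target proportions v1 from
    # independent M-H runs on streams Y1 and Y2 (burn-in discarded below).
    v1_star_1 = float(np.mean(chain_y1[burn_in:]))
    v1_star_2 = float(np.mean(chain_y2[burn_in:]))
    richer = 1 if v1_star_1 > v1_star_2 else 2
    # The full posteriors also give a direct probability of the ordering:
    n = min(len(chain_y1), len(chain_y2)) - burn_in
    p_y1_richer = float(np.mean(
        chain_y1[burn_in:burn_in + n] > chain_y2[burn_in:burn_in + n]))
    return richer, p_y1_richer

Comparing the v1* estimates is the kind of decision behind the 87.7% and 99.2% figures above; comparing raw w1* values instead tends to reward the stream with the higher false alarm rate.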
5. CONCLUSIONS

This paper has addressed the problem of estimating class proportions based on the output of an automated pattern classification system, for example language, speaker or topic identification. We described a hierarchical Bayes model for the true class distribution, which allows construction of a Bayes estimator for the true class proportion.

This algorithm was experimentally evaluated on a binary SID task derived from the Switchboard corpus. This experiment demonstrated that the Bayes estimator of target proportion is far superior to the hypothesized target proportion from the classifier. The maximum RMSE was reduced by a factor of 5, and the range in RMSE (as a measure of variability) is reduced by a factor of 4.

6. REFERENCES

[1] A. L. Gorin, "Coping with Information Overload," in Proceedings of the International Symposium on Large-scale Knowledge Resources, 2006.
[2] J. Langford, "Tutorial on Practical Prediction Theory for Classification," Journal of Machine Learning Research, vol. 6, pp. 273–306, 2005.
[3] S. Chib and E. Greenberg, "Understanding the Metropolis-Hastings Algorithm," The American Statistician, vol. 49, pp. 327–335, 1995.
[4] J. Grothendieck, "Tracking Changes in Language," IEEE Transactions on Speech and Audio Processing, pp. 700–711, 2005.
[5] N. Brümmer and J. du Preez, "Application-independent evaluation of speaker detection," Computer Speech & Language, vol. 20, no. 2-3, pp. 230–275, 2006.
[6] S. Singh, C. Estan, G. Varghese, and S. Savage, "Automated Worm Fingerprinting," in Proc. OSDI, May 2004, pp. 45–60.
[7] S. G. Eick, J. W. Lockwood, R. Loui, A. Levine, J. Mauger, D. J. Weushar, A. Ratner, and J. Byrnes, "Hardware Accelerated Algorithms for Semantic Processing of Document Streams," in IEEE Aerospace Conference, 2006, Paper 10.0802.
[8] N. J. Wald, K. Nanchahal, S. G. Thompson, and H. S. Cuckle, "Does Breathing Other People's Tobacco Smoke Cause Lung Cancer?," British Medical Journal, vol. 293, pp. 1217–1222, 1986.
[9] S. D. Walter and L. M. Irwig, "Estimation of Test Error Rates, Disease Prevalence, and Relative Risk from Misclassified Data: A Review," Journal of Clinical Epidemiology, vol. 41, pp. 923–937, 1988.
[10] L. Joseph, T. Gyorkos, and L. Coupal, "Bayesian estimation of disease prevalence and the parameters of diagnostic tests in the absence of a gold standard," American Journal of Epidemiology, vol. 141, pp. 263–272, 1995.
[11] A. Gelman, G. O. Roberts, and W. R. Gilks, "Efficient Metropolis Jumping Rules," Bayesian Statistics 5, pp. 599–607, 1994.
[12] H. Haario, E. Saksman, and J. Tamminen, "An Adaptive Metropolis Algorithm," Bernoulli, vol. 7, no. 3, pp. 223–242, 2001.
[13] E. L. Lehmann and G. Casella, Theory of Point Estimation, Springer, 1998.
[14] L. Tierney, "Markov Chains for Exploring Posterior Distributions (with discussion)," Annals of Statistics, vol. 22, pp. 1701–1762, 1994.
[15] J. Godfrey and E. Holliman, "SWITCHBOARD-1 Release 2," Linguistic Data Consortium, LDC97S62, 1997.
[16] W. Andrews and J. Hernandez-Cordero, "SREC'05 output on Switchboard I," private communication, 2006.
[17] D. Reynolds, W. Campbell, W. Shen, P. Torres-Carasquillo, and A. Adami, "MIT-Lincoln Laboratory System Description NIST SRE 2005," 2005.
