Hindawi Publishing Corporation
International Journal of Computer Games Technology
Volume 2009, Article ID 573924, 15 pages
doi:10.1155/2009/573924

Research Article

Face to Face: Anthropometry-Based Interactive Face Shape Modeling Using Model Priors

Yu Zhang^1 and Edmond C. Prakash^2

^1 Institute of High Performance Computing, 1 Fusionopolis Way, 16-16 Connexis, Singapore 138632
^2 Department of Computing and Mathematics, Manchester Metropolitan University, Manchester M1 5GD, UK

Correspondence should be addressed to Yu Zhang, zhangyu [email protected]

Received 1 February 2009; Accepted 19 February 2009

Recommended by Suiping Zhou

This paper presents a new anthropometrics-based method for generating realistic, controllable face models. Our method establishes an intuitive and efficient interface to facilitate procedures for interactive 3D face modeling and editing. It takes 3D face scans as examples in order to exploit the variations present in the real faces of individuals. The system automatically learns a model prior from the data sets of example meshes of facial features using principal component analysis (PCA) and uses it to regulate the naturalness of synthesized faces. For each facial feature, we compute a set of anthropometric measurements to parameterize the example meshes into a measurement space. Using PCA coefficients as a compact shape representation, we formulate the face modeling problem in a scattered data interpolation framework which takes the user-specified anthropometric parameters as input. Solving the interpolation problem in a reduced subspace allows us to generate a natural face shape that satisfies the user-specified constraints. At runtime, the new face shape can be generated at an interactive rate. We demonstrate the utility of our method by presenting several applications, including analysis of facial features of subjects in different race groups, facial feature transfer, and adapting face models to a particular population group.
Copyright © 2009 Y. Zhang and E. C. Prakash. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. Introduction

One of the most challenging tasks in graphics modeling is to build an interactive system that allows users to model varied, realistic geometric models of human faces quickly and easily. Applications of such a system range from entertainment to communications: virtual human faces need to be generated for movies, computer games, advertisements, or other virtual environments, and facial avatars are needed for video teleconferencing and other instant communication programs. Some authoring tools for character modeling and animation are available (e.g., Maya [1], Poser [2], Daz Studio [3], PeoplePutty [4]). In these systems, deformation settings are specified manually over the range of possible deformation for hundreds of vertices in order to achieve desired results. An infinite number of deformations exist for a given face mesh, resulting in shapes that range from realistic facial geometries to implausible appearances. Consequently, interactive modeling is often a tedious and complex process requiring substantial technical as well as artistic skill. This problem is compounded by the fact that the slightest deviation from real facial appearance can be immediately perceived as wrong by even the most casual viewer. While existing systems have exquisite control rigs that provide detailed control, these controls are based on general modeling techniques such as point morphing or free-form deformations, and therefore lack intuition and accessibility for novices. Users often face a considerable learning curve to understand and use such control rigs.

To address the lack of intuition in current modeling systems, we aim to leverage anthropometric measurements as control rigs for 3D face modeling. Traditionally, anthropometry, the study of human body measurement, characterizes the human face using linear distance measures between anatomical landmarks or circumferences at predefined locations [5]. Anthropometric parameters provide a familiar interface while still giving users a high level of control. However, although they form a compact description, they do not uniquely specify the shape of the human face. Furthermore, particularly for computer face modeling, the sparse anthropometric measurements taken at a small number of landmarks on the face do not capture the detailed shape variations needed for realism. The desire is to map such sparse data into a fully reconstructed 3D surface model. Our goal is a system that uses model priors learned from prerecorded facial shape data to create natural facial shapes that match anthropometric constraints specified by the user. The system can be used to generate a complete surface mesh given only a succinct specification of the desired shape, and it can be used by expert and novice alike to create synthetic 3D faces for myriad uses.

1.1. Background and Previous Work. A large body of literature on modeling and animating faces has been published in the last three decades. A good overview can be found in the textbook [6] and in the survey [7]. In this work, we focus on modeling static face geometry. In this context, several approaches have been proposed. They can be roughly classified into the creative approach and the reconstructive approach.

The creative approach is to facilitate manual specification of the new face model by a user. Parametric face models [8-11] and many commercial modelers fall into this approach. The desire is to create an encapsulated model that can generate a wide range of faces based on a small set of input parameters. Such models provide full control over the result, including the ability to produce cartoon effects, and offer highly efficient geometric manipulation. However, manual parameter tuning without geometric constraints from real human faces makes generating realistic faces difficult and time-consuming. Moreover, the choice of the parameter set depends on the face mesh topology, and therefore the manual association of a group of vertices to a specific parameter is required.

The reconstructive approach is to extract face geometry from the measurement of a living subject. In this category, image-based techniques [12-18] utilize an existing 3D face model and information from a few pictures (or video streams) to reconstruct the face geometry. Although this kind of technique can provide reconstructed face models easily, its drawbacks are inaccurate geometry reconstruction and the inability to generate new faces that have no image counterparts. Another limiting factor of this technique is that it gives very little control to the user.

With a significant increase in the quality and availability of 3D capture methods, a common approach towards creating face models uses laser range scanners to acquire both the face geometry and texture simultaneously [19-22]. Although the acquired face data is highly accurate, substantial effort is unfortunately needed to process the noisy and incomplete data into a model suitable for modeling or animation. In addition, the result of this effort is a model corresponding to a single individual, and each new face must be found on a subject; the desired face may not even physically exist. Furthermore, the user does not have any control over the captured model to edit it in a way that produces a novel face.

Besides these approaches, DeCarlo et al. [23] construct a range of face models with realistic proportions using a variationally constrained optimization technique. However, without the use of model priors, their method cannot generate natural models unless the user accurately specifies a very detailed set of constraints. Also, this approach requires minutes of computation for the optimization process to generate a face model. Blanz and Vetter [24] present a process for estimating the shape of a face from a single photograph. This is extended by Blanz et al. [25], who present a set of controls for intuitive manipulation of facial attributes. In contrast to our work, they manually assign attribute values to characterize the face shape and devise attribute controls using linear regression. Vlasic et al. [26] use multilinear face models to study and synthesize variations in faces along several axes, such as identity and expression. An interface for gradient-based face space navigation has been proposed in [27]. Principal components that are not intuitive to users are used as navigation axes in face space, and facial features cannot be controlled individually; the authors focus on a comparison of different user interfaces.

Several commercial systems for generating composite facial images are available [28-30]. Although they are effective to use, a 2D face composite still lacks some of the advantages of a 3D model, such as complete freedom of viewpoint and the ability to be combined with other 3D graphics. Additionally, to our knowledge, no commercial 2D composite system available today supports automatic completion of unspecified facial regions according to statistical properties. FaceGen 3 [31] is the only existing system that we have found to be similar to ours in functionality. However, there is not much information available about how this is achieved. As far as we know, it is built on [24], and the face mesh is not divided into independent regions for localized deformation. In consequence, editing operations on individual facial features tend to affect the whole face.

1.2. Our Approach. In this paper, we present a new method for interactively generating facial models from user-specified anthropometric parameters while matching the statistical properties of a database of scanned models. Figure 1 shows a block diagram of the system architecture. We use a three-step model fitting approach for the 3D registration problem. By bringing scanned models into full correspondence with each other, the shape variation is represented using principal component analysis (PCA), which induces a low-dimensional subspace of facial feature shapes. We explore the space of probable facial feature shapes using high-level control parameters. We parameterize the example models using face anthropometric measurements, and predefine the interpolation functions for the parameterized example models. At runtime, the interpolation functions are evaluated to efficiently generate the appropriate feature shapes, taking the anthropometric parameters as input. Apart from an initial tuning of feature point positions, our method works fully automatically. We evaluate the performance of our method with cross-validation tests. We also compare our method against optimization in the PCA subspace for generating facial feature shapes from constraints of the ground-truth data.

[Figure 1: Overview of the interactive face shape synthesis system. Offline processing: example scanned models -> model fitting -> conformed face meshes with correspondences -> anthropometric measurement space and PCA subspace projections -> measurement and PCA shape parameters -> RBF network training. Runtime application: anthropometric parameters -> RBF interpolation network -> subregion blending -> synthesized face shapes.]

In addition, the anthropometry-based face synthesis method, combined with our database of statistics for a large number of subjects, opens ground for a variety of applications. Chief among these is analysis of the facial features of different races. Second, the user can transfer facial features from one individual to another. This allows a plausible new face to be quickly generated by composing different features from multiple faces in the database. Third, the user can adapt the face model to a particular population group by synthesizing characteristic facial features from extracted statistics. Finally, our method allows for compression of data, enabling us to share statistics with the research community for further study of faces.

Unlike a previous approach [23], we utilize prior knowledge of the face shape in relation to the given face measurements to regulate the naturalness of modeled faces. Moreover, our method efficiently generates a new face with the desired shape within a second. Our method also differs significantly from the approach presented in [24, 25] in several respects. First, they manually assign attribute values to the face shape and devise a separate attribute control for each attribute using linear regression. We automatically compute the anthropometric measurements for face shape and relate several attribute controls simultaneously by learning a mapping between the anthropometric measurement space and the feature shape space through scattered data interpolation. Second, they use a 3D variant of a gradient-based optical flow algorithm to derive the point-to-point correspondence between scanned models. This approach does not work well for faces of different races or under different illumination, given the inherent problem of using static textures. We present a robust method of determining correspondences that does not depend on texture information. Third, their method tends to control the global face and requires additional constraints to restrict the effect of editing operations to a local region. In contrast, our method guarantees local control thanks to its feature-based nature.

The main contributions of our work are as follows.

(i) A general, controllable, and practical system for face modeling and editing. Our method estimates high-level control models in order to infer a particular face from intuitive input controls. As correlations between control parameters and the face shape are estimated by exploiting the real faces of individuals, our method regulates the naturalness of synthesized faces. Unspecified parts of the synthesized facial features are automatically completed according to statistical properties.

(ii) A new algorithm which uses intuitive attribute parameters of facial features to navigate face space. Our system provides sets of comprehensive anthropometric parameters to easily control face shape characteristics, taking into account the physical structure of real faces.

(iii) A robust, automatic model fitting approach for establishing correspondences between scanned models.

(iv) An automatic runtime synthesis that is efficient in time complexity and performs fast.

The remainder of this paper is organized as follows: Section 2 presents the face data we use. Section 3 elaborates on the model fitting technique. Section 4 describes the construction of local shape spaces. The face anthropometric parameters used in our work are illustrated in Section 5. Sections 6 and 7 describe our techniques of feature-based shape synthesis and subregion blending, respectively. After presenting and explaining the results in Section 8, we present a variety of applications of our approach in Section 9. Section 10 gives concluding remarks and our future work.

2. Scanned Data and Preprocessing

We use the USF face database [32], which consists of Cyberware scans of 186 subjects with a mixture of gender, race, and age. The age of the subjects ranges from 17 to 68 years, and there are 126 male and 60 female subjects. Most of the subjects are Caucasian (129), with African-Americans making up the second largest group (37) and Asians the smallest group (20). All faces are without makeup and accessories. The laser scans provide face structure data containing approximately 180k surface points and a 360x524 reflectance (RGB) image for texture mapping (see Figures 2(a) and 2(b)). We also use a generic head model which consists of 1,092 vertices and 2,274 triangles. Prescribed colors are added to each triangle to form a smooth-shaded surface (see Figure 2(c)).

[Figure 2: Face data: (a) scanned face geometry; (b) texture-mapped face scan; (c) generic model.]

Let each 3D face scan in the database be S_i (i = 1, ..., M). Since the number of vertices in S_i varies, we resample all faces in the database so that they have the same number of vertices, all in mutual correspondence. Feature points are identified semi-automatically to guide the resampling; Figure 3 depicts the process. As illustrated in Figure 3(a), a 2D feature mask consisting of polylines groups a set of 86 feature points that correspond to the feature point sets of the MPEG-4 Facial Definition Parameters (FDPs) [33]. The feature mask is superimposed onto the front-view face image obtained by orthographic projection of a textured 3D face scan into an image plane. The facial features in this image are identified using Active Shape Models (ASMs) [34], and the feature mask is fitted to the features automatically. The 2D feature mask can also be manipulated interactively: a little user interaction is needed to tune the feature point positions due to the slight inaccuracy of the automatic facial feature detection, but this is restricted to slight corrections of wayward feature points. The 3D positions of the feature points on the scanned surface are then recovered by back-projection to 3D space. In this way, we efficiently define a set of feature points on a scanned model S_i as U_i = {u_{i,1}, ..., u_{i,n}}, where n = 86. Our generic model G is already tagged with the corresponding set of feature points V = {v_1, ..., v_n} by default.

[Figure 3: Semi-automatic feature point identification: (a) initial outline of the feature mask; (b) after automatic facial feature detection; (c) after interactive tuning; (d) and (e) 3D feature points identified on the scanned model and the generic model.]

3. Model Fitting

3.1. Global Warping. The problem of deriving full correspondence for all models S_i can be stated as: resample the surface of each S_i using G under the constraint that v_j is mapped to u_{i,j}. The displacement vector d_{i,j} = u_{i,j} - v_j is known for each feature point v_j on the generic model and u_{i,j} on the scanned surface. These displacements are utilized to construct an interpolating function that returns the displacement for each generic mesh vertex:

    f(x) = Σ_{j=1}^{n} w_j φ(‖x − v_j‖) + Mx + t,    (1)

where x ∈ R³ is a vertex on the generic model, ‖·‖ denotes the Euclidean norm, and φ is a radial basis function. w_j, M, and t are the unknown parameters. Among them, w_j ∈ R³ are the interpolation weights, M ∈ R³ˣ³ represents rotation and scaling transformations, and t ∈ R³ represents the translation transformation.

Different functions for φ(r) are available [35]. We had better results with the multiquadric function φ(r) = √(r² + ρ²), where ρ is the locality parameter used to control how the basis function is influenced by neighboring feature points; ρ is determined as the Euclidean distance to the nearest other feature point. To determine the weights w_j and the affine transformation parameters M and t, we solve the following equations:

    d_{i,j} = f(v_j) (j = 1, ..., n),    Σ_{j=1}^{n} w_j = 0,    Σ_{j=1}^{n} w_jᵀ v_j = 0.    (2)

This system of linear equations is solved using LU decomposition to obtain the unknown parameters. Using the predefined interpolation function given in (1), we calculate the displacement vectors of all vertices to deform the generic model.

3.2. Local Deformation. The warping with a small set of correspondences does not produce a perfect surface match. We further improve the shape using a local deformation which fits the globally warped generic mesh G̃ to the scanned model S_i by iteratively minimizing the distance from the vertices of G̃ to the surface of S_i. To optimize the positions of the vertices of G̃, the local deformation process minimizes an energy function:

    E(G̃) = E_ext(G̃, S_i) + E_int(G̃),    (3)

where E_ext stands for the external energy and E_int the internal energy.

The external energy term E_ext attracts the vertices of G̃ to their closest compatible points on S_i. It is defined as

    E_ext(G̃, S_i) = Σ_{j=1}^{N_G} ζ_j ‖x_j − s_j‖²,    (4)

where N_G is the number of vertices on the generic mesh, x_j is the jth mesh vertex, and s_j is the closest compatible point of x_j on S_i. The weights ζ_j measure the compatibility of the points on G̃ and S_i. As G̃ closely approximates S_i after the global warping procedure, we consider a vertex on G̃ and a point on S_i to be highly compatible if the surface normals at the two points have similar directions. Hence, we define ζ_j as

    ζ_j = n(x_j)·n(s_j)  if n(x_j)·n(s_j) > 0,  and  ζ_j = 0  otherwise,    (5)

where n(x_j) and n(s_j) are the surface normals at x_j and s_j, respectively. In this way, dissimilar local surface patches are less likely to be matched; for example, front-facing surfaces will not be matched to back-facing surfaces. To accelerate the minimum-distance calculation, we precompute a hierarchical bounding box structure for S_i so that the closest triangles are checked first.

The transformations applied to the vertices within a region of the surface may differ from each other considerably, resulting in a non-smoothly deformed surface. To enforce local smoothness of the mesh, the internal energy term E_int is introduced as follows:

    E_int(G̃) = Σ_{j=1}^{N_G} Σ_{k∈Ω_j} ‖(x_j − x_k) − (x̊_j − x̊_k)‖²,    (6)

where Ω_j is the set grouping all neighboring vertices x_k that are linked by edges to x_j, and x̊_j and x̊_k are the original positions of x_j and x_k before local deformation. Including this energy term constrains the deformation of the generic mesh and keeps the optimization from converging to a solution far from the initial configuration.

Minimizing E(G̃) is a nonlinear least-squares problem, and the optimization is performed using L-BFGS-B, a quasi-Newton solver [36]. The optimization stops when the difference between E at the previous and current iterations drops below a user-specified threshold. After the local deformation, each mesh vertex takes the texture coordinates associated with its closest scanned data point for texture mapping. Finally, we construct surface details in a hierarchical manner by taking advantage of the quaternary subdivision scheme and normal mesh representation [37]. Figure 4 shows the results of model fitting. A spatial correspondence is thus established by the generated normal meshes.

[Figure 4: Model fitting: (a) deformed generic mesh after model fitting; (b) scanned model; (c) texture mapping of the deformed generic mesh.]

4. Forming Feature Shape Spaces

We perceive the face as a set of features. In this work, the global face shape is also regarded as a feature. Since all face scans are in correspondence through mapping onto the generic model, it is sufficient to define the feature regions on the generic model. We manually partition the generic model into four regions: eyes, nose, mouth, and chin.
This segmentation is transferred to all normal meshes to generate individualized feature shapes with correspondences (see Figure 5). In order to isolate the shape variation from the position variation, we normalize all scanned models with respect to the rotation and translation of the face before the model fitting process.

[Figure 5: Four facial features decomposed from the level-2 normal mesh.]

We form a shape space for each facial feature using PCA. Given the set Γ = {F} of features, let {F_i}, i = 1, ..., M, be a set of example meshes of a feature F, each mesh being associated with one of the M scanned models in the database. These meshes are represented as vectors that contain the x, y, z coordinates of N vertices: F_i = (x_1^i, y_1^i, z_1^i, ..., x_N^i, y_N^i, z_N^i) ∈ R^{3N}. The average over the M example meshes is given by ψ_0 = (1/M) Σ_{i=1}^{M} F_i. Each example mesh differs from the average by the vector dF_i = F_i − ψ_0. We arrange the deviation vectors into a matrix C = [dF_1, dF_2, ..., dF_M] ∈ R^{3N×M}. PCA of the matrix C yields a set of M uncorrelated eigenvectors ψ_i and their corresponding eigenvalues λ_i. The eigenvectors are sorted in decreasing order of their eigenvalues. Every example model can be regenerated using

    F(α_i) = ψ_0 + Σ_{j=1}^{K} α_{ij} ψ_j,    (7)

where 0 < K < M and α_{ij} = (F_i − ψ_0)·ψ_j are the coordinates of the example mesh in terms of the reduced eigenvector basis. We choose K such that Σ_{i=1}^{K} λ_i ≥ τ Σ_{i=1}^{M} λ_i, where τ defines the proportion of the total shape variation retained (98% in our experiments). In this model each eigenvector is a coordinate axis; we call these axes eigenmeshes.

5. Anthropometric Parameters

Face anthropometry provides a set of meaningful measurements, or shape parameters, that allow the most complete control over the shape of the face. Farkas [5] describes a widely used set of measurements to characterize the human face. The measurements are taken between landmark points defined in terms of visually identifiable or palpable features on the subject's face, using carefully specified procedures and measuring instruments. Such measurements use a total of 47 landmark points for describing the face. As described in Section 2, each example in our face scan database is equipped with 86 landmarks. Following the conventions laid out in [5], we have chosen a subset of 38 landmarks for anthropometric measurements (see Figure 6).

[Figure 6: Head geometry with anthropometric landmarks (green dots). The landmark names are taken from [5].]

Farkas [5] describes a total of 132 measurements on the face and head. Instead of supporting all 132 measurements, we are only concerned with those related to five facial features (including the global face outline). In this paper, 68 anthropometric measurements are chosen as shape control parameters. As an example, Table 1 lists the nasal measurements used in our work. The example models are placed in the standard posture for anthropometric measurements; in particular, the axial distances correspond to the x, y, and z axes of the world coordinate system. Such a systematic collection of anthropometric measurements is taken through all example models in the database to determine their locations in a multi-dimensional measurement space.

Table 1: Anthropometric measurements of the nose.

    Landmarks    Measurement name         Landmarks    Measurement name
    mf-mf        Nasal root width         n-pm         Nasal bridge length
    al-al        Nose width               al-pm        Ala surface length
    sbal-sbal    Alar base width          al-sn        Alar point-subnasale length
    sbal-sn      Nostril floor width      n-pm         Inclination of the nasal bridge
    sn-pm        Nasal tip protrusion     sn-prn       Inclination of the columella
    en-se        Nasal root depth         al-pm        Inclination of the alar-slope line
    en-se        Nasal root slope         n-se-pm      Nasofrontal angle
    al-pm        Ala length               al-pm-al     Ala-slope angle
    al-mf        Nasal bridge angle       se-pm-sn     Nasal tip angle
    n-sn         Nose height              pm-sn-ls     Nasolabial angle

6. Feature Shape Synthesis

From the previous stage we obtain a set of examples of each facial feature with measured shape characteristics, each of them consisting of the same set of dimensions, where every dimension is an anthropometric measurement. The example measurements are normalized. Generally, we assume that an example model F_i of feature F has m dimensions, where each dimension is represented by a value in the interval (0, 1]; a value of 1 corresponds to the maximum measurement value of the dimension. The measurements of F_i can then be represented by the vector

    q_i = (q_{i1}, ..., q_{im}),  ∀j ∈ [1, m]: q_{ij} ∈ (0, 1].    (8)

This is equivalent to projecting each example model F_i into a measurement space spanned by the m selected anthropometric measurements. The location of F_i in this space is q_i.

With the input shape control thus parameterized, our goal is to generate a new deformation of the facial feature by computing the corresponding eigenmesh coordinates under control of the measurement parameters. Given an arbitrary input measurement vector q in the measurement space, such a controlled deformation should interpolate the example models. To do this we interpolate the eigenmesh coordinates of the example models, obtaining a smooth mapping over the measurement space. Our feature shape synthesis problem is thus transformed into a scattered data interpolation problem. Again, RBFs are employed. Given the input anthropometric control parameters, a novel output model with the desired shapes of facial features is obtained at runtime by blending the example models; Figure 7 illustrates this process. Our scheme first evaluates the predefined RBFs at the input measurement vector and then computes the eigenmesh coordinates by blending those of the example models with respect to the produced RBF values and precomputed weight values. Finally, the output model with the desired feature shape is generated by evaluating the shape reconstruction model (7) at those eigenmesh coordinates. Note that there exist as many RBF-based interpolation functions as the number of eigenmeshes.

The interpolation is multi-dimensional. Considering an R^m → R mapping, the interpolated eigenmesh coordinates a_j(·) ∈ R, 1 ≤ j ≤ K, at an input measurement vector q ∈ R^m are computed as

    a_j(q) = Σ_{i=1}^{M} γ_{ij} R_i(q)  for 1 ≤ j ≤ K,    (9)

where the γ_{ij} ∈ R are radial coefficients and M is the number of example models. Let q_i (1 ≤ i ≤ M) be the measurement vector of an example model. The radial basis function R_i(q) is a multiquadric function of the Euclidean distance between q and q_i in the measurement space:

    R_i(q) = √(‖q − q_i‖² + ρ_i²)  for 1 ≤ i ≤ M,    (10)

where ρ_i is the locality parameter used to control the behavior of the basis function, determined as the Euclidean distance between q_i and the closest other example measurement vector.
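The eigenmesh coordinates interpolated here come from the PCA construction of Section 4. That construction can be sketched in a few lines of NumPy (an illustrative reimplementation with hypothetical function names, assuming the example meshes are already in full correspondence):

```python
import numpy as np

def build_feature_space(F, tau=0.98):
    """PCA shape space of Section 4, Eq. (7): F(alpha) = psi0 + sum_j alpha_j psi_j.
    F : (M, 3N) example meshes of one facial feature, flattened vertex coordinates.
    Keeps the K leading eigenmeshes covering a fraction tau of the total variance."""
    psi0 = F.mean(axis=0)
    # Thin SVD of the deviation matrix C = [dF_1 ... dF_M] avoids forming the
    # 3N x 3N covariance; singular values come out sorted in decreasing order.
    U, s, _ = np.linalg.svd((F - psi0).T, full_matrices=False)
    lam = s**2                                                 # eigenvalues lambda_i
    K = int(np.searchsorted(np.cumsum(lam) / lam.sum(), tau)) + 1
    return psi0, U[:, :K], lam[:K]

def project(Fi, psi0, Psi):
    """Eigenmesh coordinates alpha_ij = (F_i - psi0) . psi_j."""
    return Psi.T @ (Fi - psi0)

def reconstruct(alpha, psi0, Psi):
    """Evaluate Eq. (7) at the given eigenmesh coordinates."""
    return psi0 + Psi @ alpha
```

Selecting K from the cumulative eigenvalue sum implements the tau = 98% variance criterion; projecting an example and reconstructing it recovers the mesh up to the truncated variance.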
feature region that exerts influence on it, xF, is the one of The jtheigenmeshcoordinateoftheithexamplemodel, minimaldistancetoit.Itisdesirabletousegekiodesicdistance aij, corresponds to the measurement vector of the ith on the surface, rather than Euclidean distance to measure example model, qi. Equation (9) should be satisfied for qi the relative positions of two mesh vertices. We adopt an andaij (1≤i≤M): approximationofthegeodesicdistancebasedonacylindrical projectionwhichispreferableforregionscorrespondingtoa (cid:2)M (cid:15) (cid:16) volumetricsurface(e.g.,thehead).Theideaisthatdistance a = γ R q for1≤ j ≤K. (11) ij ij i i betweentwoverticesontheprojectedmeshinthe2Dimage i=1 planeisafairapproximationofgeodesicdistance.Thus,xF Theradialcoefficientsγij areobtainedbysolvingthislinear isobtainedas: ki system using an LU decomposition. We can then generate (cid:4)(cid:4) (cid:4)(cid:4) (cid:4)(cid:4) (cid:4)(cid:4) the eigenmesh coordinates, hence the shape, corresponding (cid:4)xj−xkFi(cid:4)G≈min{i|i∈VF}(cid:4)x∗j −xi∗(cid:4), (12) totheinputmeasurementvectorqaccordingto(9). wherex∗andx∗arethepositionsofverticesontheprojected i j mesh, and (cid:3)·(cid:3) denotes the geodesic distance. Note that 7.SubregionShapeBlending thedistanceismGeasuredofflineintheoriginalundeformed genericmesh.Foreachnon-featurevertexx ,itspositionis After the shape interpolation procedure, the surrounding j updatedinshapeblendingas: facial areas should be blended with the deformed internal (cid:2) (cid:17) (cid:4) (cid:4) (cid:18)(cid:4) (cid:4) facial features to generate a seamlessly smooth face mesh. 1(cid:4) (cid:4) (cid:4) (cid:4) x(cid:8) =x + exp − (cid:4)x −xF(cid:4) (cid:4)x(cid:8)F−xF(cid:4), (13) The position of a vertex xi in the feature region F after j j α j ki G ki ki deformation is x(cid:8). Let V denote the set of vertices of the F∈Γ i head mesh. 
For smooth blending, positions of the subset whereΓisthesetoffacialfeaturesandαcontrolsthesizeof V = V \ V of vertices of V that are not inside the the region influenced by the blending. We set α to 1/10 of F F feature region should be updated with deformation of the thediagonallengthoftheboundingboxoftheheadmodel. 8 InternationalJournalofComputerGamesTechnology RBF-based interpolation q1 a1 Projection a1Φ1 Φ0 ... ... ... qi aj Projection ajΦj (cid:2) feNaetuwr e shape ... ... ... qm aK Projection aKΦK Figure7:Generatinganewfacialfeatureshapebyblendingexam- plemodelsthroughinterpolationoftheireigenmeshcoordinates. Figure9:GUIofoursystem. Table2:Detailsofthedatausedinoursystem.Misthenumberof examples,Nisthenumberofmeshvertices(thenumberoforiginal dimensions equals 3N), K is the number of reduced dimensions ofthePCAspace,andmisthenumberofanthropometriccontrol (a) (b) parameters. Figure8:Synthesisofthenoseshape:(a)Withoutshapeblending, Fullhead Eyes Nose Mouth Chin theobviousgeometricdiscontinuitiesaroundtheboundaryofthe M 186 186 186 186 186 nose region impair realism of the synthesis to a large extent. (b) N 16192 2914 1782 2105 643 Using our approach, the geometries of the feature region and K 34 23 26 20 18 surroundingareasaresmoothlyblendedaroundtheirboundary. m 16 13 20 12 7 Figure8(b) shows the effect of our shape blending scheme Figure10 illustrates a number of distinct facial shapes employedinsynthesizingthenoseshape. synthesized to satisfy user-specified local shape constraints. Clear differences are found in the width of the nose alar 8.Results wings, the straightness of the nose bridge, the inclination ofthenosetip,theroundnessofeyes,thedistancebetween Ourmethodhasbeenimplementedinaninteractivesystem eyebrows and eyes, the thickness of mouth lips, the shape withC++/OpenGL,wheretheusercanselectfacialfeatures of the lip line, the sharpness of the chin, and so forth. A to work on interactively. 
A GUI snapshot is shown in Figure 9. Our system starts with a mean model, which is computed as the average of the 186 meshes of the RBF-warped models and is textured with the mean cylindrical full-head texture image [38]. The system also allows the user to select the desired feature(s) from a database of pre-constructed typical features, shown as small icons at the upper left of the GUI. Upon selecting a feature from the database, the feature is imported seamlessly into the displayed head model and can be edited further if needed. The slider positions for each of the available features in the shape database are stored by the system, so that their configuration can be restored whenever the feature is chosen. Such a feature-importing mode enables coarse-to-fine modification of features, making the face synthesis process less tedious. We invited several student users, who lack a graphics professional's modeling background, to create face models using our system; these laymen appreciated the intuitiveness and continuous variability of the control sliders.

A morphing can be generated by varying the shape parameters continuously, as shown in Figures 10(b) and 10(c). In addition to starting with the mean model, the user may also select the head model of a specific person from the example database for further editing. Figure 11 illustrates face editing results on the models of two individuals for various user-intended characteristics.

To quantify the performance, we arbitrarily selected ten examples in the database for cross-validation. Each example was excluded from the example database when training the face synthesis system, and its shape measurements were used as a test input to the system. The output model was then compared against the original model; Figure 12 shows a visual comparison. We assess the reconstruction by measuring the maximum, mean, and root mean square (RMS) errors from the feature regions of the output model to those of the input model.
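The maximum, mean, and RMS reconstruction errors used in this cross-validation can be computed as follows. This is a minimal sketch assuming a one-to-one vertex correspondence between the ground-truth and synthesized meshes (reasonable here, since both derive from the same generic mesh); names are ours, not the system's.

```python
import math

def vertex_errors(ground_truth, synthesized):
    """Per-vertex Euclidean distances between corresponding vertices."""
    return [math.dist(g, s) for g, s in zip(ground_truth, synthesized)]

def error_stats(ground_truth, synthesized, bbox_diag=None):
    """Maximum, mean, and RMS reconstruction errors over a feature region.

    If bbox_diag (the diameter of the head model's bounding box) is
    given, each error is also reported as a percentage of it, matching
    the two ways the errors are tabulated in the paper.
    """
    d = vertex_errors(ground_truth, synthesized)
    stats = {
        "max": max(d),
        "mean": sum(d) / len(d),
        "rms": math.sqrt(sum(e * e for e in d) / len(d)),
    }
    if bbox_diag:
        stats.update({k + "_pct": 100.0 * v / bbox_diag
                      for k, v in list(stats.items())})
    return stats
```

Averaging these statistics over the ten held-out reconstructions yields figures comparable to those reported in Table 3.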
Table 2 shows the details of the data sets. The 3D errors are computed as the Euclidean distance between each vertex of the ground-truth model and the corresponding vertex of the synthesized model. Table 3 shows the average errors measured for the ten reconstructed models. The errors are given both as absolute measures (mm) and as a percentage of the diameter of the bounding box of the output head model.

Figure 10: (a) New faces synthesized from the average model (leftmost) with global and local shape variations. (b) and (c) Face shape morphing (left to right in each example).

Figure 11: Feature-based face editing on the models of two individuals. In each example, the original model is shown at the top left.

Figure 12: Comparison of an original model (left in each view) and a synthesized model (right in each view) in cross-validation.

Table 3: Cross-validation results of our 3D face synthesis system.

               Eyes           Nose           Mouth          Chin
Average max.   3.85 (0.91%)   2.55 (0.84%)   2.86 (0.94%)   4.46 (1.06%)
Average mean   2.57 (0.57%)   1.62 (0.38%)   2.04 (0.49%)   2.25 (0.53%)
Average RMS    3.62 (0.86%)   2.23 (0.53%)   2.84 (0.67%)   3.14 (0.74%)

We compare our method against the approach of optimization in the PCA space (Opt-PCA). Opt-PCA performs optimization to estimate the weights of the eigen-model (7). It starts from the mean model, on which the anthropometric landmarks are in their source positions; the corresponding target positions of these landmarks are the landmark positions on the example model. We then optimize the mesh shape in the subspaces of facial features using the downhill simplex algorithm, such that the sum of the distances between the source and target positions of all landmarks is minimized. Table 4 shows the comparison between our method and Opt-PCA. Opt-PCA produces a large error, since the number of landmarks is small and insufficient to fully determine the weights of the eigen-model. Opt-PCA is also slow, since many PCA weights must be optimized iteratively.

Table 4: Comparison of our method with the optimization approach. Each value is an average of ten trials with different example models.

                   Opt-PCA                         Our method
                   Eyes   Nose   Mouth   Chin      Eyes   Nose   Mouth   Chin
Mean error (mm)    2.83   3.27   3.84    6.65      2.57   1.62   2.04    2.25
Time (s)           34.8   21.5   23.5    5.3       0.4    0.5    0.4     0.3

Our system runs on a 2.8 GHz PC with 1 GB of RAM. Table 5 shows the time cost of the different procedures. At runtime, our scheme spends less than one second generating a new face shape upon receiving the input parameters. This includes the time for evaluating the RBF-based interpolation functions and for the shape blending around the feature region boundaries.

Table 5: Time consumed by the different processes of the system implementation. For some processes (shown in italics in the paper), the time spent per example is shown. Notation: time consumed in interactive operation (TI), time consumed in automatic computation (TA).

Process                                     TI            TA
Offline processing
  Feature point identification              3–5 minutes   6 seconds
  Global warping                            N/A           2 seconds
  Local deformation                         N/A           4 minutes
  Multi-resolution model generation         N/A           5 seconds
  Computing eigenmeshes by PCA              N/A           2 hours
  Computing eigenmesh coordinates           N/A           0.5 seconds
  Computing anthropometric measurements     N/A           0.2 seconds
  LU decomposition                          N/A           2 minutes
Runtime
  Feature shape synthesis                   N/A           0.6 seconds

9. Applications

Apart from creating plausible 3D face models from users' descriptions, our feature-based face reconstruction approach is useful for a range of other applications. The statistics of facial features allow analysis of their shapes, for instance, to discern differences between groups of faces. They also allow synthesis of new faces for applications such as facial feature transfer between different faces and adaptation of the model to local populations. Moreover, our approach allows for compression of 3D face data, enabling us to share statistics with other researchers for the synthesis and further study of high-resolution faces.

9.1. Analyzing the Shape of Facial Features. As the first application, we consider analysis of the shape of facial features, which is useful for the classification of face scans. We wish to gain insight into how facial features change with personal characteristics by comparing statistics between groups of faces. We calculate the mean and standard deviation of the anthropometric measurements for each facial feature of different groups. The morphometric differences between groups are visualized by comparing the statistics of each facial feature in a diagram. We follow this approach to study the effects of race and gender.

Race. To investigate how the shape of facial features changes with race, we compare three groups of 18–30-year-old Caucasian (72 subjects), Mongolian (18 subjects), and Negroid (26 subjects) subjects, each divided almost equally between the genders. The group statistics are shown in Figure 13, colored blue, green, and red, respectively. They show that the Caucasian nose is narrow, the Mongolian nose is medial, and the Negroid nose is wide. The statistics indicate a relatively protruding, narrow nose in the Caucasian group; the Mongolian nose is less protruding and wider, and the Negroid nose has the smallest protrusion. The nasal root depth and the nasofrontal angle are the largest for the Caucasian group, exhibiting significant differences compared with the smaller Negroid and smallest Mongolian values. This suggests a high nasal root in Caucasians and a relatively flat nasal root in Negroids and Mongolians. Significant differences among the three races are also found in the inclination of the columella and the nasal tip angle, indicating the hooked nose in Caucasians and the snub nose in Mongolians and Negroids.

For the eyes, the main characteristics of the Caucasian group are the largest eye fissure height and the smallest intercanthal width and eye fissure inclination angle. These suggest that Caucasian eyes typically have larger openings with horizontally aligned inner and external eye corners. The Mongolian group has the largest intercanthal width, the greatest eye fissure inclination, the shortest eye fissure, and the smallest eye fissure height, which indicate relatively small eye openings separated by a large horizontal distance, with the inner eye corners positioned lower than the external ones. The Negroid group has the largest eye fissure length and binocular width, which denote the relatively wide eyes of this group.

As shown in Figure 13(c), many measurements of the Negroid mouth (e.g., mouth width, upper and lower lip height, and upper and lower vermilion height) are the largest among the three races; they are significantly different from those of the Caucasian and Mongolian groups. The Mongolian group has a relatively narrow mouth and thin lips. In the Caucasian group, the skin portions of the upper and lower lips and their vermilion heights are the smallest. However, the proportions of the upper and lower lip heights reveal a similarity across the three races.

From the statistics illustrated in Figure 13(d), the Negroid chin is characterized by a long vertical profile dimension and a small width. The smallest inclination of the chin from the vertical and the largest mentocervical angle also indicate a less protruding chin in the Negroid group.
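The per-group mean and standard deviation statistics that underlie these comparisons can be sketched as follows; the data layout and names are hypothetical, not the system's.

```python
from statistics import mean, stdev

def group_stats(subjects):
    """Per-group mean and standard deviation of each anthropometric
    measurement, as used for the race/gender comparisons.

    subjects: list of (group_label, {measurement_name: value}) pairs.
    Returns {group: {measurement: (mean, std)}}.
    """
    by_group = {}
    for group, meas in subjects:
        by_group.setdefault(group, []).append(meas)
    stats = {}
    for group, rows in by_group.items():
        names = rows[0].keys()
        stats[group] = {
            n: (mean(r[n] for r in rows),
                stdev(r[n] for r in rows) if len(rows) > 1 else 0.0)
            for n in names
        }
    return stats
```

Plotting these (mean, std) pairs per measurement, with one color per group, reproduces the style of comparison diagram described for Figure 13.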

