Providing Long-Term Participation Incentive in Participatory Sensing

Lin Gao, Fen Hou, and Jianwei Huang

Abstract—Providing an adequate long-term participation incentive is important for a participatory sensing system to maintain a sufficient number of active users (sensors), so as to collect enough data samples and support a desired level of service quality. In this work, we consider the sensor selection problem in a general time-dependent and location-aware participatory sensing system, taking the long-term user participation incentive into explicit consideration. We study the problem systematically under different information scenarios, regarding both future information and current information (realization). In particular, we propose a Lyapunov-based VCG auction policy for the on-line sensor selection, which converges asymptotically to the optimal off-line benchmark performance, even with no future information and under (current) information asymmetry. Extensive numerical results show that our proposed policy outperforms the state-of-the-art policies in the literature, in terms of both user participation (e.g., reducing the user dropping probability by 25%∼90%) and social performance (e.g., increasing the social welfare by 15%∼80%).

I. INTRODUCTION

A. Background and Motivations

The proliferation of mobile devices (e.g., smartphones) with rich embedded sensors has led to a revolutionary new sensing paradigm, often known as Participatory Sensing [1]–[3], in which mobile users voluntarily participate in and actively contribute to a sensing system using the smartphones they carry. Due to the low deployment cost and high sensing coverage, this new paradigm has attracted a wide range of applications in environment, infrastructure, and community monitoring (e.g., air pollution [4]–[6], wireless signal strengths [7]–[9], road traffic [10]–[12], and parking [13], [14]).

A typical participatory sensing system architecture usually consists of a service platform (also called the service provider) residing in the cloud and a collection of mobile smartphone users [16]–[18]. The service provider launches a set of sensing tasks with different sensing requirements for different purposes, and mobile users actively subscribe to (participate in) one or multiple sensing tasks. In this work, we focus on an important type of participatory sensing scheme called server-initiated sensing, where the service provider selects a specific set of participating smartphones to perform the sensing task, depending on the spatio-temporal data requirement of the sensing task and the geographical locations of the participating users as well as their sensing capabilities. Compared with the user-initiated sensing scheme (where users actively decide when and where to sense), the server-initiated sensing scheme gives the service provider more control over when and where to collect the data and at what cost, and hence can better fit the requirements of sensing tasks. Clearly, the success of such a sensing system strongly relies on the active participation of users as well as their willingness to contribute their sensing capability and resources to the sensing tasks.

Although many participatory sensing applications have been proposed in [4]–[14], they simply assume that users voluntarily participate in the system to perform sensing tasks. In reality, however, users may not be willing to participate in the sensing system, as this will incur extra operational cost (e.g., battery energy expenditure and transmission expense). Moreover, many sensing tasks are location-aware and time-dependent, and involve spatio-temporal context. Sharing sensing data tagged with spatio-temporal context may reveal a lot of personal and sensitive information, which poses potential privacy threats to the participating users [15]. All of these bring the incentive issue to the fore.

Several recent works have been devoted to the incentive mechanism design issue in participatory sensing, mainly using pricing and auctions [17]–[25]. Most of them focus on compensating the user's direct sensing cost when the user is chosen as a sensor to perform a particular sensing task (e.g., in [17]–[23]), which we call the short-term sensing incentive. In practice, however, participating users may suffer a certain indirect cost even when not performing the sensing task.¹ In this case, the short-term sensing incentive may not be enough to guarantee the long-term continuous participation of users. Intuitively, if a user is rarely selected as a sensor (hence hardly receives the short-term sensing incentive), the user may lose interest in continuous participation and decide to drop out of the sensing system (e.g., shut down the sensing app on his smartphone). Without an adequate number of users participating in the system, however, the service provider may not be able to collect a sufficient amount of sensing data to support a desired service quality (e.g., missing the road traffic information in some areas).

To the best of our knowledge, [24] and [25] are the only results that explicitly study the long-term participation incentive in participatory sensing. To stimulate the continuous participation of users, Lee et al. in [24] and [25] introduce a virtual credit for lowering the bids of users who lost in the previous round of auction, hence increasing their winning probabilities in future auction rounds. However, they consider neither the truthfulness nor the optimality of the proposed auction. In this work, we will study the long-term participation incentive, jointly with the short-term sensing incentive, with rigorous truthfulness and optimality analysis.

This work is supported by ... L. Gao and J. Huang are with the Dept. of Information Engineering, The Chinese University of Hong Kong, HK. Email: {lgao,jwhuang}@ie.cuhk.edu.hk. F. Hou is with the Dept. of Electrical and Computer Engineering, University of Macau, Macau. Email: [email protected].

¹ For example, in a location-aware sensing task, users need to periodically report their locations to the service provider before the latter makes the sensor selection decision, which incurs certain energy and transmission costs.
B. Solutions and Contributions

Specifically, we consider a general location-aware, time-dependent participatory sensing system, where the data in different time slots and/or locations may have different values for the sensing tasks. Each participating user has the potential to sense a specific region (at a certain sensing cost) in a specific time slot, depending on his location and mobility pattern. Fig. 1 illustrates a snapshot of such a sensing system (in a particular time slot), where the sensing region of each user is denoted by the shaded area around the user. In such a system, the service provider selects (allocates) users as sensors to perform sensing tasks slot by slot. We focus on the following sensor selection/allocation problem for the service provider:

• Which users should be selected as sensors in each time slot, aiming at maximizing the social welfare and ensuring the long-term participation incentive of users?

Fig. 1. Location-Aware Participatory Sensing Model. (The figure shows mobile users in a gridded sensing area, each with a shaded sensing region around his location.)

The problem is challenging due to the following reasons. First, the overlap of different users' sensing regions makes their sensing activities possibly redundant (hence partially "conflicting" with each other). Second, the long-term participation incentive of users couples the sensor allocations in different time slots. Based on the above, our model and problem formulation capture the following important features of a participatory sensing system: (i) long-term participation incentive, (ii) time-dependent and location-aware sensing requirements, and (iii) partially conflicting sensing activities. As far as we know, this is the first work that systematically studies a participatory sensing problem with all of the above features.

We solve the above sensor selection problem under different information scenarios, regarding both future information (i.e., complete, stochastic, or no future information) and current information (i.e., symmetric or asymmetric). Specifically, with complete or stochastic future information, we formulate and solve an off-line sensor selection problem as a benchmark (where we assume that the current information is always symmetric). With no future information, we formulate and solve an on-line sensor selection problem: (i) under information symmetry, we propose a Lyapunov-based on-line sensor selection policy (Policy 1), which converges to the optimal off-line benchmark asymptotically; and (ii) under information asymmetry, we propose a Lyapunov-based VCG auction policy (Policy 2), which is truthful and meanwhile achieves the same asymptotically optimal performance as Policy 1.²

It is important to note that the key contributions of this work are not on the Lyapunov framework itself, but rather on the novel problem formulation and solution techniques. For more clarity, we list the main results in Table I, and summarize the key contributions as follows.

TABLE I. MAIN RESULTS IN THIS PAPER
  Future Info.          Current Info.   Solution                Performance       Section
  Complete/Stochastic   Symmetric       Off-line (Benchmark)    Optimal           III
  No Info               Symmetric       On-line Policy 1        Asymptotic Opt.   IV
  No Info               Asymmetric      On-line Policy 2        Asymptotic Opt.   V
• Novel Model and Problem Formulation: We study a general time-dependent and location-aware participatory sensing system, taking into consideration the important but less studied issue of long-term user participation incentive. We propose a simple yet representative formulation based on the allocation probability of each user to capture such an incentive.

• Multiple Information Scenarios: We study the optimal sensor selection problem under different information scenarios. In particular, we propose on-line sensor selection policies that converge to the asymptotically optimal performance, even with no future information and under information asymmetry.

• Performance Evaluations: We compare the proposed on-line policies with the state-of-the-art policies, and show that our proposed policies outperform the existing ones significantly, in terms of both user participation and social performance: (i) compared with the RADP-VPC policy proposed in [24], [25], our policies can reduce the user dropping probability by 25%∼50%, and increase the social welfare by 15%∼40%; (ii) compared with the Greedy/Random policies widely used in existing systems (e.g., [10]), our policies can reduce the user dropping probability by 70%∼90%, and increase the social welfare by 65%∼80%.

² Several recent works also studied on-line policies for sensing task allocation, considering the uncertainty of user arrivals [26], [27]. However, these works did not consider the user long-term participation incentive.

II. SYSTEM MODEL

We consider a location-aware participatory sensing system with a service provider (SP) and a set N ≜ {1, ..., N} of mobile smartphone users (participating in the system). The SP wants to collect specific data in a certain area (via participating users' smartphones) for specific tasks. Mobile users move randomly in and out of the desirable sensing area according to certain mobility patterns. As shown in Fig. 1, each user has the potential to sense a specific region in a certain period according to his location and mobility, and the whole sensing area A is divided into a set I = {1, ..., I} of grids.³ Each grid A_i, i ∈ I, is associated with a weight w_i[t], denoting the value of the data associated with grid A_i in each slot t. Obviously, such a data value is location-aware and time-dependent.

The SP requests data slot by slot, where each time slot ranges from minutes to hours depending on the temporal data requirements of the tasks. We consider the sensing operation in a long period consisting of a set T ≜ {1, ..., T} of T slots. At the beginning of each time slot, the SP selects (allocates) a set of users to perform the sensing task in that time slot, depending on factors such as the user locations and the data values. Let x_n[t] ∈ {0, 1} denote whether a user n is selected as a sensor in slot t, and x[t] ≜ {x_n[t], ∀n ∈ N} denote the sensor selection vector in slot t. We further denote x_n ≜ {x_n[t], ∀t ∈ T} as the allocation vector of user n in all time slots.

³ A grid is the minimum unit of sensing area at a particular location, e.g., a square of 100 m × 100 m, associated with a single piece of data at a particular time.
A. Mobile User Modeling

1) Sensing Region: Each mobile user has a certain sensing range in each time slot, mainly depending on his location and mobility pattern. In Fig. 1, the sensing region of each user is illustrated by the shaded area around the user. Let z_{n,i}[t] ∈ {0, 1} denote whether a grid A_i is located in the sensing range of user n in slot t. Then, the total sensing region of user n in each slot t can be defined as a vector: z_n[t] ≜ {z_{n,i}[t], ∀i ∈ I}. Note that when user n moves out of the desirable sensing area in time slot t, we can simply define z_{n,i}[t] = 0, ∀i ∈ I. As mobile users move randomly, the sensing region z_n[t] of each user n also changes randomly across time slots.

2) Sensing Value: When a user n is selected as a sensor in a slot t, i.e., x_n[t] = 1, he performs the following sensing task: collect, process, and transmit all of the data within his sensing region z_n[t] to the SP. This generates a sensing value v_n[t] equal to the sum of the weights of all grids within z_n[t]:

v_n[t] ≜ x_n[t] · Σ_{i∈I} z_{n,i}[t] · w_i[t].   (1)

3) Sensing Cost: When performing sensing tasks, users incur extra operational cost (called sensing cost) due to, for example, energy expenditure and transmission expense. Let c_n[t] denote the total sensing cost of user n in slot t (including all potential expenses for collecting, processing, and transmitting the data within z_n[t] to the SP). Obviously, such a sensing cost is user- and time-dependent.

Due to this direct sensing cost, users may be reluctant to perform sensing tasks without sufficient incentives. To avoid this, in each time slot, the SP will pay certain monetary or non-monetary compensation (which we call the short-term sensing incentive) to those users who are selected as sensors. Later we will show that this type of incentive can be easily addressed through, for example, first-degree price discrimination [28] or a truthful auction [29] in each time slot.

4) Participatory Constraint: As discussed in Section I, users may suffer a certain indirect cost even when not performing sensing tasks, induced by, for example, reporting location/mobility information or running sensing apps. Thus, if a user is rarely selected as a sensor (hence hardly receives the short-term sensing incentive), he may gradually lose interest in continuous participation, and decide to no longer participate in the system (in this case, we say the user drops).

As shown in [24] and [25], such a long-term participation incentive strongly depends on the user's Return on Investment (ROI). In this work, instead of directly estimating the total return and total investment, we use a simple yet representative indicator to reflect the user ROI: the Allocation Probability,⁴ i.e., the probability of each user being selected as a sensor. Namely, we consider a scenario where each user n will drop out of the sensing system if his allocation probability (of being selected as a sensor) is smaller than a specific threshold D_n, called the dropping threshold of user n. Therefore, to ensure the active participation of users, the allocation probability of each user should be no smaller than his dropping threshold, which we call the user participatory constraint:

D_n ≤ d_n(x_n) ≜ (1/T) Σ_{t∈T} x_n[t],   ∀n ∈ N,   (2)

where d_n(x_n) is the time-average allocation probability of user n, depending on the allocations of user n in all slots.

⁴ Consider, for example, a user with an expected direct sensing cost c1, an expected indirect sensing cost c2, and an expected return r when being selected as a sensor. Then, an allocation probability η directly corresponds to an expected ROI: r·η / (η·(c1 + c2) + (1 − η)·c2).
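As a concrete illustration of (1) and (2), the following Python sketch builds the per-user sensing values and checks the participatory constraint for a toy allocation history. It is only a sketch: the array shapes, the random toy data, and the variable names are our own assumptions, not part of the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, I, T = 3, 5, 8                        # toy sizes (assumed): users, grids, slots

w = rng.uniform(0.2, 1.0, size=(T, I))   # w_i[t]: data value of grid i in slot t
z = rng.integers(0, 2, size=(T, N, I))   # z_{n,i}[t]: grid i inside user n's sensing region
x = rng.integers(0, 2, size=(T, N))      # x_n[t]: user n selected as sensor in slot t
D = np.array([0.3, 0.5, 0.4])            # D_n: dropping thresholds

# Eq. (1): v_n[t] = x_n[t] * sum_i z_{n,i}[t] * w_i[t]
v = x * np.einsum('tni,ti->tn', z, w)

# Eq. (2): time-average allocation probability d_n(x_n) = (1/T) * sum_t x_n[t]
d = x.mean(axis=0)
print("per-user sensing values v_n[t]:\n", v)
print("allocation probabilities d_n:", d)
print("participatory constraint D_n <= d_n satisfied:", d >= D)
```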
B. Service Provider Modeling

Given the set N of mobile users participating in the system, the SP selects a subset of users as sensors in each time slot. We consider a non-commercial SP (e.g., a non-profit organization or a governmental department), whose primary goal is to maximize the total sensing value and minimize the total sensing cost over the entire time period, subject to the user participatory constraint in (2).

Given the allocation vector x[t] ≜ {x_n[t], ∀n ∈ N} in slot t, the total sensing cost (in slot t) can be directly defined as the sum of all selected users' sensing costs, i.e.,

C[t] ≜ Σ_{n∈N} x_n[t] · c_n[t].   (3)

The total sensing value (in slot t), however, may not be the same as the aggregate sensing value of all selected users, due to the overlap of their sensing regions. The key reason is that the same data collected by multiple users simultaneously can only generate value once. For convenience, let y_i[t] denote whether a grid A_i is sensed by at least one mobile user, that is,

y_i[t] ≜ ⌈ Σ_{n∈N} x_n[t] · z_{n,i}[t] ⌉^1,   (4)

where ⌈x⌉^1 = 1 if x ≥ 1, and ⌈x⌉^1 = x if x < 1. Then, the total sensing value (in slot t) can be defined as follows:

V[t] ≜ Σ_{i∈I} y_i[t] · w_i[t].   (5)

Obviously, if the sensing regions of all selected users do not overlap with each other, then y_i[t] = Σ_{n∈N} x_n[t] · z_{n,i}[t], and V[t] = Σ_{n∈N} v_n[t] · x_n[t], i.e., the total sensing value is directly the sum of all selected users' sensing values.

The social welfare generated in each slot t is the difference between the total sensing value and the total sensing cost, i.e.,

S[t] ≜ V[t] − C[t].   (6)

The overall (average) social welfare over all time slots is

S(x) ≜ (1/T) Σ_{t∈T} S[t] = (1/T) Σ_{t∈T} (V[t] − C[t]),   (7)

where x ≜ {x_n[t], ∀n ∈ N, t ∈ T} ≜ {x[t], ∀t ∈ T}.
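The overlap-aware accounting in (3)–(6) is the part that is easy to get wrong, since a grid covered by several selected users must be counted only once. Below is a minimal sketch of one slot's bookkeeping under assumed toy data; for binary selections, the truncation ⌈·⌉^1 in (4) reduces to capping the per-grid coverage count at one.

```python
import numpy as np

def slot_welfare(x_t, z_t, w_t, c_t):
    """Compute V[t], C[t], S[t] of Eqs. (3)-(6) for one slot.

    x_t: (N,) binary selection vector; z_t: (N, I) sensing-region indicators;
    w_t: (I,) grid weights; c_t: (N,) sensing costs.
    """
    cost = float(x_t @ c_t)               # Eq. (3): total cost of selected users
    coverage = np.minimum(x_t @ z_t, 1)   # Eq. (4): each grid counts at most once
    value = float(coverage @ w_t)         # Eq. (5): value of the covered grids
    return value, cost, value - cost      # Eq. (6): per-slot social welfare

# Two users with overlapping regions: grid 1 is covered by both, but its weight counts once.
z_t = np.array([[1, 1, 0], [0, 1, 1]])
w_t = np.array([0.5, 1.0, 0.2])
c_t = np.array([0.3, 0.4])
V, C, S = slot_welfare(np.array([1, 1]), z_t, w_t, c_t)
print(f"V={V}, C={C}, S={S}")             # V = 0.5 + 1.0 + 0.2 = 1.7, C = 0.7, S = 1.0
```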
C. Information Scenario

We will study the sensor selection problem under different network information scenarios. The network information consists of the weight (data value) of each grid, and the sensing region and sensing cost of each user, in each time slot. Formally, we define the network information in time slot t as:

θ[t] ≜ {w_i[t], z_n[t], c_n[t], ∀i ∈ I, n ∈ N}.   (8)

Note that the sensing value v_n[t] of each user is not network information, as it is determined by w_i[t] and z_n[t].

Regarding the future network information, we consider the scenarios of complete future information, stochastic future information, and no future information, depending on whether and how much the SP knows about the future network information. Regarding the current network information (realization), we consider the scenarios of information symmetry and asymmetry, depending on whether the SP can observe the private information of users (e.g., the sensing cost).

III. OFF-LINE SENSOR SELECTION BENCHMARK

In this section, we study the sensor selection problem with complete future information and with stochastic future information (as benchmarks). Note that in these benchmark cases, we assume the scenario of information symmetry (regarding the current network information), where the SP is able to observe all of the network information realized in each time slot.

A. Complete Future Information

With complete future information, the SP is able to determine the sensor selections in all time slots jointly to maximize the overall social welfare. Thus, the SP's problem is

max_x  (1/T) Σ_{t∈T} (V[t] − C[t])
s.t.  (a) x_n[t] ∈ {0, 1},  ∀n ∈ N, ∀t ∈ T,
      (b) D_n ≤ d_n(x_n),  ∀n ∈ N.   (9)

Obviously, (9) is an off-line allocation problem, and the solution gives the explicit allocation of each user in each time slot in advance. Note that (9) is a binary integer program, and can be effectively solved by many classic methods, such as the branch-and-bound algorithm in [31].

It is easy to see that formulating and solving (9) requires the complete future information, which is obviously impractical. Hence, we will study another benchmark solution based on stochastic information in the next subsection.
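For intuition about the off-line benchmark, problem (9) can be solved exactly on a tiny instance by enumerating all binary allocations and keeping the best one that satisfies the participatory constraints. The sketch below does exactly that with assumed toy data; it is a brute-force illustration, not the branch-and-bound method referred to in [31].

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
N, I, T = 2, 4, 3                                     # deliberately tiny: 2^(N*T) = 64 candidates
w = rng.uniform(0, 1, size=(T, I))
z = rng.integers(0, 2, size=(T, N, I))
c = rng.uniform(0.1, 0.5, size=(T, N))
D = np.array([0.3, 0.6])                              # dropping thresholds D_n

def welfare(x):                                       # x: (T, N) binary allocation
    y = np.minimum(np.einsum('tn,tni->ti', x, z), 1)  # Eq. (4)
    return (y * w).sum() / T - (x * c).sum() / T      # objective of (9), i.e., Eq. (7)

best_x, best_s = None, -np.inf
for bits in itertools.product([0, 1], repeat=N * T):
    x = np.array(bits, dtype=float).reshape(T, N)
    if not np.all(x.mean(axis=0) >= D):               # constraint (b) of (9)
        continue
    s = welfare(x)
    if s > best_s:
        best_x, best_s = x, s

print("off-line benchmark welfare:", round(best_s, 3))
print(best_x)
```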
B. Stochastic Future Information

With stochastic information only, the SP cannot decide the explicit allocation of each user in each time slot in advance, due to the lack of complete future information. Hence, in this case, we will focus on the expected social welfare maximization based on the stochastic information.

Let x_n(θ) ∈ {0, 1} denote whether a user n is selected as a sensor under a particular information realization θ, x(θ) ≜ {x_n(θ), ∀n ∈ N} denote the allocation vector of all users under θ, and x_n ≜ {x_n(θ), ∀θ ∈ Θ} denote the allocation of user n under all possible θ. Then, the expected social welfare maximization problem can be defined as follows:⁵

max_x  ∫_{θ∈Θ} (V(θ) − C(θ)) · f(θ) dθ
s.t.  (a) x_n(θ) ∈ {0, 1},  ∀n ∈ N, ∀θ ∈ Θ,
      (b) D_n ≤ d_n(x_n),  ∀n ∈ N,   (10)

where
• C(θ) = Σ_{n∈N} x_n(θ) · c_n(θ) is the sensing cost under θ;
• V(θ) = Σ_{i∈I} y_i(θ) · w_i(θ) is the sensing value under θ;
• y_i(θ) = ⌈ Σ_{n∈N} x_n(θ) · z_{n,i}(θ) ⌉^1 indicates whether a grid A_i is sensed by at least one user under θ;
• d_n(x_n) = ∫_{θ∈Θ} x_n(θ) · f(θ) dθ is the average allocation probability of user n.

Similarly, (10) is an off-line problem, and the solution defines a contingency plan that specifies the allocation of each user under each possible information realization θ in advance. Note that (10) is an integer program with an infinite number of decision variables (as θ is continuous), which is non-convex and NP-hard. Nevertheless, by linear programming relaxation, we can easily transform (10) into a linear program and solve it by classic methods, e.g., KKT analysis.⁶

Next we analyze the gap between the maximum social welfare with complete information (denoted by S°) derived from (9) and the maximum expected social welfare with stochastic information (denoted by S*) derived from (10). Formally,

Lemma 1. If T → ∞, then S* → S°.

Lemma 1 indicates that as long as the total sensing period T is large enough, the social welfare loss induced by the loss of complete network information is negligible. Hence, both S° and S* will serve as the same benchmark for the on-line policies proposed in Sections IV and V.

It is notable that formulating and solving (10) still requires certain (stochastic) future information, which may not be available in practice. This motivates us to further study on-line policies that do not rely on any future information.

⁵ Θ is the feasible set of θ, i.e., the set of all possible network information realizations, and f(θ) is the probability distribution function (pdf) of θ.
⁶ We leave the details to [31], as the method is standard. Moreover, solving the stochastic optimization problem is not the main contribution of this work.

IV. ON-LINE SENSOR SELECTION POLICY

In this section, we study the sensor selection problem with no future information. We propose an on-line sensor selection policy based on the Lyapunov optimization framework [30], which relies only on the current network information and the past sensor selection history, while not on any future information. Meanwhile, it asymptotically converges to the benchmark with complete or stochastic future information proposed in Section III. Note that we also assume the scenario of information symmetry in this section, and will further study the scenario of information asymmetry in the next section.

A. Lyapunov Optimization Technique

Lyapunov optimization [30] is a widely used technique for solving stochastic optimization problems with time-average constraints, such as the social welfare maximization problem (9) in this work (with T → ∞), where the user participatory constraint (b) is the time-average constraint. Hence, we introduce the Lyapunov optimization technique to solve the sensor selection problem (9) with no future information.

1) Queue Definition: The key idea of the Lyapunov optimization technique is to use the stability of queues to ensure that the time-average constraints are satisfied. Following this idea, we first introduce a virtual queue q_n for each user n.
This virtual queue is used for buffering the virtual allocation requests of each user. Here, we use the prefix "virtual" to denote that the request is not actually initiated by the user; rather, it is used to reflect the requirement of the user participatory constraint. Namely, one virtual request represents that "to satisfy the user participatory constraint, the user should be selected as sensor in one additional time slot". Hence, the backlog of a virtual queue denotes the total number of virtual requests in the queue (which may not be an integer), i.e., the total number of additional time slots in which the user should be selected as a sensor (in order to meet his participatory constraint).

2) Queue Dynamics: With the above queue definition, the virtual requests of user n enter the queue with a constant arrival rate of D_n. Let x_n†[t] ∈ {0, 1} denote whether user n is selected as a sensor in time slot t (under a certain sensor selection policy), and d_n† ≜ d_n(x_n†) = (1/T) Σ_{t∈T} x_n†[t] denote the average allocation probability of user n. Intuitively, x_n†[t] = 1 implies that one virtual request of user n leaves the queue at slot t. Hence, the virtual requests of user n leave the queue with an average departure rate of d_n†.

Let q_n^t denote the queue backlog of user n in slot t, and q^t ≜ {q_n^t, ∀n ∈ N} denote the queue backlog vector of all users. For each user n, given the constant arrival of his virtual requests and the allocation x_n†[t] in each slot t (departure), we have the following dynamic equation for his virtual queue:

q_n^{t+1} = [q_n^t − x_n†[t]]^+ + D_n,   (11)

where [x]^+ = max(x, 0).

Next, we show how to connect the queue stability condition with the user participatory constraint in our problem. We say a virtual queue q_n is rate stable if

lim_{t→∞} q_n^t / t = 0  with probability 1.

By the queue stability theorem [30], a queue q_n is rate stable if and only if the arrival rate is no larger than the departure rate, i.e., D_n ≤ d_n†. This establishes the equivalence between the queue stability condition and the user participatory constraint. That is, to guarantee the user participatory constraint in our problem, we only need to ensure that the associated virtual queue is rate stable under the proposed policy.

3) Queue Stability: Now we study the queue stability using the Lyapunov drift. We first define the Lyapunov function:

J[t] ≜ (1/2) Σ_{n∈N} (q_n^t)².   (12)

The Lyapunov drift in each slot t is defined as the change of the Lyapunov function from one slot to the next, i.e.,

∆[t] ≜ J[t+1] − J[t].   (13)

By the Lyapunov drift theorem (Th. 4.1 in [30]), if a policy greedily minimizes the Lyapunov drift ∆[t] in each slot t, then all backlogs are consistently pushed towards a low level, which potentially maintains the stability of all queues (i.e., ensures the participatory constraints of all users).

4) Joint Queue Stability and Welfare Maximization: Next, we analyze the joint queue stability and objective optimization (i.e., expected social welfare maximization). By the Lyapunov optimization theorem (Th. 4.2 in [30]), to stabilize the queues while optimizing the objective, we can use an allocation policy that greedily minimizes the following drift-plus-penalty:

Π[t] ≜ ∆[t] − φ·(V[t] − C[t]),   (14)

where the (negative) social welfare, i.e., C[t] − V[t], is viewed as the penalty incurred in each slot t, and φ ≥ 0 is a non-negative control parameter chosen to achieve a desirable tradeoff between optimality and queue backlog.

We further notice that directly minimizing the drift-plus-penalty defined in (14) may be difficult (partly because ∆[t] is a quadratic function). Hence, we will focus on minimizing a specific upper bound of the drift-plus-penalty to achieve the joint stability and optimization.

Next, we give such an upper bound. Notice that

∆[t] ≤ (1/2) Σ_{n∈N} ( x_n†[t]² + D_n² + 2·q_n^t·(D_n − x_n†[t]) ) ≤ B + Σ_{n∈N} q_n^t·(D_n − x_n†[t]),   (15)

where B ≜ Σ_{n∈N} (1 + D_n²)/2 is a constant.⁷ Then, we have the following upper bound for the drift-plus-penalty in (14):

Π[t] ≤ B + Σ_{n∈N} q_n^t·(D_n − x_n†[t]) − φ·(V[t] − C[t]).   (16)

By the Lyapunov optimization theory, it is easy to show that minimizing the above upper bound of the drift-plus-penalty is equivalent to minimizing the drift-plus-penalty itself, in terms of queue stability and objective optimization.

Remark. Beyond following the standard Lyapunov optimization framework [30], our own contributions in this part are two-fold. First, we explicitly define the virtual queue, and analytically connect the user participatory constraint and the queue stability. This is the basis for applying Lyapunov optimization to our problem. Second, we propose an upper bound (16) for the drift-plus-penalty, which is problem-specific and does not have a generic form suitable for all problems. The later on-line policy is based on this upper bound.

⁷ The first inequality follows because ([q − x]^+ + D)² ≤ q² + x² + D² + 2q·(D − x). The second inequality follows because x_n†[t]² ≤ 1.
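The per-user bookkeeping behind (11) and (16) is very light: one backlog value per user, updated once per slot. The hedged sketch below (names and toy numbers are our own assumptions) shows the queue recursion and the per-slot drift-plus-penalty bound, and illustrates how the backlog of a never-selected user grows by D_n per slot, i.e., its queue is not rate stable.

```python
import numpy as np

def update_queues(q, x_dagger, D):
    """Eq. (11): q_n^{t+1} = [q_n^t - x_n†[t]]^+ + D_n."""
    return np.maximum(q - x_dagger, 0.0) + D

def drift_plus_penalty_bound(q, x_dagger, D, V_t, C_t, phi):
    """Right-hand side of Eq. (16), up to the constant B."""
    return float(np.dot(q, D - x_dagger) - phi * (V_t - C_t))

# Toy run: the first user (D_n = 0.4) is never selected, so his backlog grows by 0.4 per slot,
# signalling a violated participatory constraint; the second user's backlog stays bounded.
q = np.zeros(2)
D = np.array([0.4, 0.2])
for t in range(5):
    x_dagger = np.array([0.0, 1.0])          # assumed allocations in each slot
    q = update_queues(q, x_dagger, D)
print("queue backlogs after 5 slots:", q)    # first entry = 5 * 0.4 = 2.0
```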
B. On-line Allocation Policy

Based on the above theoretical analysis, we now design an on-line policy that aims at minimizing the drift-plus-penalty upper bound in (16) in each time slot. We present such a Lyapunov optimization based policy in Policy 1.

Policy 1: Lyapunov-based Policy (Information Symmetry)
    Initialization: q = q⁰.
    For each time slot t = 0, 1, ..., T:
        Allocation Rule:  x†[t] = argmax_{x[t]} ( V[t] − C[t] + Σ_{n∈N} (q_n^t / φ) · x_n[t] );
        Updating Rule:  q_n^{t+1} = [q_n^t − x_n†[t]]^+ + D_n,  ∀n ∈ N.

1) Algorithm Design: The proposed Policy 1 consists of an allocation rule and an updating rule in each time slot. The allocation rule determines the sensor selection (allocation) x†[t] in each slot t, based on the current network information θ[t] and the current queue backlogs q^t, aiming at minimizing the upper bound of the drift-plus-penalty in (16). The updating rule updates the queue backlogs based on the current allocation result x†[t] according to (11). It is easy to see that Policy 1 relies only on the current network information and the past sensor selection history (captured by the queue backlogs), while not on any complete or stochastic future information.

2) Optimality: Now we establish the optimality of Policy 1. Let S†[t] denote the social welfare generated in each slot t, and S* denote the maximum social welfare benchmark with the stochastic information (derived in Section III). Formally,

Theorem 1 (Optimality).
lim_{T→∞} (1/T) Σ_{t∈T} E(S†[t]) ≥ S* − B/φ.

The proof follows the standard Lyapunov optimization theory [30]. By Theorem 1, we can easily see that Policy 1 converges to the maximum social welfare benchmark asymptotically, with a controllable approximation error bound O(1/φ).

Intuitively, in Policy 1, each virtual queue can be viewed as a regulation factor for lowering (regulating) the sensing cost of that user, and hence increasing the selection probability of that user. By the updating rule in Policy 1, we can further obtain the following approximation for the queue backlog:⁸

q_n^t ≈ q_n^0 − Σ_{k=0}^{t−1} x_n†[k] + t·D_n.

This implies that the time-attenuated queue backlog q_n^t / t can be used to approximate the gap between the required allocation probability (i.e., D_n) and the actual allocation probability up to slot t (i.e., (1/t) Σ_{k=0}^{t−1} x_n†[k]). Notice that the queue backlog q_n^t is bounded, hence the above gap goes to zero as t → ∞.

⁸ This approximation is obtained by simply omitting the operation [·]^+.
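Policy 1 is straightforward to prototype: in each slot, search over selection vectors for the one maximizing V[t] − C[t] + Σ_n (q_n^t/φ)·x_n[t], then apply the queue update (11). The sketch below uses exhaustive enumeration over a handful of users with assumed toy data; a real deployment would replace the enumeration with a faster combinatorial solver, since the search space grows as 2^N.

```python
import itertools
import numpy as np

def policy1_slot(q, z_t, w_t, c_t, phi):
    """One slot of Policy 1: return x†[t] maximizing V[t] - C[t] + sum_n (q_n/phi) * x_n[t]."""
    N = len(c_t)
    best_x, best_obj = np.zeros(N), -np.inf
    for bits in itertools.product([0, 1], repeat=N):
        x_t = np.array(bits, dtype=float)
        V_t = float(np.minimum(x_t @ z_t, 1) @ w_t)   # overlap-aware value, Eq. (5)
        C_t = float(x_t @ c_t)                        # Eq. (3)
        obj = V_t - C_t + float((q / phi) @ x_t)      # allocation rule of Policy 1
        if obj > best_obj:
            best_x, best_obj = x_t, obj
    return best_x

# Toy horizon: run the allocation + updating rules of Policy 1 for a few slots.
rng = np.random.default_rng(2)
N, I, T, phi = 3, 6, 50, 10.0
D = np.array([0.3, 0.3, 0.5])
q = np.zeros(N)
alloc = np.zeros(N)
for t in range(T):
    z_t = rng.integers(0, 2, size=(N, I))
    w_t = rng.uniform(0, 1, size=I)
    c_t = rng.uniform(0.2, 1.0, size=N)
    x_t = policy1_slot(q, z_t, w_t, c_t, phi)
    q = np.maximum(q - x_t, 0.0) + D                  # updating rule, Eq. (11)
    alloc += x_t
print("empirical allocation probabilities:", alloc / T, "dropping thresholds:", D)
```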
V. AUCTION-BASED ON-LINE SENSOR SELECTION POLICY

In this section, we consider the asymmetric information scenario (regarding the current information), where the sensing cost of each user n realized in each time slot t (i.e., c_n[t]) is his private information and cannot be observed by the SP. Obviously, without this private sensing cost, the SP cannot implement the allocation rule in Policy 1.

A. Auction Mechanism Design

We design a (reverse) VCG auction to address the credible information disclosure of users in each time slot, where the SP is the auctioneer (buyer) and users are the bidders (sellers). A standard VCG auction usually consists of an allocation rule (winner determination) and a payment rule. Our proposed auction mechanism involves a set of regulation factors (which are introduced for ensuring the user participatory constraint), and hence includes an additional updating rule for the regulation factors. We present the detailed auction mechanism in Policy 2, and explain these rules in detail below.

For convenience, we denote c'_n[t] as each user n's bid (report) regarding his sensing cost in each slot t, and µ_n^t as the regulation factor (similar to the virtual queue in Section IV) associated with each user n in each slot t.

Policy 2: Auction-based Policy (Information Asymmetry)
    Initialization: µ = µ⁰.
    For each time slot t = 0, 1, ..., T:
        Denote c'_n[t] as the bid of each user n;
        Allocation Rule:  x‡[t] = argmax_{x[t]} ( V[t] − Σ_{n∈N} x_n[t]·(c'_n[t] − µ_n^t) );
        Payment Rule:  p_n[t] = x‡_n[t] · ( V‡[t] − C^‡_{−n}[t] − S̃^♯_{−n}[t] + µ_n^t );
        Updating Rule:  µ_n^{t+1} = (1/φ) · ( [φ·µ_n^t − x‡_n[t]]^+ + D_n ),  ∀n ∈ N.

1) Allocation Rule: The allocation rule aims at maximizing a regulated social welfare in each time slot:

S̃[t] ≜ V[t] − Σ_{n∈N} x_n[t] · c̃_n[t],

where c̃_n[t] ≜ c'_n[t] − µ_n^t is the regulated sensing cost of user n, depending on both the user's bid and the regulation factor of user n. For convenience, we denote x‡[t] ≜ {x‡_n[t], ∀n ∈ N} as the allocation result in slot t (i.e., the one that maximizes S̃[t]).

2) Payment Rule: The payment to user n in each time slot t is: (i) p_n[t] = 0 if user n is not selected, i.e., x‡_n[t] = 0; or (ii) if user n is selected, i.e., x‡_n[t] = 1, then

p_n[t] = V‡[t] − C^‡_{−n}[t] − S̃^♯_{−n}[t] + µ_n^t,   (17)

where V‡[t] is the total sensing value under x‡[t], C^‡_{−n}[t] = Σ_{k≠n} x‡_k[t] · c̃_k[t] is the total (regulated) sensing cost under x‡[t] except that of user n, and S̃^♯_{−n}[t] is the maximum achievable regulated social welfare when excluding user n from the system. The first three terms correspond to the payment in a standard VCG auction. The last term is used to compensate for the user cost regulation.

3) Updating Rule: Inspired by Policy 1, we have:

µ_n^{t+1} = (1/φ) · ( [φ·µ_n^t − x‡_n[t]]^+ + D_n ),   ∀n ∈ N.

The above updating rule is exactly the same as that in Policy 1, simply viewing φ·µ_n^t as q_n^t. Obviously, if users are truthful, then the above allocation/updating rules achieve exactly the same allocation and performance as in Policy 1.
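A compact way to see how Policy 2's rules fit together is to implement them by brute force for a few users: the allocation rule maximizes the regulated welfare S̃[t], and the payment (17) charges each winner its VCG-style externality plus the regulation compensation µ_n^t. The following sketch is illustrative only; the data and function names are assumptions, and both maximizations reuse simple enumeration (which is only feasible for very small N).

```python
import itertools
import numpy as np

def max_regulated_welfare(z_t, w_t, reg_cost, exclude=None):
    """Maximize S~[t] = V[t] - sum_n x_n[t] * (c'_n[t] - mu_n^t) by enumeration.
    reg_cost holds the regulated costs c~_n[t]; 'exclude' removes one user from the system."""
    N = len(reg_cost)
    users = [n for n in range(N) if n != exclude]
    best_x, best_s = np.zeros(N), -np.inf
    for bits in itertools.product([0, 1], repeat=len(users)):
        x_t = np.zeros(N)
        x_t[users] = bits
        s = float(np.minimum(x_t @ z_t, 1) @ w_t) - float(x_t @ reg_cost)
        if s > best_s:
            best_x, best_s = x_t, s
    return best_x, best_s

def policy2_slot(bids, mu, z_t, w_t):
    """One slot of Policy 2: allocation x‡[t] and VCG-style payments p_n[t] of Eq. (17)."""
    reg_cost = bids - mu                               # regulated cost c~_n[t] = c'_n[t] - mu_n^t
    x_t, _ = max_regulated_welfare(z_t, w_t, reg_cost)
    V_t = float(np.minimum(x_t @ z_t, 1) @ w_t)
    p = np.zeros(len(bids))
    for n in np.flatnonzero(x_t):                      # losers keep p_n[t] = 0
        C_minus_n = float(x_t @ reg_cost) - reg_cost[n]          # regulated cost of other winners
        _, S_minus_n = max_regulated_welfare(z_t, w_t, reg_cost, exclude=n)
        p[n] = V_t - C_minus_n - S_minus_n + mu[n]     # Eq. (17)
    return x_t, p

rng = np.random.default_rng(3)
z_t = rng.integers(0, 2, size=(3, 6))
w_t = rng.uniform(0, 1, size=6)
x_t, p = policy2_slot(bids=np.array([0.4, 0.6, 0.3]), mu=np.zeros(3), z_t=z_t, w_t=w_t)
print("winners:", x_t, "payments:", p)
```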
B. Truthfulness and Optimality

Theorem 2 (Truthfulness). The auction in Policy 2 is truthful.

Proof: Due to the space limit, we only show that each user n has no incentive to report (bid) a cost higher than his true cost.⁹ There are four possible outcomes:

(a) {loss, loss}: user n loses when bidding both truthfully and non-truthfully. He receives a zero payment under both strategies.

(b) {win, loss}: user n wins (loses) when bidding truthfully (non-truthfully). He receives a smaller payment (i.e., zero) when bidding non-truthfully.

(c) {loss, win}: user n loses (wins) when bidding truthfully (non-truthfully). This is practically impossible, as a user losing with a lower cost will never win when submitting a higher cost.

(d) {win, win}: user n wins when bidding both truthfully and non-truthfully. We will show that user n receives the same payment under both strategies. First, the third term and the last term in (17) are obviously identical under both strategies. Second, the first two terms in (17) are also identical due to the following assertion: if an allocation vector x* maximizes the social welfare, then, excluding any user n and removing the grids sensed by user n (under x*), the remaining vector x*_{−n} maximizes the social welfare in the remaining system.

Theorem 3 (Optimality). The auction in Policy 2 achieves the same asymptotically optimal social welfare as Policy 1.

Proof: By the truthfulness given in Theorem 2, together with the observation that the allocation and updating rules in Policy 2 are exactly the same as those in Policy 1, we can prove the optimality immediately.

Remark. The above Policy 2 is truthful only when users are myopic, in the sense that they only care about the current benefits in each time slot, while not anticipating the potential impact of their bidding strategies on future benefits. As a counter-example, a non-myopic user may report a large fake cost. By doing so, the SP will assign a large regulation factor to the user (in order to satisfy the user's participatory constraint), which potentially increases the user's future payments. We will study the model with non-myopic users in our future work.

⁹ The proof for "users are not willing to report costs lower than the true values" is similar. Please refer to [31] for details.

VI. SIMULATIONS

In our simulations, we launch a participatory sensing application in a middle-scale virtual city with size 10 km × 10 km. The whole area is divided into 2500 grids, each corresponding to a square of 200 m × 200 m. Users move according to the random walk model: in each time slot, each user jumps from the original location (grid) to another location (grid) randomly according to a certain probability distribution. For illustrative purposes, we assume that the sensing region of each user in each time slot is a disk, centered at his location, with a radius randomly picked from [400 m, 800 m].¹⁰ We run the system over a period of 10,000 time slots, which is long enough to obtain stable outcomes under our adopted policies.

¹⁰ We will consider more practical mobility models and sensing region scenarios based on real data traces in our future work.
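The simulation setup above reduces to a small amount of geometry: users random-walk over a 50 × 50 grid of 200 m cells, and z_{n,i}[t] marks the cells whose centers fall within the user's randomly drawn sensing radius. A sketch of that setup follows; the grid dimensions and radius range come from the text, while the specific jump distribution and all names are our own assumptions.

```python
import numpy as np

GRID, CELL = 50, 200.0                     # 50 x 50 grid of 200 m x 200 m cells (10 km x 10 km)
rng = np.random.default_rng(4)

# Cell-center coordinates (in meters), flattened to I = 2500 grids.
centers = (np.stack(np.meshgrid(np.arange(GRID), np.arange(GRID)), -1).reshape(-1, 2) + 0.5) * CELL

def step_positions(pos):
    """Random-walk step: each user jumps to a neighbouring cell (assumed +/-1 cell per axis)."""
    pos = pos + rng.integers(-1, 2, size=pos.shape)
    return np.clip(pos, 0, GRID - 1)

def sensing_indicators(pos):
    """z_{n,i}[t]: cells whose centers lie within a disk of radius ~U[400, 800] m around each user."""
    user_xy = (pos + 0.5) * CELL
    radius = rng.uniform(400.0, 800.0, size=len(pos))
    dist = np.linalg.norm(centers[None, :, :] - user_xy[:, None, :], axis=-1)
    return (dist <= radius[:, None]).astype(int)   # shape (N, I)

N = 5
pos = rng.integers(0, GRID, size=(N, 2))
for t in range(3):
    pos = step_positions(pos)
    z_t = sensing_indicators(pos)
    print(f"slot {t}: cells covered per user =", z_t.sum(axis=1))
```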
A. Simulation Scenarios

We consider two different simulation scenarios (a) and (b), depending on the different data value distributions in different areas, as shown in Fig. 2. In scenario (a), there is no hotspot, and all grids are of similar importance. Hence, the data value in different areas follows an i.i.d. distribution. In scenario (b), there is one hotspot, and the grids near the centre of the hotspot are more important than those far from the hotspot, and hence have larger data values. Note that any scenario with multiple hotspots can be viewed as an intermediate case between (a) and (b). For fair comparison, we set the average data value over the whole area to 0.5 for both scenarios.

Fig. 2. Illustration of Two Scenarios: (a) no hotspot and (b) one hotspot. (The figure plots the data value over the sensing area in each scenario.)

B. Performance Comparisons

Now we compare the performance of our proposed policy with the RADP-VPC policy proposed in [24] and [25], a well-known policy that considers the participation incentive.¹¹ To draw a more convincing conclusion, we also compare our policy with those not considering the participation incentive, e.g., random selection and greedy selection (both widely used in practical applications such as Waze [10]).¹²

1) Dropping Probability: We first compare the user dropping probability under different policies. In this simulation, we set the dropping threshold to 0.5 for all users. Namely, if the allocation probability of a user is less than 0.5, the user will drop out of the system.¹³ Fig. 3 illustrates the dynamics of user allocation probabilities as well as the dropping of users.

Fig. 3. Allocation Probability Dynamics and User Dropping in Scenario (a) (the first row) and Scenario (b) (the second row). The dropping of a user is illustrated by the sudden decrease of his allocation probability. The percentage of dropping users is denoted by the blue shaded area.

We can see that, in scenario (a) (the first row), more than 70% of users drop under the greedy or random sensor selection policies, and around 25% of users drop under the RADP-VPC policy (α = 1); in scenario (b) (the second row), more than 90% of users drop under the greedy or random sensor selection policies, and more than 50% of users drop under the RADP-VPC policy (α = 1). Our proposed policy, however, retains all users in both scenarios.

We can further see that under the same policy (except our proposed one), more users drop in scenario (b) than in scenario (a). The reason is as follows. In scenario (b) with one hotspot, most of the data value is concentrated in the hotspot area, and hence a large total sensing value can potentially be collected by a small number of users (located in the hotspot area). In scenario (a) with no hotspot, however, the data value is uniformly distributed over all areas, and hence a large total sensing value can be collected only by a large enough number of users (distributed over the whole area). Hence, to achieve the same level of sensing value, the number of sensors needed in scenario (b), on average, is smaller than that needed in scenario (a). Accordingly, the user allocation probability is lower, and hence more users drop, in scenario (b).

¹¹ In the RADP-VPC policy, each user n's cost is regulated by a virtual credit v_n, and the virtual credit v_n is updated in the following way: (i) v_n = v_n + α if user n is not selected as a sensor in the previous slot, and (ii) v_n = 0 if user n is selected as a sensor in the previous slot, where α > 0 is a controllable parameter. Intuitively, a larger α can better satisfy the user participatory constraint, but may reduce the generated social welfare.
¹² In the greedy (random) policy, users are selected one by one in descending (random) order of their generated social welfares.
¹³ To reduce the "start effect" where a user may mistakenly drop in the first few slots (due to the low allocation probability in these slots), we assume that all users will be selected as sensors in the first 40 time slots.
2) Social Welfare: We then compare the average social welfare under different policies in Fig. 4. Curve (1) is the maximum social welfare with no participatory constraint, and serves as an upper bound of the maximum achievable social welfare with the participatory constraint. Curve (2) is the maximum social welfare benchmark (with the participatory constraint) with complete or stochastic future information derived in Section III. The gap between curves (1) and (2) is called the incentive cost, which is used to guarantee the user long-term participation incentive. In our simulations, the incentive cost is approximately 6% in scenario (a) and 8% in scenario (b). Namely, the incentive cost is higher in scenario (b) than in (a), due to the higher dropping probability.

Fig. 4. Average Social Welfare under Different Policies in Scenario (a) (left) and Scenario (b) (right). Policy: (3a)–(3c) Lyapunov-based policy (φ = {20, 10, 5}) proposed in this work; (4a)–(4c) RADP-VPC policy (α = {1, 0.5, 0.2}) proposed in [24] and [25]; (5) Random policy; (6) Greedy policy.

Curves (3a)–(3c) denote the social welfares achieved by our proposed Lyapunov-based Policy 1 or 2 (with φ = 20, 10, and 5, respectively). Our policy converges to the optimal benchmark asymptotically, with very small approximation errors, e.g., {1%, 2%, 3%} in scenario (a) and {1.5%, 3%, 4.5%} in scenario (b). Note that the approximation error bound is controllable, via choosing different values of φ. We can further see that the benchmark (i.e., the maximum social welfare) is higher in scenario (b), as the same amount of sensing value can potentially be collected by fewer users in scenario (b) than in scenario (a). Accordingly, our proposed policy can achieve a higher social welfare in scenario (b).

Curves (4a)–(4c) denote the social welfares achieved by the RADP-VPC policy (with α = 1, 0.5, and 0.2, respectively) proposed in [24] and [25]. Obviously, the performance of RADP-VPC largely depends on the choice of the parameter α. In scenario (a), the social welfare gap between the RADP-VPC policy and our policy ranges from 15% (when α = 1) to 50% (when α = 0.2). In scenario (b), this gap increases to 40% and 75%, respectively. In fact, different from our policy or the benchmark, the RADP-VPC policy achieves a worse performance in scenario (b), due to the higher dropping probability in scenario (b). This illustrates the importance of considering the long-term participatory incentive in a sensing system.
We policy,thesetwopoliciesbothachievesaworseperformancein canfurtherseethatsuchawelfaredegradation(inducedbythe scenario(b)thanin(a),duetothehigherdroppingprobability incentive cost) is more severe with a larger number of users, in scenario (b). Counter-intuitively, the greedy policy achieves as the total incentive cost increases with the number of users. a worse performance than the random policy, due to the higher user dropping probability in the greedy policy. This VII. CONCLUSION also illustrates the importance of considering the long-term participatory incentive in a sensing system. Inthiswork,westudiedtheoptimalsensorselectionprob- lem in a general time-dependent and location-aware participa- C. Impact of Participatory Constraint tory sensing system with the user long-term participatory con- straint. We proposed Lyapunov based on-line sensor selection Sofar,wehaveshowninFig.4thatourpolicyconvergesto (auction)policies,whichdonotrelyonfutureinformationand the maximum social welfare benchmark asymptotically. Next, achieve the optimal off-line benchmark performance asymp- we show in Figs. 5-7 that how the participatory constraint totically. There are several possible extensions in the future affects this benchmark. We provide the results in scenario (a) work. An interesting one is to study the truthful mechanism only, as those in scenarios (b)-(d) are similar. when users are not myopic and can somehow anticipate the Fig.5illustratestheuserallocationprobabilityvsthenum- impact of their activities on the future time slots. berofparticipatingusers,underdifferentsensingcosts(where C/W denotes the average ratio of unit sensing cost and unit REFERENCES data value). Obviously, the allocation probability decreases with both the sensing cost and the number of users (due to [1] N.D.Lane,E.Miluzzo,H.Lu,D.Peebles,etal.,“ASurveyofMobile the partial conflict of their sensing activities). Note that in this PhoneSensing,”IEEECommunicationsMagazine,48(9),2010. result,thereisnoparticipatoryconstraint.Namely,usersnever [2] J. A. Burke, D. Estrin, M. Hansen, A. Parker, et al., “Participatory sensing,”inACMWorkshopWSW,2006. drop, and in each time slot they will be selected based on the realized costs. This result is useful for explaining the different [3] R. K. Ganti, F. Ye, and H. Lei, “Mobile Crowdsensing: Current State andFutureChallenges,”IEEECommunicationsMagazine,49(11),2011. impacts of participatory constraint discussed later. [4] WeatherLah,http://www.weatherlah.com/. Fig. 6 illustrates the maximum social welfare (benchmark) [5] OpenSenseProject,http://www.nano-tera.ch/projects/401.php/. vs the number of participating users N, under different drop- [6] IntelUrbanAtmosphere,http://www.urban-atmospheres.net/. ping thresholds. We can see that when the dropping threshold [7] OpenSignal,http://opensignal.com/. issmall(e.g.,D ≤0.35),themaximumsocialwelfarealways n [8] NoiseTube,http://www.noisetube.net/. increases with the number of users, and the increase rate [9] Sensorly,http://www.sensorly.com/. becomes larger with a smaller dropping threshold. When the [10] Waze:FreeGPSNavigationwithTurnbyTurn,https://www.waze.com/. dropping threshold is large (e.g., D ≥ 0.4), the maximum n [11] MobileMillennium,http://traffic.berkeley.edu/. social welfare first increases with the number of users, and [12] B.Hull,V.Bychkovsky,Y.Zhang,etal.,“CarTel:ADistributedMobile thendecreaseswiththenumberofusers.Thisimpliesthatina SensorComputingSystem,”inProf.ACMSenSys,2006. 
Fig. 7 illustrates the maximum social welfare (benchmark) vs the dropping threshold D_n, with different numbers of users. Each dashed line denotes the maximum social welfare without the participatory constraint (i.e., the upper bound in Fig. 4). We can see that the social welfare decreases with the dropping threshold, as a higher dropping threshold implies that more incentive cost is needed to retain the users in the system. We can further see that such welfare degradation (induced by the incentive cost) is more severe with a larger number of users, as the total incentive cost increases with the number of users.

VII. CONCLUSION

In this work, we studied the optimal sensor selection problem in a general time-dependent and location-aware participatory sensing system with the user long-term participatory constraint. We proposed Lyapunov-based on-line sensor selection (auction) policies, which do not rely on future information and achieve the optimal off-line benchmark performance asymptotically. There are several possible extensions for future work. An interesting one is to study the truthful mechanism when users are not myopic and can somehow anticipate the impact of their activities on future time slots.

REFERENCES

[1] N. D. Lane, E. Miluzzo, H. Lu, D. Peebles, et al., "A Survey of Mobile Phone Sensing," IEEE Communications Magazine, 48(9), 2010.
[2] J. A. Burke, D. Estrin, M. Hansen, A. Parker, et al., "Participatory Sensing," in ACM Workshop WSW, 2006.
[3] R. K. Ganti, F. Ye, and H. Lei, "Mobile Crowdsensing: Current State and Future Challenges," IEEE Communications Magazine, 49(11), 2011.
[4] WeatherLah, http://www.weatherlah.com/.
[5] OpenSense Project, http://www.nano-tera.ch/projects/401.php/.
[6] Intel Urban Atmospheres, http://www.urban-atmospheres.net/.
[7] OpenSignal, http://opensignal.com/.
[8] NoiseTube, http://www.noisetube.net/.
[9] Sensorly, http://www.sensorly.com/.
[10] Waze: Free GPS Navigation with Turn by Turn, https://www.waze.com/.
[11] Mobile Millennium, http://traffic.berkeley.edu/.
[12] B. Hull, V. Bychkovsky, Y. Zhang, et al., "CarTel: A Distributed Mobile Sensor Computing System," in Proc. ACM SenSys, 2006.
[13] SpotSwitch, http://spotswitch.com/.
[14] S. Mathur, T. Jin, N. Kasturirangan, et al., "ParkNet: Drive-by Sensing of Road-side Parking Statistics," in Proc. ACM MobiSys, 2010.
[15] E. De Cristofaro and C. Soriente, "Participatory Privacy: Enabling Privacy in Participatory Sensing," IEEE Network Magazine, 27(1), 2013.
[16] Q. Zhao, Y. Zhu, et al., "Fair Energy-efficient Sensing Task Allocation in Participatory Sensing with Smartphones," in Proc. IEEE INFOCOM, 2014.
[17] D. Yang, G. Xue, X. Fang, and J. Tang, "Crowdsourcing to Smartphones: Incentive Mechanism Design for Mobile Phone Sensing," in Proc. ACM MOBICOM, 2012.
[18] Z. Feng, Y. Zhu, Q. Zhang, L. M. Ni, and A. V. Vasilakos, "TRAC: Truthful Auction for Location-Aware Collaborative Sensing in Mobile Crowdsourcing," in Proc. IEEE INFOCOM, 2014.
[19] T. Liu and Y. Zhu, "Social Welfare Maximization in Participatory Smartphone Sensing," in Wireless Algorithms, Systems, and Applications, Springer Berlin Heidelberg, 2013.
[20] L. G. Jaimes, I. Vergara-Laurens, and M. A. Labrador, "A Location-Based Incentive Mechanism for Participatory Sensing Systems with Budget Constraints," in Proc. IEEE PerCom, 2012.
[21] I. Koutsopoulos, "Optimal Incentive-Driven Design of Participatory Sensing Systems," in Proc. IEEE INFOCOM, 2013.
[22] T. Luo, H.-P. Tan, and L. Xia, "Profit-Maximizing Incentive for Participatory Sensing," in Proc. IEEE INFOCOM, 2014.
[23] L. Duan, T. Kubo, K. Sugiyama, J. Huang, et al., "Incentive Mechanisms for Smartphone Collaboration in Data Acquisition and Distributed Computing," in Proc. IEEE INFOCOM, 2012.
[24] J. S. Lee and B. Hoh, "Sell Your Experiences: A Market Mechanism Based Incentive for Participatory Sensing," in Proc. IEEE PerCom, 2010.
[25] J. S. Lee and B. Hoh, "Dynamic Pricing Incentive for Participatory Sensing," Pervasive and Mobile Computing, 6(6), 2010.
[26] X. Zhang, Z. Yang, Z. Zhou, et al., "Free Market of Crowdsourcing: Incentive Mechanism Design for Mobile Sensing," IEEE Transactions on Parallel and Distributed Systems, 2014.
[27] D. Zhao, X. Li, and H. Ma, "How to Crowdsource Tasks Truthfully without Sacrificing Utility: Online Incentive Mechanisms with Budget Constraint," in Proc. IEEE INFOCOM, 2014.
[28] R. Phillips, Pricing and Revenue Optimization, Stanford Univ. Press, 2005.
[29] V. Krishna, Auction Theory, Academic Press, 2009.
[30] M. J. Neely, Stochastic Network Optimization with Application to Communication and Queueing Systems, Morgan & Claypool, 2010.
[31] "Providing Long-Term Participation Incentive in Participatory Sensing," Tech. Report, https://www.dropbox.com/s/v3d2ktcjr1g7886/ps-main.pdf
