
Introduction to linear logic and ludics, Part I

Pierre-Louis Curien (CNRS & Université Paris VII)

February 1, 2008

arXiv:cs/0501035v1 [cs.LO] 18 Jan 2005

Abstract

In this two-part survey paper, we introduce linear logic and ludics, which were both introduced by Girard, in 1986 and 2001, respectively. They offer a thorough revisitation of mathematical logic from first principles, with inspiration from and applications to computer science. Some important keywords are: resources, geometry of computation, polarities, interaction, and games.

Prerequisites

We assume some basic knowledge of sequent calculus and natural deduction (see [15, 18]), of λ-calculus (the classical reference is [5], see also [20]), and of category theory (only in section 7, see [25]).

Warning

This paper is not trying to survey the whole and impressive body of work on linear logic of the last 17 years, but rather chooses a route that tries to highlight the deep originality of the subject. The bibliography is in consequence very partial (also reading the latter as a French word, which means something like "personal", possibly biased).

1 Introduction

Linear logic arose from the semantic study of λ-calculus, specifically, the second-order typed λ-calculus, or system F (see [15]). Shortly after proposing the first denotational model of system F [13], Girard realized that in coherence spaces (to be introduced in section 6) the interpretation of type A → B decomposes naturally as (!A) ⊸ B (this will be detailed in section 6), and in almost no time a full-fledged new logic, with a totally innovative way of representing proofs and of executing them, that is, of performing cut-elimination, was created. We tell this story here, not in the order in which it unfolded, but starting from so-called substructural considerations. We raise the question of what happens when we remove some of the structural rules from sequent calculus, most importantly contraction.
Substructural logics of various kinds have been considered much before the birth of linear logic, but what truly characterizes linear logic is not that aspect, but rather the controlled way in which the structural rules of weakening and contraction are reintroduced, by means of the new modality ! ("of course") and its dual ? ("why not"). This, together with proof nets, constitutes the major breakthrough of linear logic.

Let us consider the implication in the absence of contraction. In λ-calculus, this corresponds to considering terms in which each variable occurs at most once. (Here we use the Curry-Howard correspondence between proofs and programs, see [15].) Consider thus the set of affine λ-terms built with the following constraints: x is an affine term; if M is an affine term then λx.M is an affine term; and if M, N are affine terms with disjoint sets of free variables then MN is an affine term. The key fact about affine terms is the following:

    Affine terms normalize in linear time.

The reason is trivial: each β-reduction reduces the size. Keeping in mind that P[x ← Q] is the result of the substitution of at most one occurrence of x with Q, the term P[x ← Q] is shorter than (λx.P)Q by at least the two symbols of abstraction and application that have been absorbed by the reduction. Of course, this linearity is one of the reasons for the name Linear Logic.

In contrast, arbitrary λ-terms may take an exponential time, and more, to reach their normal form. For example, the reader may play with m n, where n = λfx.f^n(x) (Church numerals; here f^0(x) = x and f^{n+1}(x) = f(f^n(x))).

As another instance of the phenomenon of duplication, or of multiple use of a variable (or assumption), think of a constructive reading of ∀x ∃y x < y, from which one may wish to extract a program creating an infinite strictly increasing sequence of numbers. The program extracted from the formula will actually be a procedure to produce a y > x when given an x, and it will be called in a loop. In linear logic, this formula would have to be written !(∀x ∃y x < y).
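As a quick sanity check of the affineness constraint just stated, here is a minimal sketch (the encoding of terms is mine, not the paper's): variables are strings, ("lam", x, M) stands for λx.M, and ("app", M, N) for the application MN.

```python
def free_vars(t):
    """Set of free variables of a term."""
    if isinstance(t, str):
        return {t}
    if t[0] == "lam":
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def is_affine(t):
    """Each variable occurs at most once: in an application MN the free
    variables of M and N must be disjoint (and M, N affine themselves)."""
    if isinstance(t, str):
        return True
    if t[0] == "lam":
        return is_affine(t[2])
    return (is_affine(t[1]) and is_affine(t[2])
            and free_vars(t[1]).isdisjoint(free_vars(t[2])))

# λx.x and λx.λy.xy are affine; the duplicating term λx.xx is not.
assert is_affine(("lam", "x", "x"))
assert is_affine(("lam", "x", ("lam", "y", ("app", "x", "y"))))
assert not is_affine(("lam", "x", ("app", "x", "x")))
```

The rejected term λx.xx is precisely the kind of self-application that feeds the exponential blow-up of Church-numeral computations mentioned above.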
Rather than continuing a long introduction, we decide to enter the subject proper, and we content ourselves with giving a plan of the sections to come. In section 2, we introduce so-called multiplicative-additive linear logic, and we complete the description of (propositional) linear logic with the exponentials in section 3. We shall not consider quantifiers in this survey paper; in any case, they look very much as usual. Then, in section 4, we illustrate the expressivity of linear logic by sketching the proof that propositional linear logic is undecidable, a property that sharply distinguishes linear logic from classical and intuitionistic logics. The following three sections are concerned with the semantics of linear logic. We first discuss phase semantics, which is just a semantics of provability (section 5), then we introduce the coherence semantics, already alluded to (section 6), and finally we address the question of more general categorical definitions of models (section 7). In part II, we shall go back to syntactic issues and introduce proof nets. In particular, we shall address the so-called correctness criteria: proof nets are elegant representations of proofs avoiding some useless sequentializations imposed by sequent calculus. Reconstructing the sequentializations, and thus asserting that a proof structure, that is, a would-be proof net, is actually a proof net, is a fascinating problem that relates to games semantics [22, 1]. The last sections (a polished form of [8]) are devoted to ludics, which, while strongly inspired by linear logic, takes a more radical view of proofs as interactive processes, evolving in space and time.

Here is a table of sources I have been using for writing this paper. The reader will be able to find more complete information and proofs by going to these references, as well as to the many others that he will find in their bibliographies or on the Web!
    sections 2 and 3    [14, 16, 9] (slightly revisited here in the light of ludics)
    section 4           [24] (a remarkably well-written paper)
    sections 5 and 6    [14]
    section 7           [26] (which itself is a survey)
    proof nets          [14, 12, 19, 4]
    ludics              [17]

2 Multiplicatives and additives

In sequent calculus, depending on presentations, you can see the left and right rules for, say, conjunction given in this format:

    Γ, A ⊢ ∆        Γ, B ⊢ ∆        Γ ⊢ A, ∆    Γ ⊢ B, ∆
    ----------      ----------      --------------------
    Γ, A∧B ⊢ ∆      Γ, A∧B ⊢ ∆      Γ ⊢ A∧B, ∆

or in that format:

    Γ, A, B ⊢ ∆       Γ1 ⊢ A, ∆1    Γ2 ⊢ B, ∆2
    ------------      --------------------------
    Γ, A∧B ⊢ ∆        Γ1, Γ2 ⊢ A∧B, ∆1, ∆2

The two formats have been called additive and multiplicative, respectively, by Girard. We recall that a sequent Γ ⊢ ∆ is made of two multisets of formulas (that is, we work modulo the so-called exchange rule, which says that Γ ⊢ ∆ entails any of its permutations; on the contrary, rejecting this rule leads to a complicated and still not very well-understood subject called non-commutative (linear) logic). The notation Γ1, Γ2 stands for the multiset union of Γ1 and Γ2.

Logicians did not bother about these different presentations, because they are equivalent. Yes, but modulo the contraction and weakening rules:

    Γ, A, A ⊢ ∆      Γ ⊢ A, A, ∆      Γ ⊢ ∆         Γ ⊢ ∆
    -----------      -----------      --------      --------
    Γ, A ⊢ ∆         Γ ⊢ A, ∆         Γ, A ⊢ ∆      Γ ⊢ A, ∆

Let us show this, say, for the right introduction rule. Let us first derive the multiplicative format from the additive one:

    Γ1 ⊢ A, ∆1              Γ2 ⊢ B, ∆2
    ====================    ====================
    Γ1, Γ2 ⊢ A, ∆1, ∆2      Γ1, Γ2 ⊢ B, ∆1, ∆2
    ------------------------------------------
    Γ1, Γ2 ⊢ A∧B, ∆1, ∆2

where the double line expresses multiple use of the weakening rules. For the converse direction we have:

    Γ ⊢ A, ∆    Γ ⊢ B, ∆
    --------------------
    Γ, Γ ⊢ A∧B, ∆, ∆
    ================
    Γ ⊢ A∧B, ∆

using contractions. If we remove contraction and weakening, then the presentations are no longer equivalent and hence define different connectives, ⊗ and &, respectively called "tensor" and "with". Indeed, you will see in section 6 that they are interpreted very differently. Moreover, the coherence space model (to which the curious reader may jump directly) enlightens the terminology multiplicative-additive, as the reader will check that the cardinal of (the interpretation of) A⊗B (resp. A&B) is the product (resp.
the sum) of the cardinals of (the interpretations of) A, B.

Similarly, we split "disjunction" in two connectives, O and ⊕, respectively called "par" and "plus". As usual in sequent calculus, the left rule for, say, ⊗ tells us by duality what the right rule for O is, and similarly for ⊕. This leads us to the following monolateral sequent presentation of the four connectives:

MULTIPLICATIVES

    ⊢ A, B, Γ       ⊢ A, Γ1    ⊢ B, Γ2
    ----------      -------------------
    ⊢ AOB, Γ        ⊢ A⊗B, Γ1, Γ2

ADDITIVES

    ⊢ A, Γ          ⊢ B, Γ          ⊢ A, Γ    ⊢ B, Γ
    ---------       ---------       ----------------
    ⊢ A⊕B, Γ        ⊢ A⊕B, Γ        ⊢ A&B, Γ

Considered as (associative and commutative) operations, these four connectives have units "true" and "false", each of which splits into a multiplicative and an additive one:

    O    ⊗    ⊕    &
    ⊥    1    0    ⊤

(As for the terminology and for mnemotechnicity, the names 1 and 0 correspond to the usual multiplicative and additive notation in groups.)

We shall synthesize the rules for the units from the requirement that the isomorphisms A⊗1 ≈ A, etc., should hold. In order to prove ⊢ A⊗1, Γ from ⊢ A, Γ we should have ⊢ 1. In order to prove ⊢ AO⊥, Γ from ⊢ A, Γ, we should have ⊢ A, ⊥, Γ as an intermediate step, which suggests the rule for ⊥ given below. We derive ⊢ A⊕0, Γ from ⊢ A, Γ by the (left) ⊕ rule, without any need for a (right) rule for 0. Finally, we need ⊢ ⊤, Γ to derive ⊢ A&⊤, Γ from ⊢ A, Γ:

    ⊢ A    ⊢ 1        ⊢ A, Γ          ⊢ A, Γ         ⊢ A, Γ    ⊢ ⊤, Γ
    -----------       ----------      ---------      ------------------
    ⊢ A⊗1             ⊢ A, ⊥, Γ       ⊢ A⊕0, Γ       ⊢ A&⊤, Γ
                      ----------
                      ⊢ AO⊥, Γ

The rules are thus:

UNITS

               ⊢ Γ
    -----      --------      no rule for 0      ---------
    ⊢ 1        ⊢ ⊥, Γ                           ⊢ ⊤, Γ

An even shorter way to synthesize the rules for the units is to formulate rules for n-ary versions of the four connectives, and then to take the degenerate case n = 0. The assumption of the n-ary O rule is ⊢ A1, ..., An, Γ, which degenerates to ⊢ Γ. For the tensor rule there are n assumptions and the conclusion is ⊢ A1⊗...⊗An, Γ, with Γ = Γ1, ..., Γn, and in the degenerate case we thus have no assumption and Γ = ∅. There are n ⊕ rules, and thus no rule in the degenerate case. Finally, the & rule has n assumptions, and thus no assumption in the degenerate case.

What about negation?
In classical logic, negation can be boiled down to negation on atomic formulas only, thanks to the De Morgan laws: ¬(A∧B) = (¬A)∨(¬B), from which the other De Morgan law follows, using ¬¬A = A. This works also for linear logic, where negation is written A⊥ ("A orthogonal", or "A perp"). But the logical operations being different, negation in linear logic is very different from that of classical logic. In classical logic, we have both A = ¬¬A and ⊢ A∨¬A (the latter being what constructivists criticized), while in linear logic we have A = A⊥⊥ and ⊢ AOA⊥, but we do not have ⊢ A⊕A⊥; and as a matter of fact linear logic retains the distinctive features that make intuitionistic logic so nice: confluence (of cut-elimination), and the disjunction property (for ⊕, see below).

Thanks to the De Morgan laws, we can dispense with negation as a connective, and consider A⊥ as an abbreviation for its De Morgan normal form, obtained by applying the following rules (where we have added the not-yet-introduced "!" and "?" for completeness):

    (A⊗B)⊥ → (A⊥)O(B⊥)      (AOB)⊥ → (A⊥)⊗(B⊥)      1⊥ → ⊥      ⊥⊥ → 1
    (A&B)⊥ → (A⊥)⊕(B⊥)      (A⊕B)⊥ → (A⊥)&(B⊥)      ⊤⊥ → 0      0⊥ → ⊤
    (!A)⊥ → ?(A⊥)           (?A)⊥ → !(A⊥)

(Notice that the dual under ⊥ of a multiplicative (resp. additive) connective is a multiplicative (resp. additive) connective.)

In the sequel, we shall freely use bilateral sequents in examples, but they can be seen as syntactic sugar for monolateral ones; for example, A1, A2 ⊢ B is a version of ⊢ A1⊥, A2⊥, B. Hence, thanks to duality, linear logic puts all formulas of a sequent on the same ground, in contrast to intuitionistic logic, where there is a clear distinction between input (Γ) and output (A) in a sequent Γ ⊢ A.

To complete the description of the multiplicative-additive fragment of linear logic, we need axioms, and the cut rule. The latter is not needed, in the sense that it can be eliminated (just as in classical or intuitionistic sequent calculus), but one can hardly use the system without cuts, which carry the intelligent part of proofs.
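The De Morgan rewriting above is entirely mechanical, which a small sketch can make concrete (the tuple encoding of formulas is mine, not the paper's): `dual(f)` pushes (·)⊥ down to the atoms.

```python
# Formulas: ("atom", "A"), ("neg", F) for F⊥ kept on atoms, binary
# ("tensor"|"par"|"with"|"plus", F, G), units "1", "bot", "0", "top",
# and modalities ("ofcourse"|"whynot", F) for ! and ?.

DUAL_BIN = {"tensor": "par", "par": "tensor", "with": "plus", "plus": "with"}
DUAL_UNIT = {"1": "bot", "bot": "1", "0": "top", "top": "0"}
DUAL_MOD = {"ofcourse": "whynot", "whynot": "ofcourse"}

def dual(f):
    """De Morgan normal form of f⊥."""
    if isinstance(f, str):                  # a unit
        return DUAL_UNIT[f]
    tag = f[0]
    if tag == "atom":
        return ("neg", f)                   # negation stays on atoms
    if tag == "neg":
        return f[1]                         # A⊥⊥ = A
    if tag in DUAL_MOD:
        return (DUAL_MOD[tag], dual(f[1]))  # (!A)⊥ = ?(A⊥)
    return (DUAL_BIN[tag], dual(f[1]), dual(f[2]))

A, B = ("atom", "A"), ("atom", "B")
# (A⊗B)⊥ = A⊥ O B⊥, (!A)⊥ = ?(A⊥), 1⊥ = ⊥
assert dual(("tensor", A, B)) == ("par", ("neg", A), ("neg", B))
assert dual(("ofcourse", A)) == ("whynot", ("neg", A))
assert dual("1") == "bot"
```

Note that `dual` maps a multiplicative to a multiplicative and an additive to an additive, as the parenthetical remark above observes.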
AXIOM           CUT

                ⊢ A, Γ1    ⊢ A⊥, Γ2
    --------    --------------------
    ⊢ A, A⊥     ⊢ Γ1, Γ2

(Note that the cut rule is given in a multiplicative style, i.e., it is quite similar to the tensor rule. That the axiom has this most economical format, as opposed to ⊢ A, A⊥, Γ, should come as no surprise, as the latter embodies an implicit weakening.)

As explained in the introduction, the main motivation for the removal of (weakening and) contraction is the control on normalization. Indeed, the reader can easily check that all affine λ-terms can be typed by the following rules:

                       Γ1 ⊢ M1 : A⊸B    Γ2 ⊢ M2 : A        Γ, x:A ⊢ M : B
    -------------      -----------------------------       ----------------
    x:A ⊢ x : A        Γ1, Γ2 ⊢ M1M2 : B                   Γ ⊢ λx.M : A⊸B

Notice the multiplicative style of the crucial typing rule for application. As usual, "A implies B" is "not A or B", and specifically here A⊸B = (A⊥)OB (⊸ reads "linearly implies"). This natural deduction style rule is indeed encoded in sequent calculus using the (multiplicative) cut rule:

                     Γ2 ⊢ A    ⊢ B⊥, B
                     ------------------
    Γ1 ⊢ A⊥OB        Γ2 ⊢ A⊗B⊥, B
    ------------------------------ (cut)
    Γ1, Γ2 ⊢ B

This leads us to the main conceptual novelty of linear logic: the understanding of formulas as resources, which are consumed through entailment: in the sequent A ⊢ B, the resource A is consumed in order to get B (think also of a chemical reaction). This is like in an abstract state machine, which changes state upon performing certain actions. As a matter of fact, we present an encoding of such a machine in section 4. With this reading in mind, one easily understands how bad contraction and weakening are. Suppose that B is some good, and A is a given amount of money, say 20 Euros. Then A, A ⊢ B reads as: "you can acquire B against 40 Euros". The contraction rule would say that you could acquire the same good for 20 Euros. Thus, contraction is not very good for the merchant. It is not good in chemistry either: in a reaction, in order to obtain 2 H2O you need exactly four H+ and two O2−. Another way to criticize the rule is by looking from conclusion to antecedents (proof-search): if you only have A to get B, your resource A cannot be magically duplicated, as the contraction rule would imply.
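The multiplicative style of the application rule, where the context is split between function and argument, can be illustrated by a small type-checker sketch (my own illustration, not from the paper): hypotheses are treated as resources that checking consumes, so MN may not use the same hypothesis in both M and N.

```python
# Types: an atomic type is a string, ("lolli", A, B) is A ⊸ B.
# Terms: variables are strings, ("lam", x, A, M) is λx:A.M, ("app", M, N) is MN.

def typecheck(ctx, t):
    """Return (type, leftover context); each hypothesis is usable
    at most once (affine discipline)."""
    ctx = dict(ctx)
    if isinstance(t, str):              # variable: consume the hypothesis
        ty = ctx.pop(t)                 # KeyError if absent or already used
        return ty, ctx
    if t[0] == "lam":
        _, x, a, m = t
        ctx[x] = a
        b, rest = typecheck(ctx, m)
        rest.pop(x, None)               # x may stay unused (weakening is fine)
        return ("lolli", a, b), rest
    # application: check N in the context *left over* by M, so the two
    # subterms split the context, exactly as Γ1, Γ2 split in the rule above
    _, m, n = t
    fty, rest = typecheck(ctx, m)
    aty, rest = typecheck(rest, n)
    assert fty[1] == aty, "argument type mismatch"
    return fty[2], rest

# λx:a.x gets type a ⊸ a
assert typecheck({}, ("lam", "x", "a", "x"))[0] == ("lolli", "a", "a")
# xx is rejected: the hypothesis x cannot be consumed twice
try:
    typecheck({"x": ("lolli", "a", "a")}, ("app", "x", "x"))
    assert False
except KeyError:
    pass
```

Threading the leftover context through the application is exactly the resource reading: what M consumes is no longer available to N.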
As for weakening, if you can buy B with A, as formalized by A ⊢ B, why would you, the buyer, spend an additional amount of money C to get B (C, A ⊢ B)? Weakening is not good for the client.

So far, we have insisted on a dichotomy between additive and multiplicative connectives. There is however another grouping of connectives, which was sitting there from the beginning, and which has even somehow been made explicit by the choice of symbols: ⊗ and ⊕ on one hand, and & and O on the other hand. Girard noticed that ⊗ distributes over ⊕ (like in arithmetic!), and that dually O distributes over &, while the other thinkable combinations do not distribute over each other (it is an instructive exercise to check this). Let us examine the proof of this distributivity law. We have to prove

    ⊢ (A⊗(B⊕C))⊥, (A⊗B)⊕(A⊗C)    and    ⊢ ((A⊗B)⊕(A⊗C))⊥, A⊗(B⊕C)

Both proofs begin by decomposing the O's and the &'s. For the first sequent, this goes as follows:

    ⊢ A⊥, B⊥, (A⊗B)⊕(A⊗C)    ⊢ A⊥, C⊥, (A⊗B)⊕(A⊗C)
    ------------------------------------------------
    ⊢ A⊥, (B⊥&C⊥), (A⊗B)⊕(A⊗C)
    ---------------------------
    ⊢ A⊥O(B⊥&C⊥), (A⊗B)⊕(A⊗C)

and we are left to prove the subgoals ⊢ A⊥, B⊥, (A⊗B)⊕(A⊗C) and ⊢ A⊥, C⊥, (A⊗B)⊕(A⊗C). Similarly, the search for a proof of the second sequent yields ⊢ A⊥, B⊥, A⊗(B⊕C) and ⊢ A⊥, C⊥, A⊗(B⊕C) as subgoals. Before we proceed, note an important feature of & and O: in each of the two proof constructions, the two subgoals together are actually equivalent to the original goal: these two connectives are reversible, an affirmation that we shall make formal just below. In contrast, in order to complete the proof of any of the subgoals, we have to make (irreversible) decisions. We just give the proof of ⊢ A⊥, B⊥, (A⊗B)⊕(A⊗C); the others are similar:

    ⊢ A⊥, A    ⊢ B⊥, B
    -------------------
    ⊢ A⊥, B⊥, (A⊗B)
    ----------------------
    ⊢ A⊥, B⊥, (A⊗B)⊕(A⊗C)

The two decisions which we have made in this proof are: to choose the left ⊕ rule, and then to split A⊥, B⊥ in the way we did, assigning the resource A⊥ for the proof of A and B⊥ for the proof of B.

The opposition ⊗-⊕ versus O-& is actually fundamental, and has played an important role in the genesis of ludics (see part II).
The following table summarizes its significance:

    ⊗, ⊕                 O, &
    -----------------    ---------------
    Irreversible         Reversible
    Non-deterministic    Deterministic
    Active               Passive
    Player               Opponent
    Internal choice      External choice
    Positive             Negative

We first show that the rules for O and & are reversible, that is, that the conclusions of the rules hold if and only if their antecedents hold, as follows:

                 ⊢ A⊥, A    ⊢ B⊥, B                        ⊢ A⊥, A
                 -------------------                       -----------
    ⊢ Γ, AOB     ⊢ A⊥⊗B⊥, A, B               ⊢ A&B, Γ      ⊢ A⊥⊕B⊥, A
    --------------------------- (cut)        ------------------------ (cut)
    ⊢ Γ, A, B                                ⊢ A, Γ

Hence we lose nothing by replacing ⊢ Γ, AOB with ⊢ Γ, A, B in proof-search, and moreover we can gain something, since the liberated immediate subformulas A and B can then be sent to different antecedents when decomposing a tensor (as in the proof of ⊢ AOB, A⊥⊗B⊥). The O and & rules are moreover deterministic in the sense that, once the formula to be decomposed in the sequent is fixed, there is no choice on how to do it: for the O rule, just dissolve the O, and for the & rule, prove the two subgoals, which are the same sequent up to the replacement of the formula by one of its immediate subformulas. As we shall see in part II, provided a certain proof-search discipline is respected, which can be phrased as "apply reversible rules in priority", and if maximal groups of rules concerning reversible (resp. irreversible) connectives are grouped and considered as a single synthetic rule, then sequents always contain at most one formula whose topmost connective is reversible, and therefore even the choice of which formula of the sequent to decompose is deterministic. This absence of initiative can be termed passivity.

Let us examine a contrario what makes the other connectives active and irreversible. We have already hinted at this when we proved the distributivity law. Each ⊕ rule, read bottom-up, chooses one of A or B, while each instance of the ⊗ rule makes a choice of how to split the (rest of the) sequent in two parts. The choice of which formula of the sequent to decompose is also non-deterministic. In other words, the two connectives are associated with some actions that must be taken.
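The discipline "apply reversible rules in priority" can be turned into a naive proof-search procedure; here is a hypothetical sketch for unit-free MALL (my own encoding, with no claim of efficiency): the O and & rules are applied deterministically first, and only then do we backtrack over the ⊕ choices and ⊗ context splits.

```python
from itertools import combinations

# Formulas: ("+", "a") is the atom a, ("-", "a") is a⊥,
# and ("par"|"tensor"|"with"|"plus", F, G) are the binary connectives.

def provable(seq):
    seq = list(seq)
    # reversible (negative) phase: decompose any O or & without choice
    for i, f in enumerate(seq):
        rest = seq[:i] + seq[i+1:]
        if f[0] == "par":                  # ⊢ Γ, AOB  becomes  ⊢ Γ, A, B
            return provable(rest + [f[1], f[2]])
        if f[0] == "with":                 # ⊢ Γ, A&B: prove both subgoals
            return provable(rest + [f[1]]) and provable(rest + [f[2]])
    # axiom: exactly an atom and its dual
    if len(seq) == 2 and {seq[0][0], seq[1][0]} == {"+", "-"} \
            and seq[0][1] == seq[1][1]:
        return True
    # irreversible (positive) phase: try every ⊕ choice and every ⊗ split
    for i, f in enumerate(seq):
        rest = seq[:i] + seq[i+1:]
        if f[0] == "plus":
            if provable(rest + [f[1]]) or provable(rest + [f[2]]):
                return True
        if f[0] == "tensor":
            idx = range(len(rest))
            for k in range(len(rest) + 1):
                for left in combinations(idx, k):
                    g1 = [rest[j] for j in left] + [f[1]]
                    g2 = [rest[j] for j in idx if j not in left] + [f[2]]
                    if provable(g1) and provable(g2):
                        return True
    return False

a, na, b, nb = ("+", "a"), ("-", "a"), ("+", "b"), ("-", "b")
# ⊢ a⊥Ob⊥, a⊗b is provable; ⊢ a, a is not
assert provable([("par", na, nb), ("tensor", a, b)])
assert not provable([a, a])
```

The backtracking over context splits in the ⊗ case is exactly the non-deterministic "action" attributed to the positive connectives above.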
The next pair in our list places us in the tradition of the dialogue game interpretation of proofs, which goes back to Gentzen. A proof is the work of one person, the Player (think of a student), which another person, the Opponent (think of an examiner), can check. Checking goes by tests, or plays. The Opponent challenges the conclusion of the proof, to which the Player has to answer by displaying the last rule he used. The Opponent then chooses one of the antecedents of the rule, and challenges the Player to justify that particular antecedent, to which the Player answers by displaying the last rule used to show this antecedent, etc. The play results in disclosing, or exploring, a part of the proof. We shall come back to this game interpretation in part II. Here, we shall content ourselves with an illustration, Yves Lafont's menu:

    Menu (price 17 Euros)

    Quiche or Salad
    Chicken or Fish
    Banana or "Surprise du Chef" (*)

    (*) either "Profiteroles" or "Tarte Tatin"

which is expressed by the sequent

    17E ⊢ (Q&S) ⊗ (C&F) ⊗ (B&(P⊕T))

We can recognize here some of the meanings that we already discussed. The right-hand side of the sequent is what you can get for 17 Euros. The tensor tells that for this price you get one entrée, one dish and one dessert. The difference between & and ⊕ is a bit more subtle, and the game interpretation is helpful here. So let us start again from the beginning, considering a play between the restaurant manager (the Player) and the customer (the Opponent). It is the Player's responsibility to split the 17E into three parts, corresponding to the cost of the three parts of the meal. Maybe this is done as follows:

    5E ⊢ Q&S    8E ⊢ C&F    4E ⊢ B&(P⊕T)
    ------------------------------------
    17E ⊢ (Q&S) ⊗ (C&F) ⊗ (B&(P⊕T))

Now let the Opponent challenge 5E ⊢ Q&S:

    5E ⊢ Q    5E ⊢ S
    ----------------
    5E ⊢ Q&S

which reads as: both Quiche and Salad are available to the customer, but he can get only one, and it is his choice to pick one of the antecedents and to order, say, a Quiche. Thus the additive conjunction can be understood as a
disjunction embodying a notion of external choice (remember that in our example the customer is the Opponent, or the context, or the environment). Let us now analyse a proof of 4E ⊢ B&(P⊕T):

              4E ⊢ T
              ---------
    4E ⊢ B    4E ⊢ P⊕T
    --------------------
    4E ⊢ B&(P⊕T)

Suppose that the Opponent chooses the Surprise. Then it is the Player's turn, who justifies 4E ⊢ P⊕T using the right ⊕ rule. So the Opponent will get a Tarte Tatin, but the choice was in the Player's hands. Thus ⊕ has an associated meaning of internal choice.

In summary, two forms of choice, external and internal, are modelled by & and ⊕, respectively. In the case of ⊕, whether A or B is chosen is controlled by the rule, that is, by the Player. In the case of &, the choice between A and B is a choice of one of the antecedents of the rule, and is in the hands of the Opponent.

Actually, our image becomes even more accurate if we replace the customer with an inspector (in summer, many restaurants propose unreasonable prices to the tourists...). The inspector will not consume the whole menu; he will just check (his choice!) whether what is offered, say for the entrée, is correct (not over-priced, fresh enough...). Another inspector, or the same inspector, may want to do another experiment later, checking this time on the dessert: using this sharper personification, the game as explained above is more fully reflected.

All these oppositions confirm a fundamental polarity: by convention, we shall term & and O as negative, and ⊗ and ⊕ as positive.

3 Full linear logic

In order to recover the power of the full λ-calculus (or of intuitionistic logic), we reintroduce the structural rules, but only on formulas marked with a modality (actually two: "!" and its dual "?"):

    ⊢ ?A, ?A, Γ       ⊢ Γ
    -----------       ---------
    ⊢ ?A, Γ           ⊢ ?A, Γ

The rules are easier to explain in a bilateral format. If you want to prove !A, Γ ⊢ B, then contraction allows you to give yourself the freedom of using the assumption !A twice, or more: if you have been able to prove !A, ..., !A, Γ ⊢ B, then you can conclude by using repeated contractions.
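This resource reading of "!" admits a toy operational sketch (entirely my own, not from the paper): a linear hypothesis is consumed by use, while a !-marked one may be used any number of times (contraction) or not at all (weakening).

```python
class Context:
    """Hypotheses as resources: linear ones are used at most once,
    banged (!-marked) ones are freely reusable."""

    def __init__(self):
        self.linear = {}     # name -> 1 if still available, 0 if consumed
        self.banged = set()  # !-marked hypotheses

    def assume(self, name, banged=False):
        if banged:
            self.banged.add(name)
        else:
            self.linear[name] = 1

    def use(self, name):
        if name in self.banged:        # contraction: duplicate at will
            return True
        if self.linear.get(name, 0) == 1:
            self.linear[name] = 0      # linear: consumed by this use
            return True
        return False                   # absent, or already consumed

ctx = Context()
ctx.assume("A")                 # linear hypothesis A
ctx.assume("B", banged=True)    # hypothesis !B
assert ctx.use("A") and not ctx.use("A")   # A is usable once only
assert ctx.use("B") and ctx.use("B")       # !B is usable repeatedly
```

Leaving a hypothesis untouched is always allowed here, which for !-marked formulas is the weakening discussed next.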
Similarly, weakening allows you to forget assumptions of the form !A: having a proof of Γ ⊢ B is enough to get !A, Γ ⊢ B. The connective ! is a connective just as the others, and thus has its left and right introduction rules (?Γ denotes a multiset of formulas each of the form ?A):

    Dereliction      Promotion

    ⊢ Γ, A           ⊢ ?Γ, A
    ---------        ----------
    ⊢ Γ, ?A          ⊢ ?Γ, !A

Our description of linear logic (LL) is now complete! The two connectives ! and ? are called the EXPONENTIALS. A variation on the promotion rule, inspired by categorical considerations (see section 7), and also by considerations of the geometry of interaction (see part II), is:

    ⊢ Γ, A           ⊢ ??A, Γ
    ----------       ----------
    ⊢ ?Γ, !A         ⊢ ?A, Γ

These two rules together are equivalent to the promotion rule. We just show here how the second rule can be derived:

    ⊢ !A⊥, ?A
    -----------
    ⊢ !!A⊥, ?A     ⊢ ??A, Γ
    ------------------------ (cut)
    ⊢ ?A, Γ

For intuitionistic or classical sequent calculus, the key result to establish in order to be able to make some use of it is the cut-elimination theorem. It holds here too, but we shall not prove it here (this is done in full detail, say, in the appendix of [24]); we shall rather content ourselves with giving only a few cut-elimination rules, without conviction, since a much better representation of proofs (proof nets) will be given in part II. It is good that the reader sees what he gains, though! A cut against an axiom disappears:

         Π
         .
         .
    ⊢ A, A⊥    ⊢ A⊥, Γ              Π
    ------------------- (cut)  −→   .
    ⊢ A⊥, Γ                         ⊢ A⊥, Γ

A cut between a contraction and a promotion duplicates the promoted proof:

    Π                     Π′
    .                     .
    ⊢ Γ1, ?B⊥, ?B⊥        ⊢ ?Γ2, B
    --------------        ---------
    ⊢ Γ1, ?B⊥             ⊢ ?Γ2, !B
    ------------------------------- (cut)
    ⊢ Γ1, ?Γ2

reduces to

                        Π′                Π′
                        .                 .
    Π                   ⊢ ?Γ2, B          ⊢ ?Γ2, B
    .                   ---------         ---------
    ⊢ Γ1, ?B⊥, ?B⊥      ⊢ ?Γ2, !B         ⊢ ?Γ2, !B
    ------------------------------ (cut)
    ⊢ Γ1, ?B⊥, ?Γ2
    ------------------------------------------ (cut)
    ⊢ Γ1, ?Γ2, ?Γ2
    ==============
    ⊢ Γ1, ?Γ2

where the double line stands for repeated contractions. The second rule illustrates duplication of proofs, while the first one is the reassuring one: cuts may vanish!

***** TO BE ADDED: COMMUTATIVE RULES *****

We can now give a translation of the (simply-typed) λ-calculus:

                        Γ, x:A ⊢ M : B        Γ ⊢ M : A→B    Γ ⊢ N : A
    --------------      ----------------      -------------------------
    Γ, x:A ⊢ x : A      Γ ⊢ λx.M : A→B        Γ ⊢ MN : B

into LL proofs. This translation takes (a proof of) a judgement Γ ⊢ M : A and turns it into a proof [[Γ ⊢ M : A]] of ⊢ ?(Γ∗)⊥, A, where ∗ applies to all formulas of Γ, (hereditarily) turning all B → C's into ?B⊥OC.
Formally:

    A∗ = A   (A atomic)
    (B → C)∗ = ?(B∗)⊥ O C∗
    ?(Γ∗)⊥ = {?(A∗)⊥ | A ∈ Γ}

The reason why Γ ⊢ A becomes ⊢ ?Γ⊥, A (or !Γ ⊢ A) should be clear: a variable can have several occurrences in a term, whence the need of a contraction rule to type it (using multiplicative and exponential connectives only). The translation is as follows (we omit the ∗ for better readability).

Variable:

                             ⊢ A⊥, A
                             =============== (weakenings)
                             ⊢ ?Γ⊥, A⊥, A
                             --------------- (dereliction)
    [[Γ, x:A ⊢ x : A]]  =    ⊢ ?Γ⊥, ?A⊥, A

Abstraction:

                             [[Γ, x:A ⊢ M : B]]
                             .
                             ⊢ ?Γ⊥, ?A⊥, B
                             ------------------
    [[Γ ⊢ λx.M : A→B]]  =    ⊢ ?Γ⊥, (?A⊥OB)
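The (·)∗ translation itself is a one-line recursion, which a small sketch makes explicit (my tuple encoding, reusing the conventions of the earlier snippets, is not from the paper): B → C becomes ?((B∗)⊥) O C∗, hereditarily.

```python
# Types: an atomic type is a string; ("arrow", B, C) is the intuitionistic B → C.
# LL formulas: ("par", F, G), ("whynot", F) for ?F, ("neg", F) for F⊥ on the spot.

def star(t):
    """Girard translation of a simple type into a linear logic formula."""
    if isinstance(t, str):      # atomic: A∗ = A
        return t
    _, b, c = t                 # (B → C)∗ = ?(B∗)⊥ O C∗
    return ("par", ("whynot", ("neg", star(b))), star(c))

# (a → a)∗ = ?(a⊥) O a
assert star(("arrow", "a", "a")) == ("par", ("whynot", ("neg", "a")), "a")
# nested arrows are translated hereditarily: ((a → b) → c)∗ = ?((a → b)∗)⊥ O c
inner = ("par", ("whynot", ("neg", "a")), "b")
assert star(("arrow", ("arrow", "a", "b"), "c")) == \
       ("par", ("whynot", ("neg", inner)), "c")
```

Since ?(B∗)⊥ O C∗ is (!B∗) ⊸ C∗, this is precisely the decomposition A → B = (!A) ⊸ B that, as recalled in the introduction, Girard read off the coherence space model.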
