Università degli Studi di Bologna
DEIS

Using Abductive Logic Agents for Legal Justification

A. Ciampolini, P. Torroni
DEIS, Università di Bologna
Viale Risorgimento 2, 40136 Bologna, Italy
{aciampolini, ptorroni}@deis.unibo.it

August 7, 2002

DEIS Technical Report no. DEIS-LIA-02-002
LIA Series no. 55

Abstract. In this paper we present a novel approach to legal justification based on an abductive multi-agent system. The main objective of legal justification is to find plausible explanations for given pieces of evidence, for instance in a crime trial. In our approach, justification is carried out through the collaborative abductive reasoning of agents, which operate within a logic-based architecture called ALIAS. We apply the proposed approach to an example borrowed from the literature, and show that using ALIAS agents for legal justification makes it possible not only to produce plausible explanations for observed evidence, but also to detect collusion or inconsistencies among trial characters, and to take into account the credibility of the persons (e.g., witnesses) involved in the trial.

Keywords: multi-agent systems, logic-based reasoning, abduction, legal justification

1 Introduction

In recent years, interest in intelligent agents has grown considerably, from both a theoretical and a practical point of view [19]. The agent paradigm is well suited to representing applications that merge features inherited from the distributed systems area (such as locality, distribution, interaction, and mobility) with features from artificial intelligence (such as reasoning and adaptation). In particular, intelligent agents require both reasoning capabilities and social abilities, which make interaction and coordination among agents possible.
Abduction has been widely recognized as a powerful mechanism for hypothetical reasoning in the presence of incomplete knowledge [6, 9, 15]: it is generally understood as reasoning from effects to causes, and it also captures other important issues such as reasoning with defaults and beliefs (see for instance [17, 20]). For these reasons, abduction has already been extensively applied to legal reasoning, especially as an explanatory inference technique in legal trials [27, 10].

In this paper we present a novel approach to legal justification based on an abductive multi-agent system. The main objective of legal justification is to find plausible explanations for given pieces of evidence or observations, for instance in a crime trial. The process of legal justification is supported by abductive reasoning: in fact, abduction is particularly useful when the knowledge base representing the problem domain is either incomplete or multiple. This is the case, for instance, of legal trials, where some pieces of knowledge about the crime are often missing; moreover, multiple knowledge is available in the case of several witnesses, some of whom may provide a different version of the crime, so that inconsistencies can arise. We deal with legal justification by means of ALIAS, an agent architecture whose agents are logic-based and capable of reasoning by abduction. ALIAS agents can be dynamically grouped, and their reasoning can be coordinated by means of proper operators (namely, collaboration and competition).

The multi-agent approach we propose relies on several agents, each representing a distinct actor in the trial (e.g., the detective, suspects, witnesses, etc.). Once each trial actor is represented by a single abductive agent, we can dynamically group (some of) the agents and coordinate them for the explanation of goals.
As in traditional abductive reasoning, the system starts from a set S of observed pieces of evidence and provides one or more sets of hypotheses as possible explanations for S, using the knowledge of a group of agents. In addition, the dynamic composition of the agents' knowledge bases makes it possible to perform additional tasks that could help both the detective and the judges in solving the case. For instance, it is possible to check the coherence of distinct witnesses, and thus possibly to detect collusion or false testimony.

It is worth noticing that, in realistic cases, finding explanations for a given piece of evidence may fail for various reasons: for instance, not all the available pieces of evidence may be reliable, depending on their source. We face this problem by representing the reliability of each dramatis persona, or even of single pieces of knowledge (e.g., a passage in a witness interrogation), by means of proper hypotheses. In this way we obtain justifications labelled with reliability hypotheses, which can further help detectives and juries in understanding the behaviour and role of the actors involved in the trial. We tested the system on the Peyer case, a well-known trial that has been adopted as a test bed by previous abduction-based approaches [27, 10, 22].

The paper is organized as follows. Section 2 gives some preliminary notions on abduction and presents ALIAS. In Section 3 we show how to apply abduction and agents to the Peyer case, and give some general guidelines to be followed when using abductive agents in legal trials. A discussion of the related literature is presented in Section 4. Conclusions follow.

2 Logic-based abductive reasoning and Abductive Logic Agents

In this section we give some preliminary notions on abductive logic programming and multi-agent systems. In particular, we first introduce logic-based abduction as a tool for hypothetical reasoning.
We then present ALIAS, the multi-agent architecture based on abductive logic agents that we use in the following to model the process of legal justification.

2.1 Abductive logic programming

An assumption often made in multi-agent systems is that individuals only have partial knowledge of the environment they are situated in. This is particularly true in legal application domains, if we want to use agents to model physical persons or different jurisdictional institutions, as we will see in the sequel. Therefore, in most knowledge-intensive applications of agents, it is often the case that an intelligent agent requires some sort of guess about a computation which cannot be performed locally, since the local knowledge is incomplete. In the legal setting, the incomplete knowledge could be, for instance, the information that each person involved in a legal trial (the detective, a witness, or a judge) has about the crime. In a logic programming perspective, we could rephrase this by saying that a guess is needed because the agent has a goal that cannot be solved locally. In this situation, the Closed World Assumption [4] usually adopted in logic programming can no longer be made, and some form of open or abductive reasoning has to be considered. Different forms of abduction, although not in logic programming, have already been used for legal justification by several authors [27, 10].

Abduction is generally understood as reasoning from effects to causes, and it captures other important issues, such as reasoning with defaults and beliefs (see for instance [17, 20]). Incomplete knowledge is handled by labelling some pieces of information as abducibles, i.e., possible hypotheses that can be assumed, provided that they are consistent with the current knowledge base. In the context of intelligent agents, abduction can be regarded as a way of dynamically enlarging the agent's knowledge with abducible atoms.
In the case of a single agent that reasons abductively by means of a proof-procedure, the proof-procedure will guarantee that the abduced hypotheses that enlarge the agent's knowledge base are consistent with its knowledge. To this purpose, an agent's knowledge base is equipped with some integrity constraints, which must never be violated.

Before giving a formal definition of abductive logic programs, we briefly recall some basic concepts about logic programs.¹ A logic program is a set of clauses

A ← L1, ..., Ln

where A is an atom and each Li is a literal, i.e., an atom B or its negation not B. An atom is an n-ary predicate symbol followed by a list of n terms, p(t1, ..., tn), or true, or false. A literal is an atom or the negation of an atom. A term is a constant, a variable, or a compound term. Note that not represents default negation, or negation as failure. All variables in A, L1, ..., Ln are assumed to be universally quantified from the outside. A is referred to as the head of the clause and L1, ..., Ln as its body.

Definition 1 (abductive logic program) An abductive logic program (ALP) is a triple ⟨P, A, IC⟩ where P is a logic program, possibly with abducible atoms in clause bodies; A is a set of abducible predicates, i.e., open predicates which can be used to form explaining sentences; IC is a set of integrity constraints.

Given an abductive logic program ⟨P, A, IC⟩ and a sentence G, the purpose of abduction is to find a (possibly minimal) set of abducible atoms δ which together with P "entails" G, and such that P ∪ δ "satisfies" IC. Depending on the chosen semantics for the given logic program [14], various notions of "entailment" are possible, such that, for every ground literal in the language, either the literal or its complement is entailed, and such that no literal and its complement are entailed at the same time.
In the following, we will use this notation for the concept of "abductive entailment":

⟨P, A, IC⟩ ⊢_δ^abd G

and its declarative semantics can be formalized as follows:

Definition 2 (declarative semantics of an ALP) Let ⟨P, A, IC⟩ be an abductive logic program. Then, given a goal G, the declarative semantics of ⟨P, A, IC⟩ ⊢_δ^abd G is defined as follows:

P ∪ δ ⊨ G
P ∪ δ "satisfies" IC
δ ⊆ A

where ⊨ is the entailment symbol.

¹ From now on, we will use logic programming notation as defined in [18]. Moreover, in this work we will adopt the Edinburgh notation for standard Prolog, where variable names start with an upper-case letter or '_', and constants start with a lower-case letter.

There are also different notions of "integrity constraint satisfaction" in the literature, and different proof-procedures / semantics adopt different such notions. Moreover, there is no unique syntax for integrity constraints: depending on the adopted proof-procedure, they can be written, for example, as denials, as in the case of the Kakas-Mancarella proof-procedure, or as forward rules (implications), as in the IFF proof-procedure. All proof-procedures require that each integrity constraint contain at least one abducible, and that abducible predicates not be defined, i.e., that no clause of the ALP have an abducible in its head. Otherwise the abducible could be derived by deduction, whereas we consider abducibles as open predicates that can be assumed true or false, depending on the goal to solve. It is worth noticing that, given a program P possibly containing definitions of abducible predicates, P can be syntactically transformed, by means of auxiliary predicates, into a semantically equivalent program where no abducible is defined. Such a transformation is shown in [14]. The following example shows a simple abductive logic program, written according to the syntax of integrity constraints required by the Kakas-Mancarella proof-procedure.
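As an illustration, the three conditions of Definition 2 can be checked directly in the propositional, definite case. The sketch below is ours, not part of the formalism (the definition itself leaves the notions of entailment and constraint satisfaction parametric): clauses are (head, body) pairs, and integrity constraints are denials encoded as sets of atoms.

```python
def least_model(clauses, delta):
    """Least Herbrand model of P ∪ delta, computed by forward chaining."""
    model = set(delta)
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

def abductively_entails(P, A, IC, delta, goal):
    """Check <P,A,IC> |-_delta G: delta ⊆ A, P ∪ delta |= G, and IC
    satisfied (no denial has all of its atoms true in the least model)."""
    if not set(delta) <= set(A):
        return False
    model = least_model(P, delta)
    return goal in model and all(not ic <= model for ic in IC)
```

For instance, with P = [("g", ["h1"]), ("g", ["h2"])], abducibles {"h1", "h2"} and the denial {"h1", "h2"}, the set {"h1"} abductively entails "g", while {"h1", "h2"} violates the constraint.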
Example 1 Let us consider the following program (inspired by the Peyer case, described in Section 3), expressing two possible explanations for the retrieval of the body of Cara Knott in the Peyer trial.² In this example the atoms peyer_killed_knott and someone_else_killed_knott are abducible predicates:

found_knotts_body ← peyer_killed_knott.
found_knotts_body ← someone_else_killed_knott.

with integrity constraint:

← someone_else_killed_knott, peyer_killed_knott.

In order to satisfy this constraint, the predicates someone_else_killed_knott and peyer_killed_knott cannot both be assumed true at the same time. The observation found_knotts_body is therefore explained by two (minimal) sets of sentences, δ1 and δ2:

δ1 = {someone_else_killed_knott, not_peyer_killed_knott}
δ2 = {peyer_killed_knott, not_someone_else_killed_knott}

According to [9], negation as default, possibly occurring in clause bodies, can be recovered into abduction by replacing negated literals of the form not a with a new positive, abducible atom not_a and by adding the integrity constraint ← a, not_a to IC. The natural syntactic correspondence between a standard atom and its negation by default is given by the following notion of complement: the complement of a literal l is

α        if l = not α
not α    otherwise

where α is an atom. All negated literals in an ALP are considered abducibles.

The integrity constraints used for handling negation as default, like the constraint ← p, not_p, are considered trivial; we will leave them implicit and suppose that they are replicated in each agent's knowledge base.

We will now briefly introduce the Kakas-Mancarella proof-procedure for abductive derivation, which implements the declarative semantics defined above. This is not the only abductive proof-procedure defined in the logic programming literature [11, 7, 5], but it is the one adopted by ALIAS agents.

² "Peyer" is the name of the main defendant in the trial.
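The negation-as-default transformation of [9] described above is purely syntactic, and can be sketched as follows (a minimal illustration of ours; encoding literals as strings with a not_ prefix is an assumption of the sketch, not part of the formalism):

```python
def complement(literal):
    """The syntactic complement: not_a <-> a."""
    return literal[4:] if literal.startswith("not_") else "not_" + literal

def recover_negation(clauses):
    """Replace 'not a' body literals by positive abducible 'not_a' atoms,
    collecting the trivial integrity constraints ← a, not_a as denials."""
    new_clauses, ics = [], []
    for head, body in clauses:
        new_body = []
        for lit in body:
            if lit.startswith("not "):
                atom = lit[4:]
                new_body.append("not_" + atom)
                ics.append({atom, "not_" + atom})
            else:
                new_body.append(lit)
        new_clauses.append((head, new_body))
    return new_clauses, ics
```

For example, the clause p ← not q, r becomes p ← not_q, r together with the trivial denial ← q, not_q.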
2.2 The Kakas-Mancarella proof-procedure

The Kakas-Mancarella (KM, for short) proof-procedure [16] is an extension of the proof-procedure for ordinary logic programming proposed in [9]. Both procedures extend the basic resolution mechanism adopted in SLD and SLDNF in ordinary logic programming by introducing the notions of abductive and consistency derivations.

Intuitively, an abductive derivation is the usual logic programming derivation, suitably extended in order to consider abducibles. When an abducible atom h is encountered during this derivation, it is assumed, provided this is consistent. The consistency check of a hypothesis then starts the second kind of derivation. The consistency derivation verifies that, when the hypothesis h is assumed and added to the current set of hypotheses, every integrity constraint containing h fails (i.e., the bodies of all the integrity constraints containing h are false). During this latter procedure, when an abducible L is encountered, in order to prove its failure an abductive derivation for its complement is attempted. In other words, as we said before, an abductive derivation computes a set of hypotheses "entailing" (with respect to the given logic program) a given goal/query/observation, starting from a given set of initial hypotheses. In order to compute this set and guarantee that the integrity constraints are "satisfied", an abductive derivation may require subsidiary consistency derivations, which in turn may require additional abductive derivations to ensure integrity constraint "satisfaction".

The only detail of the KM that we discuss here concerns the syntax of abductive logic programs. In its original formulation, the KM requires that P be a propositional logic program, expressed in the form of ground clauses. The integrity constraints are expressed in the form of denials of atoms, as in Example 1; at least one of these atoms must be abducible.
Integrity constraint satisfaction in KM requires that at least one atom in the body of an integrity constraint be false. For instance, given the constraint

← a, b, c.

and the set {a, b} of abducibles, if we want to assume a to be true, we must either assume b false (by abducing not_b) or prove that c is false. Both ways may require a deductive derivation in P, and possibly some more consistency steps. This process is iterated until we find a consistent and "stable" set of abducibles δ, in the sense that further consistency steps do not enlarge δ. The KM proof-procedure can also be adopted with first-order logic programs (not only propositional ones): in that case, all variables in the body of an integrity constraint are assumed to be universally quantified from the outside. The KM proof-procedure has been proven sound [1], for some specific notions of "entailment" and "satisfaction", with respect to the semantics defined in [8], and weakly complete for the 3-valued stable model semantics of [1], as well as for the argumentation semantics defined in [28].

2.3 Abductive logic-based multi-agent systems: the ALIAS architecture

The agent metaphor is widely used in distributed artificial intelligence, although no real agreement has ever been reached on the definition of agent. In fact, the term 'agent' has been used to describe a broad range of computational entities, ranging from simple systems to very large ones [19]. In our work, we use the agent metaphor as a model of a real-world legal case, and we specify the agent knowledge by using the logic programming paradigm [29], and in particular a logic language. We define here the concept of agent system from a logic programming perspective.

Definition 3 (agent system) An agent system is a finite set A, where each x ∈ A is a ground term representing the name of an agent, equipped with a knowledge base K(x). Each K(x) includes an abductive logic program.
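The interleaving of abductive and consistency derivations on the ← a, b, c example above can be sketched in a heavily simplified propositional form. This is a toy version of ours, not the actual KM procedure: there are no program clauses, c is modelled as a fact of P, and the trivial denials for default literals are listed explicitly, so assuming a forces the abduction of not_b.

```python
FACTS = {"c"}                               # ground facts of P
ABDUCIBLES = {"a", "b", "not_a", "not_b"}
# the constraint from the text, plus the trivial denials for defaults
ICS = [{"a", "b", "c"}, {"a", "not_a"}, {"b", "not_b"}]

def comp(l):
    """Complement of a default literal."""
    return l[4:] if l.startswith("not_") else "not_" + l

def assume(h, delta):
    """Abductive step: add abducible h to delta, keeping every denial
    that contains h failed. Returns the enlarged set, or None."""
    if h in delta:
        return delta
    if comp(h) in delta:
        return None
    new = set(delta) | {h}
    for ic in ICS:
        if h in ic:
            new = falsify_one(ic - {h}, new)
            if new is None:
                return None
    return new

def falsify_one(atoms, delta):
    """Consistency step: make some atom of a denial body false,
    possibly by abducing the complement of an abducible."""
    for a in sorted(atoms):
        if a in FACTS or a in delta:
            continue                        # this atom is true; try another
        if a in ABDUCIBLES:
            attempt = assume(comp(a), delta)
            if attempt is not None:
                return attempt
        else:
            return delta                    # not derivable, hence false
    return None
```

Here assume("a", set()) yields {"a", "not_b"}: since c is a fact, the denial ← a, b, c can only fail by abducing not_b, which in turn triggers a consistency step for the trivial denial ← b, not_b.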
We assume that, in an agent system, the agents share a common language. In particular, in ALIAS, the individuals have separate knowledge bases but refer to a common set of abducibles A. Each ALIAS agent may thus autonomously reason on its own knowledge base and find abductive explanations for submitted goals. Furthermore, as far as conflicting agent beliefs and goals are concerned, we propose a technique aimed at maintaining consistency in an environment made of multiple individuals, and at aiding agent reasoning through possible conflicts of beliefs. To this purpose, abductive logic agents can dynamically join into groups, or bunches, with the purpose, for instance, of finding the solution of a given goal in a collaborative way. In this perspective, although the sets of program clauses and integrity constraints may differ from agent to agent, the set of abducible predicates (default predicates included) is the same for all the agents in a bunch. This implies that, when proving a given goal, if an agent A assumes a new hypothesis h, all the arguing agents (i.e., the agents belonging to the same bunch) must check the consistency of h with their own integrity constraints. These checks may raise new hypotheses, whose consistency within the bunch has to be recursively checked. Therefore, in ALIAS, the abductive explanation of a goal within a bunch of agents is a set of abduced hypotheses agreed upon by all agents in the bunch. To this purpose, the current ALIAS implementation provides mechanisms to support agent bunches, local abduction, and global consistency checks.
As far as local abduction is concerned, the current implementation adopts the KM proof-procedure.³ In order to guarantee the consistency of abduced hypotheses within a bunch, we adopt a two-phase algorithm, which first collects the results of local⁴ abduction processes by running the KM proof-procedure and then, depending on the chosen coordination policy, checks their consistency, obtaining, in case of success, a consistent set of hypotheses ∆. The following example shows how abduction is performed within a bunch of two agents.

Example 2 Let us consider again Example 1, where all the knowledge needed to find an explanation for the observation found_knotts_body is enclosed in a single agent, which we will call A0. Now, let us suppose that another agent, say A1, has additional knowledge about the same problem. In particular, let A1's knowledge base contain the fact:

peyer_has_alibi.

and the following integrity constraint:

← peyer_killed_knott, peyer_has_alibi.

In order for the two agents to communicate the results of their reasoning, expressed in the form of conjunctions of abducibles, we assume that they share the same set of abducibles. Let this set be

A = {someone_else_killed_knott, peyer_killed_knott}

plus all negated literals. The two agents therefore form a bunch together, B = {A0, A1}.

In order to demonstrate the goal (observation) :- found_knotts_body, agent A0 applies its first clause, reducing the goal to :- peyer_killed_knott. Then, since peyer_killed_knott is abducible, the consistency of peyer_killed_knott has to be checked within the bunch {A0, A1}. In particular, agent A1 tries to prove the failure of the integrity constraint

← peyer_killed_knott, peyer_has_alibi.

³ It is worth noticing, however, that despite this implementation choice, the ALIAS architecture is not strictly bound to the KM proof-procedure: the same high-level features of the system could exploit different abduction algorithms (e.g., [7, 13]).
⁴ In this context, local means performed within the agent that demonstrates the query.
To do this, the failure of peyer_has_alibi must be proved: this derivation fails, since A1's ALP contains the fact peyer_has_alibi. Then, A0 applies its second clause, reducing the goal to :- someone_else_killed_knott; again, the consistency check requires proving the failure of the integrity constraints. In this case, only the constraint in A0 contains the predicate someone_else_killed_knott, which represents a hypothesis against Peyer's guilt. In order to prove its failure, we also abduce the hypothesis not_peyer_killed_knott, so the derivation terminates with success, producing the set of hypotheses

δ = {someone_else_killed_knott, not_peyer_killed_knott}

In the previous example we have considered a pre-defined bunch of two agents for explaining goals with a set of abducible hypotheses which is globally consistent with the knowledge bases of all agents in the bunch. However, in ALIAS more flexible forms of agent composition and coordination are possible, thanks to a Language for AbductIve Logic Agents (LAILA). In the following we describe it briefly, focusing in particular on the LAILA operators used for agent grouping, interaction, and coordination.

2.4 The LAILA Language

The behaviour of ALIAS agents can be expressed by means of the Language for AbductIve Logic Agents, LAILA. This language makes it possible to model agent actions and interactions in a logic programming style. In particular, using LAILA, the programmer can express the features of an abductive multi-agent application in a declarative way. In this paper we focus on agent social behaviour, and on how each agent can request the demonstration of goals from other agents in the system. A detailed description of the language can be found in [3]. With regard to agent coordination, we introduce two composition operators: the collaborative AND operator (&) and the competitive operator (;), which each agent can use to express and coordinate abductive queries to other agents (or sets of agents).
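A possible reading of these two operators, in terms of how hypothesis sets combine, is sketched below. This is our own illustration of the informal description, not LAILA's actual semantics: answers are sets of abduced hypotheses, and integrity constraints are denials encoded as sets of atoms.

```python
def consistent(delta, ics):
    """A hypothesis set satisfies a list of denials (sets of atoms)."""
    return all(not ic <= delta for ic in ics)

def collaborative_and(answers_a, answers_b, ics):
    """(&): pairwise unions of hypothesis sets that stay consistent."""
    return [da | db for da in answers_a for db in answers_b
            if consistent(da | db, ics)]

def competitive(answers_a, answers_b, ics):
    """(;): alternative consistent answers from either agent."""
    return [d for d in answers_a + answers_b if consistent(d, ics)]
```

With the denial {"p", "q"}, collaborative composition of the answers [{"p"}] and [{"q"}] yields no consistent union, while competitive composition keeps both answers as mutually exclusive alternatives.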
The language also provides a communication operator (>), which is used to submit queries to other agents.
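Returning to Example 2, the two-phase scheme of Section 2.3 can be sketched in a self-contained toy encoding. This is an illustration of ours, not the ALIAS implementation: phase one collects each clause body for the goal as a local candidate explanation, phase two checks every candidate against the facts and constraints of all bunch members; for brevity, the recorded default literal not_peyer_killed_knott is omitted from the answer.

```python
# Agent knowledge bases, encoded as clauses (goal -> list of bodies),
# ground facts, and integrity constraints given as denials (sets of atoms).
A0 = {"clauses": {"found_knotts_body": [["peyer_killed_knott"],
                                        ["someone_else_killed_knott"]]},
      "facts": set(),
      "ics": [{"peyer_killed_knott", "someone_else_killed_knott"}]}

A1 = {"clauses": {}, "facts": {"peyer_has_alibi"},
      "ics": [{"peyer_killed_knott", "peyer_has_alibi"}]}

def local_candidates(agent, goal):
    """Phase 1: each clause body for the goal is a candidate delta."""
    return [set(body) for body in agent["clauses"].get(goal, [])]

def bunch_consistent(delta, bunch):
    """Phase 2: no member's denial may have all of its atoms true,
    where an atom is true if it is a fact or an abduced hypothesis."""
    for agent in bunch:
        for ic in agent["ics"]:
            if all(a in delta or a in agent["facts"] for a in ic):
                return False
    return True

bunch = [A0, A1]
answers = [d for d in local_candidates(A0, "found_knotts_body")
           if bunch_consistent(d, bunch)]
# Only {"someone_else_killed_knott"} survives: peyer_killed_knott
# violates A1's alibi constraint.
```

As in the example, the candidate {peyer_killed_knott} is rejected by A1's constraint, so the bunch agrees on the explanation containing someone_else_killed_knott only.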