
Justice A. Intelligence
Nikhil Madan
Pulkit Sinha
Rahul Srinivasan
Rushil Goel
Is Judgment Day near?
Courtesy Google
Misconceptions about Law
Law is nothing more than Logic.
Laws are just a set of facts with inference
rules that can be used to make judgments
Extracting meaning from legal texts is
just a matter of getting through the long,
dense passages
Challenges in Legal
Interpretation
Some terms may have ambiguous meanings – many
cases arise mainly on their difference in interpretation
Most legal terms are “open-textured”: they have
indeterminate meanings and do not specify necessary and
sufficient conditions for inclusion in a class
Hart’s famous example “Vehicles are not permitted in
this park”
Are baby carriages allowed?
Are fire engines prohibited in case of a fire?
Logical deduction doesn’t
suffice
Legal rules are more heuristic in nature
No ironclad inference laws – rules often have
exceptions
In most former British colonies (including India and
USA), common law is based on stare decisis – doctrine
of precedent
The judgments of similar past cases guide present
rulings by analogy
Adversarial and Fluid nature
of Law
Disputes are resolved through argumentation –
different interpretations of facts and rules, their
relevance and consequences can lead to opposite
conclusions
Depending on past cases and changing societal values,
legal concepts and rules evolve
Law is inherently non-monotonic: past
results may be limited or overturned
E.g., new DNA evidence might acquit the accused
So, how can we use AI in law?
We look at some areas of research
Logical Formalism of Legal Reasoning
Burden of Proof in Legal
Argumentation
Applications of AI in Criminal
Investigations
Influence of AI on law
Uncertainty in Legal Reasoning
Infer new conclusions in the absence of
evidence against them
Inherently non-monotonic
As an example
The law states that a thief should be
punished
What if the thief is mentally ill?
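The non-monotonic pattern above can be sketched in a few lines; the exception set here is a hypothetical illustration, not taken from any of the cited papers:

```python
# Minimal sketch: a default rule with exceptions, showing non-monotonicity --
# adding a new fact retracts a previously drawn conclusion.

def punishable(facts):
    """Default: a thief is punishable, unless a recognized exception applies."""
    if "thief" not in facts:
        return False
    exceptions = {"mentally ill"}  # hypothetical exception list
    return not (facts & exceptions)

print(punishable({"thief"}))                  # True: default conclusion
print(punishable({"thief", "mentally ill"}))  # False: conclusion retracted
```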
Legal norms
Represented as a legal knowledge base.
Set of strict conditionals in classical
logic
Set of defeasible conditionals
Strict conditionals vs.
Defeasible Conditionals
Strict conditionals - Always hold
Defeasible Conditionals - Prima-facie
legal norms
Transformation
Definition: A transformation is performed iff
α is brought forward as a reason for β, and α
does not deductively entail β.
A defeasible jump to a legal consequence
in the absence of reasons against it
A transformation is denoted by α ⇒ β
Knowledge Base
Represented as (D,L)
L : set of strict legal norms of the form α → β
D : set of defeasible legal norms of the
form α ⇒ β
Example of a Knowledge Base
Rules represent the following situations: a statement (st) is normally
not written (¬w), and is normally not punishable (¬p), a defamation
(d) is normally a statement and is normally punishable (p), a libel (l) is
normally written (w) and is definitely a defamation, a slander (sl) is
definitely a defamation.
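The (D, L) pair for this example can be written down directly; the predicate abbreviations (st, w, p, d, l, sl) follow the slide, but the tuple encoding itself is only an illustrative sketch:

```python
# Sketch of the example knowledge base (D, L) as (antecedent, consequent)
# pairs; "not x" marks a negated consequent.

# L: strict conditionals ("definitely") -- always hold
L = [
    ("l", "d"),    # a libel is definitely a defamation
    ("sl", "d"),   # a slander is definitely a defamation
]

# D: defeasible conditionals ("normally") -- prima facie legal norms
D = [
    ("st", "not w"),  # a statement is normally not written
    ("st", "not p"),  # a statement is normally not punishable
    ("d", "st"),      # a defamation is normally a statement
    ("d", "p"),       # a defamation is normally punishable
    ("l", "w"),       # a libel is normally written
]
```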
Which interpretation do you
pick?
Given a set of defeasible legal norms, we
need to establish preference relations
between the interpretations that a (legal)
agent foresees as possible
We establish this ordering by partitioning
the knowledge base and evaluating each
interpretation with respect to the
partition
Tolerance
A conditional α ⇒ β is tolerated by a
conditional set D′ iff some interpretation
satisfies α ∧ β together with the material
counterpart αᵢ → βᵢ of every conditional in D′
Partitioning the knowledge
base
The partition (D₁, D₂, ..., Dₙ) of D is said to
be n-consistent if it has the following property: every
defeasible legal norm belonging to Dᵢ is tolerated by
Dᵢ ∪ Dᵢ₊₁ ∪ ... ∪ Dₙ ∪ L,
where n is the number of the subsets in the partition.
No partition of L is done, as a strict legal norm cannot
be overruled; only defeasible norms can be.
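A brute-force sketch of tolerance checking and the n-consistent partition, under the propositional encoding assumed here; it re-encodes the defamation example and enumerates all truth assignments, which is only feasible for tiny knowledge bases:

```python
from itertools import product

# atoms of the defamation example
ATOMS = ["st", "w", "p", "d", "l", "sl"]

def holds(lit, world):
    """lit is an atom like 'w' or a negated atom like 'not w'."""
    neg = lit.startswith("not ")
    atom = lit[4:] if neg else lit
    return world[atom] != neg

def satisfies_material(rule, world):
    antecedent, consequent = rule
    return (not holds(antecedent, world)) or holds(consequent, world)

def tolerated(rule, rules):
    """(a => b) is tolerated by `rules` iff some world makes a and b true
    while satisfying the material counterpart of every rule in `rules`."""
    a, b = rule
    for vals in product([False, True], repeat=len(ATOMS)):
        world = dict(zip(ATOMS, vals))
        if holds(a, world) and holds(b, world) and \
           all(satisfies_material(r, world) for r in rules):
            return True
    return False

def z_partition(D, L):
    """Greedy n-consistent partition (D_1, ..., D_n): repeatedly peel off
    the norms tolerated by the remaining defeasible norms together with L."""
    remaining, parts = list(D), []
    while remaining:
        layer = [r for r in remaining if tolerated(r, remaining + L)]
        if not layer:
            raise ValueError("knowledge base is not n-consistent")
        parts.append(layer)
        remaining = [r for r in remaining if r not in layer]
    return parts

# the defamation example: strict (L) and defeasible (D) norms
L = [("l", "d"), ("sl", "d")]
D = [("st", "not w"), ("st", "not p"), ("d", "st"), ("d", "p"), ("l", "w")]
```

On this example the two statement defaults form the first, most general subset, and the defamation and libel norms form the second, more specific one.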
Strength of Legal Norm
The n-consistent partition induces a natural ordering on
the defeasible norms: norms in later subsets are more
specific, and hence stronger
Picking interpretations
Each interpretation is ordered according to the ranking
of the defeasible conditional sets it falsifies
Lexicographic entailment prefers an interpretation that
falsifies a “lighter” set of defeasible rules over one
that falsifies a more “serious” set.
Criteria to compare the
interpretations
the specificity of the defeasible sentences falsified by
each interpretation
the size of the set falsified by each interpretation
When these two criteria conflict, a rational agent must
prefer one criterion over the other.
Based on these two rules, an ordering is induced on the set
of possible interpretations.
Argumentation in Law
An adequate theory of legal reasoning must provide a
sound basis
Argumentation serves as a framework for a practical
definition of proof and proof procedure
Burden of proof introduces a mechanism for
determining the outcome of an argument
What is Argumentation?
An argument comprises data supporting or refuting a
claim.
Warrant - the connection between data and claim
Every claim has a qualification: valid (!), strong (!-),
credible (+), weak (-), and unknown (?)
Any claim is subject to rebuttal arguments
All claims, including input data, must be supported
Warrants…
The link between the antecedent and consequent of a
warrant can be either explanatory or correlational
Warrants also differ in the strength with which the
consequent can be drawn from the antecedent:
sufficient, default, or evidential
Search Algorithm
Side-1 in support of a claim and Side-2 in support of its
negation.
Side-1 attempts support for the input claim.
Given a claim, search for support proceeds from the input
claim toward input data
The process has been completed when all claims are
supported by propositions in the input
If no initial support can be found, the argument ends with a
loss for Side-1.
Search Algorithm (Contd.)
Control passes to Side-2, which tries to refute the
argument for claims established by Side-1.
Two types of refutation actions –
rebutting and undercutting
Defeating Arguments
Heuristics
valid reasoning steps are preferred over plausible steps
moves that are defeating are preferred over moves that only
make a claim controversial
moves that attack a supporting argument closer to the overall
claim are preferred
undercutting moves are preferred over rebutting moves.
Warrants are also ordered according to the following
criterion
stronger warrant types are preferred
warrants for which the antecedent currently has no known
contradictory support are preferred.
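These preferences can be bundled into a single composite sort key; the field names and the dictionary encoding of a "move" are assumptions for illustration, not the paper's data structures:

```python
# Sketch: order candidate argument moves by the search heuristics above.
# Smaller key tuples sort first, i.e. are tried first.

WARRANT_STRENGTH = {"sufficient": 0, "default": 1, "evidential": 2}

def move_key(move):
    return (
        0 if move["step"] == "valid" else 1,       # valid over plausible steps
        0 if move["effect"] == "defeat" else 1,    # defeating over controversial
        move["distance_to_claim"],                 # closer to the overall claim
        0 if move["kind"] == "undercut" else 1,    # undercutting over rebutting
        WARRANT_STRENGTH[move["warrant"]],         # stronger warrant types first
        0 if not move["antecedent_contradicted"] else 1,  # uncontested antecedent
    )

moves = [
    {"step": "plausible", "effect": "defeat", "distance_to_claim": 1,
     "kind": "rebut", "warrant": "default", "antecedent_contradicted": False},
    {"step": "valid", "effect": "defeat", "distance_to_claim": 2,
     "kind": "undercut", "warrant": "sufficient",
     "antecedent_contradicted": False},
]
best = sorted(moves, key=move_key)[0]  # the valid, sufficient undercut wins
```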
Burden of Proof
which side of the argument bears the burden
what level of support is required of that side
Defendable argument - one that cannot be defeated
with the given warrants and input data
Standards of Proof
Scintilla of evidence - at least one weak, defendable
argument
Preponderance of the evidence - at least one weak,
defendable argument that outweighs the other side’s arguments
Dialectical validity - at least one credible, defendable
argument that defeats all of the other side’s arguments
Beyond a reasonable doubt - at least one strong, defendable
argument that defeats all of the other side’s arguments
Beyond a doubt - at least one valid, defendable argument
that defeats all of the other side’s arguments
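A sketch of the five standards as a lookup table; the encoding (a minimum qualification for one's own best defendable argument plus a condition on the opponent's arguments) is an assumed simplification:

```python
# Sketch: standards of proof as (minimum own qualification, opponent condition).
# Opponent condition: False = none, "outweigh" = must outweigh,
# True = must defeat all of the other side's arguments.

STANDARDS = {
    "scintilla":         ("weak",     False),
    "preponderance":     ("weak",     "outweigh"),
    "dialectical":       ("credible", True),
    "beyond_reasonable": ("strong",   True),
    "beyond_doubt":      ("valid",    True),
}

RANK = {"weak": 0, "credible": 1, "strong": 2, "valid": 3}

def meets_standard(standard, own_best, defeats_all, outweighs):
    """own_best: qualification of the side's best defendable argument, or None."""
    need, opponent = STANDARDS[standard]
    if own_best is None or RANK[own_best] < RANK[need]:
        return False
    if opponent == "outweigh":
        return outweighs
    return defeats_all if opponent else True

print(meets_standard("scintilla", "weak", False, False))           # True
print(meets_standard("beyond_reasonable", "credible", True, True)) # False
```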
Burden of Proof and
Argumentation
Burden of proof plays several roles in the process of
argumentation:
as basis for deciding relevance of particular argument
moves
as basis for deciding sufficiency of a side’s move
as a basis for declaring an argument over
as a basis for determining the outcome
An Example
w1 (loose bricks) --> (maintenance deficiency)
w2 (maintenance deficiency) --> (landlord responsible)
w3 (landlord responsible) --> (not (tenant responsible))
w4 ((loose bricks)(near road)) --> (danger)
w5 (danger) --> (tenant responsible)
w6 ((loose bricks)(near road)(seldom used)) --> (not (danger))
dl (loose bricks)
d2 (near road)
d3 (seldom used)
(claim (landlord responsible))
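Ignoring the defeating warrants w3 and w6 (plain forward chaining has no notion of defeat), a naive closure over the remaining warrants derives both sides' conclusions, which is exactly why defeat and burden of proof are needed to decide the case:

```python
# Sketch: the positive warrants as forward-chaining rules
# (antecedent set, consequent); w3 and w6 are omitted because
# naive closure cannot handle negated conclusions.

WARRANTS = [
    ({"loose bricks"}, "maintenance deficiency"),          # w1
    ({"maintenance deficiency"}, "landlord responsible"),  # w2
    ({"loose bricks", "near road"}, "danger"),             # w4
    ({"danger"}, "tenant responsible"),                    # w5
]

def closure(facts, warrants):
    """Naive forward chaining to a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in warrants:
            if antecedent <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

derived = closure({"loose bricks", "near road", "seldom used"}, WARRANTS)
# both "landlord responsible" and "tenant responsible" appear in the closure
```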
Application to Common Law
Burden of proof – a useful aspect of a computational
model of argumentation
Precedent :
Prior cases as antecedents, with conclusions
representing case outcomes
Application of AI in Criminal
Investigation
We have looked at applications of AI for understanding
legal reasoning and argumentation
But before the cases reach the court, prosecutors need a
complete investigation to build a tight case
Two crucial tasks in crime investigation –
Evidence collection
Hypothesis formulation
AI helps human investigators consider alternative
hypotheses simultaneously
The approach
Different crime scenarios like homicide vs. suicide can
produce similar evidence
Given the evidence collected, and a form of abductive
reasoning, a set of possible scenarios which may
produce the evidence are constructed
Scenarios are made up of fragments – this increases
robustness to handle unforeseen cases
The system suggests new evidence that investigators
may want to search for
The Framework
From [6]
The Problem
Crime scenarios describe real world states and events
Evidence, from forensic tests etc. is available
Hypotheses are properties of the crime like possible
weapons, perpetrators etc.
We wish to find hypotheses that follow from the
scenarios that support all the available evidence
An overview of the algorithm
Given a set of reusable scenario components and evidence,
the scenario instantiator constructs possible scenarios
These are fed into the ATMS to infer circumstances under
which a certain crime scenario is possible
The ATMS maintains information and assumptions as
nodes and inferences as relations between nodes
The results are analyzed by the query handler for answering:
Which hypotheses are supported by evidence?
What additional evidence can strengthen a hypothesis?
The inference mechanism
Initialize the ATMS with the evidence that is available
All possible sets of events and states that can produce
the given evidence are reconstructed by considering the
possible applicable scenario fragments
All evidence and hypotheses that follow from the
results of the last phase are generated by considering
scenario fragments
The possible inconsistencies that may arise in the
above phases are reported
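A toy sketch of this pipeline; the scenario fragments and their predicted-evidence sets below are illustrative stand-ins, not the actual model of [6]:

```python
# Toy sketch: each scenario is (name, assumed events/states,
# evidence it would produce, hypotheses it implies).

SCENARIOS = [
    ("homicide", {"shaking"}, {"subdural hemorrhage"}, {"homicide"}),
    ("accident", {"fall", "reduced collagen synthesis"},
     {"subdural hemorrhage", "reduced collagen synthesis"}, {"accident"}),
]

def consistent_scenarios(evidence, scenarios):
    """Keep scenarios that can account for every observed piece of evidence."""
    return [s for s in scenarios if evidence <= s[2]]

def supported_hypotheses(evidence, scenarios):
    hyps = set()
    for _, _, _, implied in consistent_scenarios(evidence, scenarios):
        hyps |= implied
    return hyps

def suggested_evidence(evidence, scenarios):
    """Evidence predicted by some consistent scenario but not yet observed --
    candidates for investigators to search for next."""
    extra = set()
    for _, _, predicted, _ in consistent_scenarios(evidence, scenarios):
        extra |= predicted - evidence
    return extra

obs = {"subdural hemorrhage"}
# both hypotheses remain consistent with the evidence; finding reduced
# collagen synthesis would point toward the accident scenario
```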
An Example
From [6]
An Example
The case considered herein involves homicidal or
accident death of babies due to a subdural
hemorrhage. A subdural hemorrhage is a leakage of
blood from vessels on the underside of the dura, one
of the membranes covering the brain. It is a common
cause of death of abused babies (the so-called shaken
baby syndrome), but the injury may also be due to a
number of non-homicidal causes, such as
complications at birth, early childhood illnesses and
certain medical procedures.
Decision Making
The supported hypotheses are those implied by
environments which support the given evidence.
E.g., in above cases, both accident and homicide are
hypotheses consistent with the evidence
If we can find circumstances under which both the
evidence and a hypothesis hold, that hypothesis can
account for the evidence
In the above case, reduced collagen synthesis in the
medical report may be found if an accident had
occurred
Summary
If a sound and complete set of crime scenario segments
can be developed, the above procedure helps
investigators by looking at multiple scenarios
simultaneously
Further improvements may involve mechanisms for
weighing different possible scenarios to help focus the
investigators’ efforts
Do we need laws for Robots?
We’ve seen how AI is used to automate and assist the
legal process
Robots will eventually become autonomous beings,
capable of performing tasks in unstructured
environments.
We need laws to regulate the behavior of robots, which
gives rise to a number of legal issues.
Safety Intelligence
Need to limit robot ‘self-control’
A system of regulations restricting artificial intelligence
Isaac Asimov’s Three Laws of Robotics:
A robot may not injure a human being, or, through inaction,
allow a human being to come to harm
A robot must obey orders given it by human beings, except
where such orders would conflict with the First Law
A robot must protect its own existence as long as such
protection does not conflict with the First or Second Law
Believed to be a good foundation for constructing actual
Laws
Safety Intelligence - Challenges
‘Open Texture’ laws
Legal dilemma about status of robots – will they exist
as properties of human beings or as independent
beings.
Exceptions and contradictions in Law
E.g. Law enforcement, surgical operation
Making robots with sufficient intelligence to obey the
law.
Limitations of current
Approaches
Problems with Case-based reasoning
Assumes certain characteristics about legal
reasoning
Difficult to ascertain the true nature of
reasoning employed by lawyers and judges.
Often the principle behind a particular
judgment matters more in determining
its relevance to a particular case than the
exact details of the case
An Example
An airline dismisses a co-pilot for refusing to fly a plane on
the ground that it is unsafe to fly
This may be similar to cases of discharge of employees for
refusing to commit perjury, in that in both cases the
employer’s actions threaten third parties
There may be different principles operating in similar cases
e.g. pro-employee vs. pro-employer judgments above
The choice of which precedent to follow cannot be
determined based only on the closeness of factors, but
rather by the goodness of principles involved
Limitations of current
Approaches
Data collection and semantic interpretation is
often a stumbling block for the implementation of
AI systems
Ensuring the knowledge base fed to the AI system
is sound and complete is the user’s responsibility
E.g. in the criminal scenario system discussed
before, all possible causes of a hemorrhage
need to be fed into the system. This requires
extensive knowledge of the particular field
on the user’s part
Conclusion
There has been significant progress in applying AI to
the legal domain
Some problems still need to be tackled for more
widespread use of these ideas
Hopefully, future work will lead to a better
understanding of the legal process and greater synergy
between AI and law
Maybe, we’ll even have Robot Judges!
Thank You!
Bibliography
1. Cass R. Sunstein, Of Artificial Intelligence and Legal Reasoning (Chicago Public Law and Legal Theory Working Paper No. 18)
2. Edwina L. Rissland, Artificial Intelligence and Legal Reasoning (AI Magazine, Volume 9, Number 3, 1988)
3. Edwina L. Rissland and Kevin D. Ashley, A note on dimensions and factors (Artificial Intelligence and Law 10: 65-77, 2002)
4. Arthur M. Farley and Kathleen Freeman, Burden of Proof in Legal Argumentation (ACM, 1995)
5. Yueh-Hsuan Weng, Chien-Hsun Chen and Chuen-Tsai Sun, The Legal Crisis of Next Generation Robots: On Safety Intelligence (ICAIL ’07)
6. Jeroen Keppens and John Zeleznikow, A Model Based Reasoning Approach for Generating Plausible Crime Scenarios from Evidence (ICAIL ’03)
7. Henry Prakken, Chris Reed and Douglas Walton, Argumentation Schemes and Generalisations in Reasoning about Evidence (ICAIL ’03)
8. Katie Atkinson and Trevor Bench-Capon, Argumentation and Standards of Proof (ICAIL ’07)
9. Katie Greenwood, Trevor Bench-Capon and Peter McBurney, Towards a Computational Account of Persuasion in Law
10. Edwina L. Rissland and Kevin D. Ashley, A Case-Based System for Trade Secrets Law (ACM)
11. Floris Bex, Henry Prakken, Bart Verheij and Gerard Vreeswijk, Sense-making software for crime investigation: how to combine stories and arguments? (April 2007)
12. Samuel Meira Brasil, Jr. and Berlihes Borges Garcia, Modelling Legal Reasoning in a Mathematical Environment through Model Theoretic Semantics (ICAIL ’03)