Workshop on Modelling Policy-making (MPM 2011)

In conjunction with
The 24th International Conference on Legal Knowledge and Information Systems (JURIX 2011)
Wednesday December 14, 2011
University of Vienna
Vienna, Austria
Context:
As the European Union develops, issues of governance, legitimacy, and transparency become more pressing. National governments and the EU Commission realise the need to promote widespread, deliberative democracy across the policy-making cycle, which has several phases: 1) agenda setting, 2) policy analysis, 3) lawmaking, 4) administration and implementation, and 5) monitoring. As governments must become more efficient and effective with the resources available, modern information and communications technology (ICT) is being drawn on to address problems of information processing in these phases. One of the key problems is policy content analysis and modelling, particularly the gap between policy proposals and formulations, which are expressed in quantitative and narrative forms, on the one hand, and, on the other, formal models that can be used to systematically represent and reason with the information those proposals and formulations contain.
Submission Focus:
The workshop invites submissions of original research about the application of ICT to the early phases of the policy cycle, namely those before the legislators fix the legislation: agenda setting, policy analysis, and lawmaking. The research should seek to address the gap noted above. The workshop focuses particularly on using and integrating a range of subcomponents – information extraction, text processing, representation, modelling, simulation, reasoning, and argument – to provide policy making tools to the public and public administrators.
Intended Audience:
Legal professionals, government administrators, political scientists, and computer scientists.
Areas of Interest:

  • information extraction from natural language text
  • policy ontologies
  • formal logical representations of policies
  • transformations from policy language to executable policy rules
  • argumentation about policy proposals
  • web-based tools that support participatory policy-making
  • tools for increasing public understanding of arguments behind policy decisions
  • visualising policies and arguments about policies
  • computational models of policies and arguments about policies
  • integration tools
  • multi-agent policy simulations

Preliminary Workshop Schedule:
09:45-10:00 Workshop opening comments
10:00-11:00 Paper Session 1

  • Using PolicyCommons to support the policy-consultation process: investigating a new workflow and policy-deliberation data model
    Neil Benn and Ann Macintosh
  • A Problem Solving Model for Regulatory Policy Making
    Alexander Boer, Tom Van Engers and Giovanni Sileno

11:00-11:15 Break (coffee, tea, air, etc.)
11:15-12:15 Paper Session 2

  • Linking Semantic Enrichment to Legal Documents
    Akos Szoke, Andras Forhecz, Krisztian Macsar and Gyorgy Strausz
  • Semantic Models and Ontologies in Modelling Policy-making
    Adam Wyner, Katie Atkinson and Trevor Bench-Capon

12:15-13:15 Lunch break
13:15-14:45 Paper Session 3

  • Consistent Conceptual Descriptions to Support Formal Policy Model Development: Metamodel and Approach
    Sabrina Scherer and Maria Wimmer
  • The Policy Modeling Tool of the IMPACT Argumentation Toolbox
    Thomas Gordon
  • Ontologies for Governance, Risk Management and Policy Compliance
    Jorge Gonzalez-Conejero, Albert Merono-Penuela and David Fernandez Gamez

14:45-15:00 Break (coffee, tea, air, etc.)
15:00-16:00 Paper Session 4 and Closing discussion

  • Policy making: How rational is it?
    Tom Van Engers, Ignace Snellen and Wouter Van Haaften
  • Closing discussion

Workshop Registration and Location:
Please see the JURIX 2011 website for all information about registration and location.
Webpage URL:
http://wyner.info/LanguageLogicLawSoftware/?p=1157
Important Dates:

  • Submission: Monday, October 24
  • Review Notification: Monday, November 7
  • Final Version: Thursday, December 1
  • Workshop date: Wednesday, December 14

Author Guidelines:
Submit position papers of 2-5 pages in PDF format, using the IOS Press style files and authors’ guidelines at:
IOS Press Author Instructions
Submit papers to:
MPM 2011 on EasyChair
Publication:
The position papers are available in electronic form only, from the following link:
Proceedings of the Workshop on Modelling Policy-making
A call for extended versions of selected papers will be issued for a special issue of AI and Law on Modelling Policy-making.
Contact Information:
Adam Wyner, adam@wyner.info
Neil Benn, n.j.l.benn@leeds.ac.uk
Program Committee Co-Chairs:
Adam Wyner (University of Liverpool, UK)
Neil Benn (University of Leeds, UK)
Program Committee (Preliminary):
Katie Atkinson
Trevor Bench-Capon
Bruce Edmonds
Tom van Engers
Euripidis Loukis
Tom Gordon
Ann Macintosh
Gunther Schefbeck
Maria Wimmer
Radboud Winkels
By Adam Wyner

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Workshop Applying Human Language Technology to the Law

A workshop at
ICAIL 2011: The Thirteenth International Conference on Artificial Intelligence and Law

Applying Human Language Technology to the Law (AHLTL 2011)

June 10, 2011
University of Pittsburgh School of Law
Overview:
Over the last decade there have been dramatic improvements in the effectiveness and accuracy of Human Language Technology (HLT), accompanied by a significant expansion of the HLT community itself. Over the same period, there have been widespread developments in web-based distribution and processing of legal textual information, e.g. cases, legislation, citizen information sources, etc. More recently, a growing body of research and practice has addressed a range of topics common to both the HLT and Artificial Intelligence and Law communities, including automated legal reasoning and argumentation, semantic information retrieval, cross- and multi-lingual information retrieval, document classification, logical representations of legal language, dialogue systems, legal drafting, legal knowledge discovery and extraction, and linguistically based legal ontologies, among others. Central to these shared topics is the use of HLT techniques and tools for automating knowledge extraction from legal texts and for processing legal language.
The workshop has several objectives. The first is to broaden the research base by introducing HLT researchers to the materials and problems of processing legal language. The second is to introduce AI and Law researchers to up-to-date theories, techniques, and tools from HLT which can be applied to legal language. The third is to deepen the existing research streams. Altogether, the interactions among the researchers are expected to advance research and applications and foster interdisciplinary collaboration within the legal domain.
Context:
Over the last two years, there have been several workshops and tutorials on or relating to processing legal texts and legal language, demonstrating a significant surge of interest. There have been two workshops on Semantic Processing of Legal Texts (SPLeT) held in conjunction with LREC (2008 in Marrakech, Morocco, and 2010 in Malta). At ICAIL 2009, there were two workshops: LOAIT ’09, the 3rd Workshop on Legal Ontologies and Artificial Intelligence Techniques, held jointly with the 2nd Workshop on Semantic Processing of Legal Texts, and NALEA ’09, the Workshop on the Natural Language Engineering of Legal Argumentation: Language, Logic, and Computation. LOAIT ’09 focussed on Legal Knowledge Representation, with particular emphasis on the issue of ontology acquisition from legal texts, while NALEA ’09 tackled issues related to legal argumentation. In 2009, the National Science Foundation sponsored a workshop on Automated Content Analysis and the Law, which drew participants from computational linguistics and political science. Finally, at the Second Workshop on Controlled Natural Language (CNL 2010), there were several presentations related to legal language.
Intended Audience:
The intended audience includes both current members of the AI and Law community who are interested in automated analysis of legal texts and corpora, and HLT researchers for whom analysis of legal texts would provide an opportunity to develop and evaluate HLT techniques. It is anticipated that participants would come from industry (e.g. The MITRE Corporation, Thomson/Reuters, Endeca, Lexis/Nexis, Oracle), the judiciary in the US and Europe, national organisations (e.g. the US National Institute of Standards and Technology, the US National Science Foundation, the European Science Foundation, the UK Office of Public Sector Information), government security agencies, legal professionals, and academic HLT researchers.
Areas of Interest:
The workshop will focus on extraction of information from legal text, representations of legal language (ontologies and semantic translations), and dialogic aspects. While information extraction and retrieval are crucial areas, the workshop emphasises syntactic, semantic, and dialogic aspects of legal information processing.

    Building legal resources: terminologies, ontologies, corpora.
    Ontologies of legal texts, including subareas such as ontology acquisition, ontology customisation, ontology merging, ontology extension, ontology evolution, lexical information, etc.
    Information retrieval and extraction from legal texts.
    Semantic annotation of legal texts.
    Multilingual aspects of legal text semantic processing.
    Legal thesauri mapping.
    Automatic Classification of legal documents.
    Automated parsing and translation of natural language arguments into a logical formalism.
    Linguistically-oriented XML mark up of legal arguments.
    Computational theories of argumentation that are suitable to natural language.
    Controlled language systems for law.
    Name matching and alias detection.
    Dialogue protocols and systems for legal discussion.

Workshop Schedule

      9:00 Opening remarks
      9:15 Jack Conrad (invited speaker). The Role of HLT in High-end Search and the Persistent Need for Advanced HLT Technologies
      10:00 Tommaso Fornaciari and Massimo Poesio. Lexical vs. Surface Features in Deceptive Language Analysis
      10:30 Nuria Casellas, Joan-Josep Vallbé and Thomas Bruce. Legal Thesauri Reuse. An Experiment with the U.S. Code of Federal Regulations
      11:00 Break
      11:15 Meritxell Fernández-Barrera and Pompeu Casanovas. Towards the intelligent processing of non-expert generated content: mapping web 2.0 data with ontologies in the domain of consumer mediation
      11:45 Emile De Maat and Radboud Winkels. Formal Models of Sentences in Dutch Law
      12:15 Guido Boella, Llio Humphreys, Leon Van Der Torre and Piercarlo Rossi. Eunomos, a legal document management system based on legislative XML and ontologies (Position paper)
      12:45 Anna Ronkainen. From Spelling Checkers to Robot Judges? Some Implications of Normativity in Language Technology and AI and Law
      13:15 Lunch

Workshop Location
To be announced.
Author Guidelines:

    The workshop solicits full papers and position papers. Authors are welcome to submit tentative, incremental, and exploratory studies which examine HLT issues distinctive to the law and legal applications. Papers not accepted as full papers may be accepted as short research abstracts. Submissions will be evaluated by the program committee. For information on submission details (length, format, the notion of a position paper, etc.), see the ICAIL 2011 conference information:
    ICAIL CFP
    Papers should be submitted electronically in PDF to the EasyChair site by the deadline (see important dates below):
    AHLTL 2011, an EasyChair site

Publication:

    Selected papers will be invited for revision and submission to a special issue of the AI and Law journal, edited by Adam Wyner and Karl Branting.
    The papers from the workshop are available here.

Webpage:

    Applying Human Language Technology to the Law

Important Dates:

    Paper submission deadline: extended to April 10, 00:00 EST
    Acceptance notification sent: 15 April 2011
    Final version deadline: 23 May 2011
    Workshop date: 10 June 2011

Contact Information:

    Primary contact: Adam Wyner, adam@wyner.info
    Secondary contact: Karl Branting, lbranting@mitre.org

Program Committee Co-Chairs:

    Adam Wyner (University of Liverpool, UK)
    Karl Branting (The MITRE Corporation, USA)

Program Committee:

    Kevin Ashley (University of Pittsburgh, USA)
    Johan Bos (University of Rome, Italy)
    Sherri Condon (The MITRE Corporation, USA)
    Jack Conrad (Thomson Reuters, USA)
    Enrico Francesconi (ITTIG-CNR, Florence, Italy)
    Ben Hachey (Macquarie University, Australia)
    Alessandro Lenci (Università di Pisa, Italy)
    Leonardo Lesmo (Università di Torino, Italy)
    Emile de Maat (University of Amsterdam, Netherlands)
    Thorne McCarty (Rutgers University, USA)
    Marie-Francine Moens (Catholic University of Leuven, Belgium)
    Simonetta Montemagni (ILC-CNR, Italy)
    Raquel Mochales Palau (Catholic University of Leuven, Belgium)
    Craig Pfeifer (The MITRE Corporation, USA)
    Wim Peters (University of Sheffield, United Kingdom)
    Paulo Quaresma (Universidade de Évora, Portugal)
    Mike Rosner (University of Malta, Malta)
    Tony Russell-Rose (Endeca, United Kingdom)
    Erich Schweighofer (Universität Wien, Austria)
    Rolf Schwitter (Macquarie University, Australia)
    Manfred Stede (University of Potsdam, Germany)
    Mihai Surdeanu (Stanford University, USA)
    Daniela Tiscornia (ITTIG-CNR, Italy)
    Radboud Winkels (University of Amsterdam, Netherlands)
    Jonathan Zeleznikow (Victoria University, Australia)

Legal Know-How Workshop Presentations

On December 10, 2010, I gave a presentation at the International Society for Knowledge Organisation’s meeting on Legal Know-How. It was an interesting meeting, where I got the opportunity to present my work to members of the legal profession, hear what law firms are doing about knowledge management, and make some good new contacts.
The slides of all the talks, including mine, are available:
ISKO-UK Legal Know-How meeting
In a couple of weeks, ISKO will also add MP3s of the talks, so one can view the slides while hearing the talks. It is a nice way to do things, as the remarks and narration are almost more crucial than the slides themselves.
By Adam Wyner
Distributed under the Creative Commons
Attribution-Non-Commercial-Share Alike 2.0

Call for Papers: JURIX 2010 Workshop on Modelling Legal Cases and Legal Rules

I am organising a workshop at JURIX 2010
Modelling Legal Cases and Legal Rules
As part of the Jurix 2010 conference in Liverpool, UK, we will hold a Workshop on Modelling Legal Cases and Legal Rules. This workshop is a follow-on from successful workshops at Jurix 2007 and ICAIL 2009.
Legal cases and legal rules in common law contexts have been modelled in a variety of ways over the course of research in AI and Law to support different styles of reasoning for a variety of problem-solving contexts, such as decision-making, information retrieval, teaching, etc. Particular legal topic areas and cases have received wide coverage in the AI and Law literature including wild animals (e.g. Pierson v. Post, Young v. Hitchens, and Keeble v. Hickeringill), intellectual property (e.g. Mason v. Jack Daniel Distillery), and evidence (e.g. the Rijkbloem case). As well, some legal rules have been widely discussed, such as legal argument schemes (e.g. Expert Testimony) or rules of evidence (see Walton 2002). However, other areas have been less well covered. For example, there appears to be less research on modelling legal cases in civil law contexts; investigation of taxonomies and ontologies of legal rules would support abstraction and formalisation (see Sherwin 2009); additional legal rules could be brought under the scope of investigation, such as those bearing on criminal assault or causes of action.
The aim of this workshop is to provide a forum in which researchers can present their research on modelling legal cases and legal rules.
Papers are solicited that model a particular legal case or a small set of legal rules. Authors are free to choose the case or set of legal rules and to analyse them according to their preferred model of representation; any theoretical discussion should be grounded in or exemplified by the case or rules at hand. Papers should make clear the particular distinctive features of their approach and why these features are useful in modelling the chosen case or rules. The workshop is an opportunity for authors to demonstrate the benefits of their approach and for group discussions to identify useful overlapping features as well as aspects to be further explored and developed.
Format of papers and submission guidelines
Full papers should not be more than 10 pages long and should be submitted in PDF format. It is suggested that the conference style files be used for formatting (see the IOS Press site). All papers should provide:

  • A summary of the case or legal rules.
  • An overview of the representation technique, or reference to a full description of it.
  • The representation itself.
  • Discussion of any significant features.

Short position papers are also welcome from those interested in the topic who do not wish to present a fully represented case or an elaborate discussion of a set of legal rules; a short position paper can outline ideas, sketch directions of research, or summarise or reflect on previously published work that has addressed the topic. It should be no more than five pages and give a clear impression of what would be presented.
All submissions should be emailed as a PDF attachment to the workshop organiser, Adam Wyner, at: adam@wyner.info.
Programme Committee (Preliminary)

  • Kevin Ashley, University of Pittsburgh, USA
  • Katie Atkinson, University of Liverpool, UK
  • Floris Bex, University of Dundee, UK
  • Trevor Bench-Capon, University of Liverpool, UK
  • Tom Gordon, Fraunhofer, FOKUS, Germany
  • Robert Richards, Seattle, Washington, USA
  • Giovanni Sartor, European University Institute, Italy
  • Burkhard Schafer, Edinburgh Law School, Scotland
  • Douglas Walton, University of Windsor, Canada

Organisation
Organiser of this workshop is Adam Wyner, University of Liverpool, UK. You can contact the workshop organiser by sending an email to adam@wyner.info
Dates
Paper submission: Friday, November 5, 2010
Accepted Notification: Friday, November 12, 2010
Workshop Registration: Friday, November 19, 2010
December 15th, 2010 Jurix Workshops/Tutorials
December 16th-17th, 2010 Jurix 2010 Main Conference
By Adam Wyner
Distributed under the Creative Commons
Attribution-Non-Commercial-Share Alike 2.0

Meeting with John Sheridan on the Semantic Web and Public Administration

I met today with John Sheridan, Head of e-Services, Office of Public Sector Information, The National Archives, located at the Ministry of Justice, London, UK. Also at the meeting was John’s colleague Clare Allison. John and I had met at the ICAIL conference in Barcelona, where we briefly discussed our interests in applications of Semantic Web technologies to legal informatics in the public sector. Recently, John got back in contact to talk further about how we might develop projects in this area.
Perhaps most striking to me is that John made it clear that the government (at least his sector) is proactive, looking for research and development projects that make government data available and usable in a variety of ways. In addition, he wanted to develop a range of collaborations to better understand the opportunities the Semantic Web may offer.
As part of catching up with what is going on, I took a look around the web for relatively recent documents on related activities.

In our discussion, John gave me an overview of the current state of affairs in public access to legislation, in particular, the legislative markup and API. The markup is intended to support publication, revision, and maintenance of legislation, among other possibilities. We also had some discussion about developing an ontology of government which would be linked to legislation.
Another interesting dimension is that John’s office is one of the few I know of that are actively engaged in developing a knowledge economy, encouraged in part by public administrative requirements and goals. Others in this area are the Dutch and the US (with xml.gov). All very promising, and discussions well worth following up on.
Copyright © 2009 Adam Wyner

Legal Taxonomy

Introduction
In this post, I comment on Sherwin’s recent article Legal Taxonomy in the journal Legal Theory. It is a very lucid, thorough, and well-referenced discussion of the state of the art in taxonomies of legal rules. By considering how legal taxonomies organise legal rules, we better understand current conceptions of legal rules among legal professionals. My take-away message from the article is that the analysis of legal rules could benefit from some of the thinking in Linguistics and Computer Science, particularly in terms of how data is gathered and analysed.
Below, I briefly outline ideas concerning taxonomies and legal rules. Then, I present and comment on the points Sherwin brings to the fore.
Taxonomies
Taxonomy is the practice and science of classifying items in a hierarchical IS-A relationship, where the items can be most anything. The IS-A relationship is also understood in terms of subtypes and supertypes. For example, a car is a subtype of vehicle, and a Toyota is a subtype of car; we can infer that a Toyota is a subtype of vehicle. Each subtype has more specific properties than its supertype. In some taxonomies, one item may be a subtype of several supertypes; for example, a car is both a subtype of vehicle and a subtype of objects made of metal. However, not all vehicles are made of metal, nor are all metal objects vehicles, which indicates that these types are distinct. Taxonomies are more specific than the related notion of ontologies, in which a range of relationships beyond IS-A may hold among the items, such as is owned by, and which generally introduce properties of the elements in a class, e.g. colour, engine type, etc. Classifications in scientific domains such as Biology or Linguistics are intensely debated and revised. We would expect this to be even more true in the legal domain, which rests on intellectual rather than empirical evidence (as in the physical sciences) and where the scientific method is not applied.
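To make the IS-A relation concrete, here is a minimal sketch in Python of the car/vehicle example above, with subtyping computed transitively over the hierarchy; the names and the dictionary representation are mine, for illustration only, and not drawn from any standard ontology tool.

    # A taxonomy as an IS-A graph: each type maps to its set of supertypes.
    # A type may have several supertypes (here, car is both a vehicle and
    # a metal object, even though the two supertypes are distinct).
    SUPERTYPES = {
        "toyota": {"car"},
        "car": {"vehicle", "metal_object"},
        "vehicle": set(),
        "metal_object": set(),
    }

    def is_subtype(sub: str, sup: str) -> bool:
        """Check the transitive IS-A relation: is sub a subtype of sup?"""
        if sub == sup:
            return True
        return any(is_subtype(parent, sup) for parent in SUPERTYPES.get(sub, set()))

    assert is_subtype("toyota", "vehicle")            # inferred transitively
    assert not is_subtype("vehicle", "metal_object")  # the types are distinct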
Legal Rules
First, let us be clear about what a legal rule is, using an example adapted from Professor David E. Sorkin. A legal rule is a rule which determines whether some proposition holds (say, of an individual) contingent on other propositions (the premises). For example, the state of Illinois assault statute specifies: “A person commits an assault when, without lawful authority, he engages in conduct which places another in reasonable apprehension of receiving a battery.” (720 ILCS 5/12-1(a)). We can analyse this into the legal rule:

    A person commits assault if

      1. the person engages in conduct;
      2. the person lacks lawful authority for the conduct;
      3. the conduct places another in apprehension of receiving a battery; and
      4. the other person’s apprehension is reasonable.

Optimally, each of the premises in a rule should be simple and be answerable as true or false. In this example, where all four premises are true, the conclusion, that the person committed assault, is true.
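To make the analysis concrete, here is a minimal sketch in Python of the assault rule above as a conjunction of boolean premises; the field names are my own illustrative choices, and real rule-based systems are, of course, far richer.

    from dataclasses import dataclass

    @dataclass
    class AssaultFacts:
        engages_in_conduct: bool
        lacks_lawful_authority: bool
        apprehension_of_battery: bool
        apprehension_is_reasonable: bool

    def commits_assault(facts: AssaultFacts) -> bool:
        """The conclusion holds only when all four premises are true."""
        return (facts.engages_in_conduct
                and facts.lacks_lawful_authority
                and facts.apprehension_of_battery
                and facts.apprehension_is_reasonable)

    # Three premises true, one false: the assault is not established.
    print(commits_assault(AssaultFacts(True, True, True, False)))  # False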
There are significant issues even with such simple examples since each of the premises of a legal rule may itself be subject to further dispute and consideration; the premises may be subjective (e.g. was the conduct intentional), admit degrees of truth (e.g. degree of emotional harm), or application of the rule may be subject to mitigating or aggravating circumstances. The determination of the final claim follows the resolution of these subsidiary disputes and considerations. In addition, some legal rules need not require all of the premises to be true, but allow a degree of counterbalancing evaluation of the terms.
The Sources of Legal Rules
Sherwin outlines the sources of the rules:

      Posited rules, which are legal rules as explicitly given by a legal authority such as a judge giving a legal decision.
      Attributed rules, which are legal rules that are drawn from a legal decision by a legal researcher rather than by a legal authority in a decision. The rule is implicit in the other aspects of the report of the case.
      Ideal rules, which are rules that are ‘ideal’ relative to some criteria of ideality, say morally or economically superior rules.

Purposes of Classification
In addition, we have the purposes or uses of making a classification of legal rules.

      Facilitating the discussion and use of law.
      Supporting the critical evaluation of law.
      Influencing legal decision-making.

In the first purpose, the rules are sorted into classes, which helps to understand and manage legal information. In Sherwin’s view, this is the most basic, formal, and least ambitious goal, yet it relies on having some taxonomic logic in the first place. In the second purpose, the rules are evaluated to determine whether they serve their intended purpose, as well as to identify gaps or inconsistencies. As Sherwin points out, the criteria of evaluation must then also be determined; however, this relates back to the criteria that guide the taxonomy in the first place, a topic we touch on below. The final purpose is a normative one, where the classification identifies the normal circumstances under which a rule applies, thereby also clarifying those circumstances in which the rule does not apply. Sherwin points out that legal scholars vary in which purpose they find attractive and worth pursuing.
While I can appreciate that some legal scholars might not find the ‘formal’ classification of interest, I view it from a different perspective. First, any claim concerning the normative application of one rule instead of another rests entirely on the intuitive presumption that the rules are clearly different, a distinction that the first level can help to clarify. Similar points can be made for other relationships among rules. Second, focusing on the later stages does not help to say specifically why one rule means what it does and has the consequences it is intended to have; yet surely this is in virtue of the specific ‘content’ of the rule, which again is clarified by a thoroughgoing analysis at the first stage. Third, if there is going to be any progress in applied artificial intelligence and law, it will require the analytical elements defined at the first stage. Fourth, as the study of Linguistics has shown, close scrutiny at the first stage can help to reveal the very issues and problems that are fundamental to all higher stages. Fifth, providing even a small, clear sample of legal arguments analysed along the lines of the first stage would give the community of legal scholars a common ‘pool’ of legal arguments to fruitfully consider at the later stages; along these lines, it is notable how few concrete, detailed examples Sherwin’s paper discusses. Not surprisingly, some of the issues Sherwin raises about the purposes of different ‘levels’ of analysis also appear in the linguistic literature. In my view, though the first stage may not be interesting to most legal professionals, there are very good reasons why it should be.
Criteria of Taxonomy
Several different criteria which guide the taxonomy of legal rules are discussed.

      Intuitive similarity: whether researchers claim that two rules are subtypes of one another.
      Evolutionary history: the legal rule is traced in the history of the law.
      Formal classification: the logical relations among categories of the law.
      Function based: a function from the problem to a set of solutions.
      Reason based: the higher-level reasons that explain or justify a rule.

Sherwin criticises judgements based on intuitive similarity on the grounds that the taxonomers may be relying on false generalisations rather than their own intuitions, and that intuition can be arbitrary and without reason. This is also the sort of criticism levelled at a large segment of linguistic research, and it has been shown to be misleading. Of course, one must watch for false classifications and try to provide a justification for classifying one element in one class and not another. One way to do this, as in psycholinguistics, is to provide tests run over subjects. Another way is to refine the sorts of observations that lead to classifications. In general, all that we currently know about language, from dictionaries, to grammars, to inference rules, is based on linguistic intuitions. Some, such as the rules of propositional logic, have become so fixed that they now seem to exist independent of any linguistic basis.
The issue here is somewhat related to classification by formal logical relations. It is unclear what Sherwin thinks logical relations are and how they are applied. What we do have more clarity on are some of the criteria for such a formal taxonomy: accounting for all legal materials, a strict hierarchy, consistent interpretation of classes, and no overlap of categories. This is but one way to consider a formal hierarchy; indeed, there is a separate and very interesting question about which formal model of classification best suits a legal taxonomy. Yet this issue is not explored in the article.
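As an aside, two of these formal criteria are easy enough to state computationally. The following is a minimal sketch, assuming a taxonomy represented as child-to-parent links and category assignments as item-to-categories sets; this representation and the example names are mine, for illustration only.

    # Strict hierarchy: the dict gives each class at most one parent by
    # construction, so it remains to check that parent links never cycle.
    def is_strict_hierarchy(parent_of: dict) -> bool:
        for node in parent_of:
            seen, current = set(), node
            while current in parent_of:
                if current in seen:   # a cycle: not a hierarchy
                    return False
                seen.add(current)
                current = parent_of[current]
        return True

    # No overlap of categories: no item is assigned to two classes.
    def categories_overlap(assignments: dict) -> bool:
        return any(len(cats) > 1 for cats in assignments.values())

    parent_of = {"assault": "offence_against_person",
                 "offence_against_person": "offence"}
    assert is_strict_hierarchy(parent_of)
    assert not categories_overlap({"case_1": {"assault"}, "case_2": {"battery"}})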
The function-based approach seems to involve meta-categories. For example, the rule above can be seen as a function from circumstances to a classification of a person as having committed an assault. However, this is not what appears to be intended in Sherwin’s discussion. Rather, there are meta-functional categories depending on higher-level problems and solutions. The examples given are Law as a Grievance-Remedial Instrument and Law as an Administrative-Regulatory Instrument. For me, this is not quite as clear as Sherwin makes it appear.
The reason-based approach organises rules according to an even higher level of the rule — the justification or explanation of the rule. Some of the examples are that a wrongful harm imposes an obligation of redress, that deterring breaches of promise facilitates exchange, or that a rule promotes public safety. In my view, these are what people in AI and Law (e.g. Professor Bench-Capon) would call values which are promoted by the legal rule. Sherwin discusses several different ways that reason-based classification is done: intended, attributed, and ideal rationales. In my view, the claimed differences are not all that clear or crucial to the classification. In some cases, the rationale of a legal rule is given by the adjudicator. However, where this is not so, the rationale is implicit and must be interpreted, which is to give the intended rationale. In other cases, legal researchers examine a body of law and provide rationales, which gives the attributed rationale. In this sense, the intended and attributed rationales are related (both are interpreted), but they are achieved by different methods (study of one case versus study of a body of cases and considerations about the overall purpose of the law). Finally, there are ideal rationales, which set out broad, ideal goals of the legal rule, goals which may or may not be ‘ideally’ achievable. Here the difference between intended/attributed and ideal rationales is whether the rationale is analysed out of cases (bottom-up) or provided legislatively (top-down). In the end, the result is similar — legal rules are classified with respect to some rationale. The general problem with any such rationale is just how it is systematically given and itself justified, so as to be consistent and not to yield conflicting interpretations of the same legal rule. Finally, Sherwin seems to think that there is some intrinsic conflict or tension between formal classification and reason-based classification. I don’t agree. Rather, the difference lies in the properties and methods employed to make the classification, which are not inherently in conflict. Likely, a mixed approach will yield the most insights.
Copyright © 2009 Adam Wyner

Susskind's "The End of Lawyers" is Part of the Story

Introduction
In this post, I briefly outline Richard Susskind’s background and elements from The End of Lawyers, and then turn to consider issues that Susskind is aware of but does not discuss in depth. These are issues which I believe are fundamental to how technology will impact legal practice, such as the semantic web, textual information extraction, ontologies, and open-source databases of legal documents.
Background
Susskind specialises in how information and communication technology (ICT) is used by lawyers and public administrators. His website is:
www.susskind.com
Besides the important and general interest of his line of work, its prominence in the community of practicing legal professionals gives us a good indication of the sorts of technologies that community is and is not aware of.
Richard Susskind has been writing about ICT since the publication of his PhD thesis Expert Systems in Law (1987, Oxford University Press). He is among the early researchers in Artificial Intelligence and Law. His subsequent books — The Future of Law and Transforming the Law — developed themes about the relation of ICT to the legal profession, focusing on the ways ICT would change the practice of law and the interactions among lawyers, government administrators, and the public. In addition to the books, Susskind consults widely, is an editor of the International Journal of Law and Information Technology, and is a law columnist for The Times. He is uniquely informed about the technologies that are available and how the legal community regards and uses them. This makes it all the more interesting to draw attention to what he does not discuss in depth.
His recent book The End of Lawyers has garnered a very significant amount of attention, and online excerpts along with comments can be found at:
The End of Lawyers
Legal Technology Tools
In this book, he develops and elaborates his main themes. He points out a range of technologies, briefly outlined below, which will contribute to changing the legal profession. As there is substantial information about his proposals online already, I will not repeat them in depth here, except to say that by and large I agree with many of the overt points he makes about the applicability of technology to the legal profession, as well as with his account of why the legal profession has been, and remains, slow to take up ICT solutions.
Among the key technologies Susskind outlines, we find:

      Automated document assembly — structuring blocks of legal documents.
      Connectivity — email, fax, cell phones, Facebook, Twitter, blogs.
      Electronic legal marketplace — legal services advertised, rated, and traded.
      E-learning — lawyers and members of the public having the opportunity to learn about the law online.
      Online legal guidance — rather than face-to-face with individual lawyers, a chance to read, learn about the law, have questions addressed at different levels of formality.
      Legal open-sourcing — user generated content, free and unrestricted legal information (e.g. BAILII), legal wikis.
      Closed legal communities — collectives of lawyers, justices, or government officials exchange information.
      Workflow and project management — using software and services to monitor and support the work of legal professionals. This includes case-management and electronic filing.
      Embedded legal knowledge — legal information and knowledge are made readily transparent in daily interactions or serve to prevent non-compliance.
      E-disclosure — finding and processing documents and information relevant to the disclosure phase of a case.
      Online dispute resolution — systems to mediate and support the resolution of disputes.
      Courtroom annotation — transcribing and noting courtroom proceedings manually and automatically.
      Improving access to law — giving citizens more information and advice.

Engineering and Managing Legal Knowledge
In the course of the book, he says that the engineering and management of legal knowledge is central to these technologies, where:

      Legal knowledge management (p. 155) — the systematic organization, standardization, preservation, and exploitation of the collective knowledge of a firm. It is intended to maximize the firm’s return on the combined experience of its lawyers over time.
      Legal knowledge engineer (p. 272) — someone who carries out basic analysis, decomposition, standardization, and representation of legal knowledge in computer systems.

However, little is said about how the engineering and management is to be done other than that some of the technologies outlined above contribute to them.
What is said is largely by way of brief references to, or outlines of, additional issues such as the semantic web (p. 68), wikis (but not semantic wikis), online dispute resolution (but little on current developments), and open-source legal information (e.g. BAILII, but not WorldLII).
More to the point, there is no discussion of research on key technologies such as:

      Legal ontologies, by which legal knowledge is formalised, acquired, processed, and managed.
      XML, which underlies the semantic web.
      Web-based inference systems.
      Textual information extraction, which is essential for making use of open-source legal information.
      Rule-based systems, such as those provided by Oracle (previously known as Softlaw, RuleBurst, and Haley), which are prominently used by UK tax authorities.
      E-government services which go beyond providing information and submitting forms to allow some interaction, such as Parmenides and DEMO-net.

These are all topics of central relevance to our blog and to the AI and Law community, which organises itself around the International Conference on AI and Law (ICAIL) and Jurix.
We agree by and large with Susskind. However, there is much more which would be highly relevant and valuable to draw to the attention of the legal community. Moreover, it would be very valuable to the AI and Law community were his prominent and respected voice in legal and governmental circles to be heard advocating further for research in AI and Law.
Copyright © 2009 Adam Wyner

Legal Track of the Text Retrieval Conference Series

The Text Retrieval Conference (TREC) is an annual workshop on text retrieval from large text collections. It is sponsored by the National Institute of Standards and Technology, an agency of the US Commerce Department, and has run since 1992. In 2006, a legal track was added to the conference, and there have been annual legal tracks in each of the three years since.
The stated goal of the legal track is to develop text search technology that meets the needs of lawyers engaged in effective discovery in digital document collections. Papers from the track are published as part of the TREC proceedings.
In the legal track, researchers are set a variety of tasks and topics among which they can choose to apply their search techniques. Let’s consider one of them, the Interactive Task, which was proposed in 2008 and continued in 2009, and for which we have the task guidelines and topics from 2008.
For 2008, the task was to search for documents related to a topic, in this case a single 16-page class-action complaint alleging that a tobacco company committed fraud, within a population of nearly 7 million documents from the Legacy Tobacco Document Library of legal case documents involving US tobacco companies. The documents span a wide range of genres. The task is meant to realistically model the way lawyers develop and refine their searches in the course of the discovery phase of litigation; that is, participants must retrieve a set of documents ‘relevant’ to what ought to be discovered concerning the topic. In the discovery phase, the parties to the suit request material (documents and evidence) concerning the case; e-discovery is the discovery phase involving electronic documents. The task is intended to be more ‘realistic’ in that it allows participants to engage an expert so as to better define the set of documents that are relevant to the topic. Here ‘relevant’ means that the participants recover the same set of documents (from those available) that a lead litigating attorney would select; hence the interaction with an expert who helps define relevance. The success of participants’ searches is measured in terms of recall, precision, and a ‘summary measure of effectiveness’.
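For readers unfamiliar with the measures, here is a minimal sketch of precision and recall over sets of retrieved and relevant documents, with F1 shown as one common summary measure; the track's actual 'summary measure of effectiveness' may be defined differently.

    def precision_recall_f1(retrieved: set, relevant: set):
        """Precision: fraction of retrieved documents that are relevant.
        Recall: fraction of relevant documents that were retrieved.
        F1: harmonic mean of the two."""
        true_positives = len(retrieved & relevant)
        precision = true_positives / len(retrieved) if retrieved else 0.0
        recall = true_positives / len(relevant) if relevant else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1

    # Example: 4 documents retrieved, 3 of the 6 relevant ones among them.
    retrieved = {"d1", "d2", "d3", "d4"}
    relevant = {"d2", "d3", "d4", "d5", "d6", "d7"}
    print(precision_recall_f1(retrieved, relevant))  # (0.75, 0.5, 0.6)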
Discovery is a key phase of litigation, concerning the identification of information that is important to the litigators in arguing the case. However, we may consider whether it is central to legal argument itself; the material discovered is used in arguing the case as evidence for one claim or another, but it is unclear how distinct this is from any sort of argument in which evidence is crucial. For example, in a scientific context, one might argue that a certain protein functions in a certain way to impede cancer growth, and then search the document space for supporting evidence. In other words, there are questions concerning how the task in the Legal Track bears on specifically legal reasoning, such as case-based reasoning, factor analysis, precedent, and grounding decisions in the law. This would be a rather different and very worthwhile task for the TREC Legal Track.
The TREC Legal Track is closely related to the DESI workshops on e-discovery/e-disclosure, which are organised by many of the same people.
Copyright © 2009 Adam Wyner