Rinke Hoekstra and I have a paper which will appear in the Knowledge Engineering Review.
A Legal Case OWL Ontology with an Instantiation of Popov v. Hayashi
Adam Wyner and Rinke Hoekstra
To appear in Knowledge Engineering Review
Abstract
The paper provides an OWL ontology for legal cases with an instantiation of the legal case Popov v. Hayashi. The ontology makes explicit the conceptual knowledge of the legal case domain, supports reasoning about the domain, and can be used to annotate the text of cases, which in turn can be used to populate the ontology. A populated ontology is a case base which can be used for information retrieval, information extraction, and case based reasoning. The ontology contains not only elements for indexing the case (e.g. the parties, jurisdiction, and date), but also elements used to reason to a decision, such as argument schemes and the components input to the schemes. We use the Protégé ontology editor and knowledge acquisition system, current guidelines for ontology development, and tools for visual and linguistic presentation of the ontology.
Recent Paper Submissions
During my time at the Leibniz Center for Law working on the IMPACT Project, my colleagues Tom van Engers and Kiavash Bahreini and I prepared and submitted three papers to conferences and workshops. The drafts of the papers are linked below along with the abstracts. Comments welcome.
A Framework for Enriched, Controlled On-line Discussion Forums for e-Government Policy-making
Adam Wyner and Tom van Engers
Submitted to eGOV 2010
Abstract
The paper motivates and proposes a framework for enriched on-line discussion forums for e-government policy-making, where pro and con statements for positions are structured, recorded, represented, and evaluated. The framework builds on current technologies for multi-threaded discussion lists by integrating modes, natural language processing, ontologies, and formal argumentation frameworks. With modes other than the standard reply “comment”, users specify the semantic relationship between a new statement and the previous statement; the result is an argument graph. Natural language processing with a controlled language constrains the domain of discourse, eliminates ambiguity and unclarity, allows a logical representation of statements, and facilitates information extraction. However, the controlled language is highly expressive and natural. Ontologies represent the knowledge of the domain. Argumentation frameworks evaluate the argument graph and generate sets of consistent statements. The output of the system is a rich and articulated representation of a set of policy statements which supports queries, information extraction, and inference.
From Policy-making Statements to First-order Logic
Adam Wyner, Tom van Engers, and Kiavash Bahreini
Submitted to eGOVIS 2010
Abstract
Within a framework for enriched on-line discussion forums for e-government policy-making, pro and con statements for positions are input, structurally related, then logically represented and evaluated. The framework builds on current technologies for multi-threaded discussion, natural language processing, ontologies, and formal argumentation frameworks. This paper focuses on the natural language processing of statements in the framework. A small sample policy discussion is presented. We adopt and apply a controlled natural language (Attempto Controlled English) to constrain the domain of discourse, eliminate ambiguity and unclarity, allow a logical representation of statements which supports inference and consistency checking, and facilitate information extraction. Each of the policy statements is automatically translated into first-order logic. The result is a logical representation of the policy discussion which we can query, draw inferences from (given ground statements), test for consistency, and extract detailed information from.
Towards Web-based Mass Argumentation in Natural Language
Adam Wyner and Tom van Engers
Submitted to EKAW 2010
Abstract
Within the artificial intelligence community, argumentation has been studied for quite some years now. Despite progress, the field has not yet succeeded in creating support tools that members of the public could use to contribute their views to discussions of public policy. One important reason for that is that the input statements of participants in policy-making discussions are put forward in natural language, while translating the statements into the formal models used by argumentation scientists is cumbersome. These formal models can be used to automatically reason with, query, or transmit domain knowledge using web-based technologies. Making this knowledge explicit, formal, and expressed in a language which a machine can process is a labour, time, and knowledge intensive task. Moreover, such translation requires expertise that most participants in policy-making debates do not have. In this paper we describe an approach with which we aim to contribute to a solution of this knowledge acquisition bottleneck. We propose a novel, integrated methodology and framework which adopts and adapts existing technologies. We use semantic wikis, which support mass, collaborative, distributed, dynamic knowledge acquisition. In particular, ACEWiki incorporates NLP tools, enabling linguistically competent users to enter their knowledge in natural language, while yielding a logical form that is suitable for automated processing. In the paper we explain how we can extend ACEWiki and augment it with argumentation tools which elicit knowledge from users, make implicit information explicit, and generate subsets of consistent knowledge bases from inconsistent knowledge bases. To a set of consistent propositions, we can apply automated reasoners, allowing users to draw inferences and make queries. The methodology and framework take a fragmentary, incremental development approach to knowledge acquisition in complex domains.
Semantic Processing of Legal Texts Workshop
In this post you will find information on the Semantic Processing of Legal Texts workshop, held in conjunction with the Language Resources and Evaluation Conference. Below please find a link to the conference, information on the workshop, and the workshop program.
LREC
Language Resources and Evaluation Conference, May 17-23, Malta.
LREC 2010 Workshop on
SEMANTIC PROCESSING OF LEGAL TEXTS (SPLeT-2010)
23 May 2010, Malta
Workshop Description
The legal domain represents a primary candidate for web-based information distribution, exchange and management, as testified by the numerous e-government, e-justice and e-democracy initiatives worldwide. The last few years have seen a growing body of research and practice in the field of Artificial Intelligence and Law which addresses a range of topics: automated legal reasoning and argumentation, semantic and cross-language legal information retrieval, document classification, legal drafting, legal knowledge discovery and extraction, as well as the construction of legal ontologies and their application to the law domain. In this context, it is of paramount importance to use Natural Language Processing techniques and tools that automate and facilitate the process of knowledge extraction from legal texts.
Over the last two years, a number of dedicated workshops and tutorials specifically focusing on different aspects of semantic processing of legal texts have demonstrated the current interest in research on Artificial Intelligence and Law in combination with Language Resources (LR) and Human Language Technologies (HLT). The LREC 2008 Workshop on “Semantic processing of legal texts” was held in Marrakech, Morocco, on the 27th of May 2008. The JURIX 2008 Workshop on “the Natural Language Engineering of Legal Argumentation: Language, Logic, and Computation (NaLEA)” focused on recent advances in natural language engineering and legal argumentation. The ICAIL 2009 Workshops “LOAIT ’09 – the 3rd Workshop on Legal Ontologies and Artificial Intelligence Techniques joint with the 2nd Workshop on Semantic Processing of Legal Texts” and “NALEA’09 – Workshop on the Natural Language Engineering of Legal Argumentation: Language, Logic, and Computation” followed, the former focusing on Legal Knowledge Representation with particular emphasis on the issue of ontology acquisition from legal texts, the latter tackling issues related to legal argumentation and linguistic technologies.
To continue this momentum, a 3rd Workshop on “Semantic Processing of Legal Texts” is being organised at the LREC conference to bring to the attention of the broader LR/HLT community the specific technical challenges posed by the semantic processing of legal texts and also to share with the community the motivations and objectives which make it of interest to researchers in legal informatics. The outcomes of these interactions are expected to advance research and applications and to foster interdisciplinary collaboration within the legal domain.
The main goals of the workshop are to provide an overview of the state-of-the-art in legal knowledge extraction and management, to explore new research and development directions and emerging trends, and to exchange information regarding legal LRs and HLTs and their applications.
Areas of Interest
The workshop will focus on the topics of the automatic extraction of information from legal texts and the structural organisation of the extracted knowledge. Particular emphasis will be given to the crucial role of language resources and human language technologies. Papers are invited on, but not limited to, the following topics:
Workshop Chairs
Program Committee
Program
A Description Language for Content Zones of German Court Decisions
Florian Kuhn
Controlling the language of statutes and regulations for semantic processing
Stefan Hoefler and Alexandra Bünzli
Named entity recognition in the legal domain for ontology population
Mírian Bruckschen, Caio Northfleet, Douglas da Silva, Paulo Bridi, Roger Granada, Renata Vieira, Prasad Rao and Tomas Sander
Coffee break
Legal Claim Identification: Information Extraction with Hierarchically Labeled Data
Mihai Surdeanu, Ramesh Nallapati and Christopher Manning
On the Extraction of Decisions and Contributions from Summaries of French Legal IT Contract Cases
Manuel Maarek
Towards Annotating and Extracting Textual Legal Case Factors
Adam Wyner and Wim Peters
Legal Rules Learning based on a Semantic Model for Legislation
Enrico Francesconi
The IMPACT Project — first two days
As I mentioned in a previous post, I am working in Amsterdam for the next three months on setting up a research project at the Leibniz Center for Law. The focus here is to develop information extraction from textual debates (using GATE) and a tool for inputting debates in a structured manner so that they can be further processed for reasoning.
The official IMPACT Project information on CORDIS.
As part of my contribution, I have two draft papers, written in the spring and summer of 2009, which will be further developed at Leibniz: From Arguments in Natural Language to Argumentation Frameworks and Multi-modal Multi-threaded Online Forums. While these are early drafts of papers and not for wider circulation, they give a good indication of the line of thinking and of some of the key ideas we will be pursuing. Comments about these works are very welcome.
Estrella Project Overview
Until September 2009, I worked on the Estrella Project (The European project for Standardized Transparent Representations in order to Extend Legal Accessibility) at the University of Liverpool. One of the documents which I co-authored (with Trevor Bench-Capon) for the project was the ESTRELLA User Report, which is an open document about key elements of the project. In the context of commercial, academic, and governmental collaborations, many of the issues and topics from that project are still relevant, especially concerning the motivations and goals of open source materials for legal informatics. In order to circulate this discussion further afield, I have taken the liberty of reproducing an extract from the report. LKIF stands for the Legal Knowledge Interchange Format, which was a key deliverable in the project. For further documents from the project, see the Estrella Project website.
Overview
The Estrella Project (The European project for Standardized Transparent Representations in order to Extend Legal Accessibility) has developed a platform which allows public administrations to deploy comprehensive solutions for the management of legal knowledge. In reasoning about social benefits or taxation, public administrators must represent and reason with complex legislation. The platform is intended to support the representation of and reasoning about legislation in a way that can help public administrations to improve the quality and efficiency of their services. Moreover, given a suitable interface, the legislation can be made available for the public to interact with. For example, LKIF tools could be made available to citizens via the web to help them to assess their eligibility for social benefits as well as to fill out the appropriate application forms.
The platform has been designed to be open and standardised so that public administrations need not become dependent on proprietary products of particular vendors. Along the same lines, the platform supports interoperability among various components for legal knowledge-based systems, allowing public administrations to choose freely among the components. A standardised platform also enables a range of vendors to develop innovative products to suit particular market needs without having to be concerned with an all-encompassing solution, compatibility with other vendors, or being locked out of a strategic market by “monolithic” vendors. In addition, the platform abstracts from the expression of legislation in different natural languages, providing a common, abstract legal “lingua franca”.
The main technical achievement of the Estrella Project is the development of a Legal Knowledge Interchange Format (LKIF), which represents legal information in a form which builds upon emerging XML-based standards of the Semantic Web. The project platform provides Application Programmer Interfaces (APIs) for interacting with legal knowledge-based systems using LKIF. LKIF provides formalisms for representing concepts (“ontologies”), inference rules, precedent cases, and arguments. An XML document schema for legislation has been developed, called MetaLex, which complements and integrates national XML standards for legislation. This format supports document search, exchange, and association among documents, and it enforces a link between legal sources and the legal knowledge systems which reason about the information in the sources. In addition, a reference inference engine has been developed which supports reasoning with legal knowledge represented in LKIF. The utility of LKIF as an interchange format for legal knowledge has been demonstrated with pilot tests in which legal documents expressed in the proprietary formats of several vendors are translated from the format of one vendor to that of another.
Background Context
The Estrella Project originated in the context of European Union integration, where:
- The European Parliament passes EU-wide directives which need to be incorporated into or related to the legislation of member states.
- Goods, services, and citizens are free to move across open European borders.
- Democratic institutions must be strengthened as well as be more responsive to the will of the citizenry.
- Public administrations must be more efficient and economical.
In the EU, the legal systems of member states have been composed of heterogeneous, often conflicting, rules and regulations concerning taxes, employment, education, pensions, health care, property, trade, and so on. Integration of new EU legislation with the existing legislation of the member states, as well as homogenisation of legal systems across the EU, has been problematic, complex, and expensive to implement. As the borders of member states open, the rules and regulations concerning the benefits and liabilities of citizens and businesses must move as people, goods, and services move. For example, laws concerning employment and pensions ought to be comparable across the member states so as to facilitate the movement of employees across national boundaries. In addition, there are more general concerns about improving the functionality of the legal system so as to garner public support for it, promoting transparency, compliance, and citizen involvement. Finally, the costs of administering the legal system by EU administrative departments, administrations of member states, and companies throughout the EU are significant and rising. The more complex and dynamic the legislative environment, the more burdensome the costs.
Purposes
Given this background context, the Estrella Project was initiated with the following purposes in mind:
- to facilitate the integration of EU legal systems
- to modernise public administration at the levels of the EU and within member states by supporting efficiency, transparency, accountability, accessibility, inclusiveness, portability, and simplicity of core governmental processes and services
- to improve the quality of legal information by testing legal systems for consistency (are there contradictions between portions of the law?) and correctness (is the law achieving the goal it is specified for?).
- to reduce the costs of public administration
- to reduce private sector costs of managing their legal obligations
- to encourage public support for democratic institutions by participation, transparency, and personalisation of services
- to ease the mobility of goods, services, and EU citizens within the EU
- to support businesses across EU member states
- to provide the means to “modularise” the legal systems for different levels of EU legal structure, e.g. provide a “municipal government” module which could be amended to suit local circumstances
- to support a range of governmental and legal processes across organisations and on behalf of citizens and businesses
- to support a variety of reasoning patterns as needed across a range of resources (e.g. directives, legal case bases).
Forthcoming Article: On Controlled Natural Languages: Properties and Prospects
I am a co-author of the forthcoming article On Controlled Natural Languages: Properties and Prospects. From the abstract:
This collaborative report highlights the properties and prospects of Controlled Natural Languages (CNLs). The report poses a range of questions concerning the goals of the CNL, the design, the linguistic aspects, the relationships and evaluation of CNLs, and the application tools. In posing the questions, the report attempts to structure the field of CNLs and to encourage further systematic discussion by researchers and developers.
The reference and link to the article:
A. Wyner, K. Angelov, G. Barzdins, D. Damljanovic, N. Fuchs, S. Hoefler, K. Jones, K. Kaljurand, T. Kuhn, M. Luts, J. Pool, M. Rosner, R. Schwitter, and J. Sowa. On Controlled Natural Languages: Properties and Prospects, to appear in: N.E. Fuchs (ed.), Workshop on Controlled Natural Languages, CNL 2009, LNCS/LNAI 5972, Springer, 2010.
Information Extraction of Legal Case Features with Lists and Rules
In this post, we show how legal case features can be annotated using lists and rules in GATE. By features, we mean a range of detailed information that may be relevant to searching for cases or extracting information, such as the parties, the other legal professionals involved (judges, lawyers, etc.), location, decision, case citation, legislation, and so on. In a forthcoming related post, we discuss how to use an ontology to annotate cases. We give some background discussion of case based reasoning in Information Extraction of Legal Case Factors. (See introductory notes on this and related posts.)
Features of cases
Legal cases contain a wealth of detailed information such as:
- Case citation.
- Names of parties.
- Roles of parties, meaning plaintiff or defendant.
- Sort of court.
- Names of judges.
- Names of attorneys.
- Roles of attorneys, meaning the side they represent.
- Final decision.
- Cases cited.
- Relation of precedents to current case.
- Case structural features such as sections.
- Nature of the case, meaning using keywords to classify the case in terms of subject (e.g. criminal assault, intellectual property, ….)
With respect to these features, one would want to make a range of queries (using some appropriate query language), such as the following; annotation-based query sketches appear after the list.
- In what cases has company X been a defendant?
- In what cases has attorney Y worked for company X, where X was a defendant?
- What are the final decisions for judge Z?
- If the case concerns criminal assault, was a weapon used?
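Once such annotations are in place, queries of this sort can be phrased over the annotated corpus, for instance in the JAPE-based pattern syntax of GATE’s ANNIC tool (see the later post Information Extraction with ANNIC). The two sketches below are illustrative only, using annotation names produced by the rules in this post:
- {JudgeName} — returns all passages annotated as judge names.
- {JudgeName}({Token})*10{JudgementTerm} — returns a judge’s name followed within ten tokens by a judgement term, a first approximation to finding the decisions associated with a judge.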
We initially based our work on Bransford-Koons’s 2005 Ph.D. thesis, commenting on, adapting, and adding to it. We used cases from the California criminal courts that were used in that work, since the lists and rules are highly specific to them.
Output
We have the following sample outputs from our lists and rules applied to People v. Coleman, 117 Cal App. 2d 565. In the first figure, we find the address, court district, citation, case name, counsels for each side, and the roles. There are aspects which need to be further cleaned up, but this gives a flavour of the annotations.

In the second figure, we focus on additional information such as structural sections (e.g. Opinion), the name of the judge, and terms having a bearing on criminal assault and weapons.

In the final figure, we identify the decision.

GATE
In the archive, we have the application, lists, JAPE rules, and graphics. The lists.def file in this archive is associated with the various other lists. The JAPE rules may have different names from what is found in the application and discussed below, but (so far as we understand) this should make no difference to the functionality.
Lists
The gazetteer lists used are the following; they are contained in a master list labelled DSAGaz. We sample and comment on them below.
- lists.def. The gazetteer list which contains the lists below. When importing this along with the standard ANNIE list, this list is renamed in the application.
- attack_words.lst. Actions that can be construed as attacks such as hit, hitting, throw, thrown, threw,….
- intention.lst. Terms for intention such as intend, intends, intending,…, expect, expects,….
- judgements.lst. Terms related to judgment such as granted, denied, reversed, overturned, remanded,….
- judgeindicator.lst. The indicator “J.”. This is a problematic indicator if it is part of an individual’s name.
- criminal_assault.lst. Terms related to assault such as assault, violent injury, ability,…. It is unclear just how cohesive this set of terms is.
- legal_appellate_districts.lst. A list of appellate districts such as Fifth Appellate District, Fifth Dist.,….
- legal_casenames.lst. Terms that can be used to indicate case names such as v., In Re,….
- legal_counselnames.lst. Terms for counselor titles such as Attorney General, Deputy Public Defender,….
- legal_general.lst. Terms for footnotes or numbering sections such as fn., No.,….
- legal_opinion_sections.lst. Terms for sections of legal opinion such as concurring, counsel, dissenting, opinion,….
- legal_coa.lst. Terms for causes of action such as aggravated assault, assault, breaking and entering, burglary, robbery,….
- legal_code_citations.lst. Code citation information such as Civ. Code, Penal Code,….
- us_district_abb_01.lst. Abbreviations for legal districts such as Cal., P., Wis.,….
- us_context_abb_01.lst. Abbreviations for participant roles such as App., Rptr,….
- legal_citations.lst. Abbreviations for citations and related to districts such as Cal.2d, Cal.App. 3d,….
- legal_parties.lst. Terms for legal roles such as amicus curiae, appellant, appellee, counsel, defendant, plaintiff, victim, witness,….
- lower_courts.lst. Phrases for other courts such as Municipal Court of, Superior Court of,….
- possible_weapons.lst. A list of items that could be weapons such as automobile, bat, belt,….
- weapons.lst. A list of items that are weapons such as assault rifle, axe, club, fist, gun,….
Discussion of Lists
We used some of the lists directly from Bransford-Koons 2005, but they are clearly in need of reconstruction and extension. A general problem is that the lists are defined for US case law and particularly the California district courts. Thus, we cannot simply apply the lists to different jurisdictions, e.g. the United Kingdom; the lists and rules must be relativised to different contexts. More technically, lists have alternative graphical (capital or lower case) or morphological forms, which would be better addressed using a Flexible Gazetteer. In addition, it is unclear how one could bound the range of relevant terms appropriately and give them interpretations that are relevant to the context; in general, a lexicon or ontology could give us a better list of terms, but we must find some means to construe them as need be in the legal context. For example, we have a range of attack action terms such as hit, hitting, throw, thrown, threw,…; in some contexts these actions need not be construed as attacks, e.g. in baseball. Some means needs to be found to ascribe the appropriate interpretation in context. A related issue is whether we must list all alternative forms of some terms (also taking into consideration spaces) or whether we can better write JAPE rules; this is relevant for the list of appellate districts, where we find both abbreviations and alternative elements of information as in Fifth Appellate District, Fifth Appellate District Div 1, and Fifth Appellate District, Division 1. Along these lines, we would prefer a systematic means to relate abbreviations to the terms they abbreviate. In our view, more general solutions are better than specific ones which list information; lists ought to contain arbitrary information, while JAPE rules construct systematic information.
JAPE Rules
Given the lists, we have JAPE rules to annotate the relevant portions of text; a sketch of one such rule appears after the list.
- AppellantCounsel: annotates the appellant counsel.
- RespondentCounsel: annotates the respondent counsel.
- DSACounsellor: annotates counsels.
- SectionsTerm: annotates sections relative to the list of section terms.
- CaseRoleGeneral
- DSACaseName2: annotates the case name.
- DSACaseName: annotates the case name.
- DSACaseCit: annotates the case citation.
- CriminalAssault: annotates terms for criminal assault.
- CauseOfAction: annotates for causes of action.
- AttackTerm: annotates attack terms.
- AppellateDistrict: annotates districts of courts.
- DecisionStatement: annotates a sentence as the decision statement.
- JudgementTerm: annotates terms related to judgement.
- JudgeName: annotates the names of judges.
- JudgeInd: annotates the judge name indicator.
- IntentTerm: annotates terms of intent.
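To give a flavour of the shape of these rules, here is a minimal JAPE sketch in the style of JudgementTerm. It assumes that the gazetteer assigns majorType judgement to entries from judgements.lst; the actual rules in the archive may differ in detail.

Phase: JudgementTerms
Input: Lookup
Options: control = appelt

// Promote any gazetteer Lookup whose majorType is "judgement"
// to a JudgementTerm annotation.
Rule: JudgementTerm
(
  {Lookup.majorType == "judgement"}
):jt
-->
:jt.JudgementTerm = {rule = "JudgementTerm"}

The same Lookup-to-annotation pattern covers several of the rules above (e.g. IntentTerm, AttackTerm, CriminalAssault), varying only in the majorType tested and the annotation produced.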
Discussion
Some of these rules annotate sentences, while others annotate entities with respect to some property. Some of the rules don’t work quite as well as we would wish and could stand further refinement, such as the rule for the roles of counsels; the solution we have is rather ad hoc. Nonetheless, as a first pass, the lists and rules give some indication of what is possible.
Order of application
- Document Reset PR
- RegexSentenceSplitter
- ANNIE English Tokeniser
- ANNIE POS Tagger
- MorphologicalAnalyzer
- DSAGaz
- AnnieGaz
- Flexible Gazetteer
- NPChunker
- ANNIE NE Transducer
- IntentTerm
- JudgeInd
- JudgeName
- JudgementTerm
- DecisionStatement
- Weapons
- AppellateDistrict
- AttackTerm
- CauseOfAction
- CriminalAssault
- DSACaseCit
- DSACaseName
- DSACaseName2
- DSACaseNameAZW
- CaseRoleGeneral
- SectionsTerm
- DSACounsellor
- RespondentCounsel
- AppellantCounsel
Discussion
Despite the limitations, this gives some useful, preliminary results which can easily be built upon. Moreover, we know of no other public, open system of annotating case elements (or factors).
Information Extraction with ANNIC
In Information Extraction of Legal Case Factors, we presented lists and rules for annotation of legal case factors. In this post, we go one step further and use the ANNotations In Context (ANNIC) tool of GATE. This is a plugin which helps one search for annotations, visualise them, and inspect their features. It is useful for JAPE rule development. We outline how to plug in, load, and run ANNIC. (See introductory notes on this and related posts.)
Introduction to ANNIC
ANNIC is an annotation indexing and retrieval system. It is integrated with the data stores, where results of annotations on a corpus can be saved. Once a processing pipeline is run over the corpus, we can use ANNIC to query and inspect the contexts where annotations appear; the queries are in a subset of the JAPE language, so can be complex. The results of the queries are presented graphically, making them easy to understand. As such, ANNIC is a very useful tool in the development of rules as one can discover and test patterns in corpora. There is also an export facility, so the results can be presented in a file, but this is not a full information extraction system such as one might want with templates.
For later, but important to know from the documentation: “Be warned that only the annotation sets, types and features initially indexed will be updated when adding/removing documents to the datastore. This means, for example, that if you add a new annotation type in one of the indexed document, it will not appear in the results when searching for it.” This implies that where one adds new annotations to the pipeline, one should delete the old data store and create another one with respect to the new results. For example, if one ran the pipeline without POS tagging, one cannot add POS tagging later and inspect it in the datastore.
Further details on ANNIC are available at GATE documentation on ANNIC and there is an online video.
Instantiating the serial data store
The following steps are used to create the requisite parts and inspect them with ANNIC. One starts with an empty GATE, then adds processing resources, language resources, and pipelines, since these can all be related to the data store in a later step. This material is adapted or adopted from the GATE ANNIC documentation, cutting out many of the options. We instantiate a serial data store (SSD), which is how the annotated documents are saved and searched. The application, lists, and rules that this example uses are from Information Extraction of Legal Case Factors.
- Right-click on Datastores > Create datastore.
- From the drop-down list select “Lucene Based Searchable DataStore”.
- At the input window, provide the following parameters:
- DataStore URL: Select an empty folder where the data store is created.
- Index Location: Select an empty folder. This is where the index will be created.
- Annotation Sets: Provide the annotation sets that you wish to include or exclude from being indexed. There are options here, but we want to index all the annotation sets in all the documents, so make this list empty.
- Base-Token Type: These are the basic tokens of any document (e.g. Token) which your documents must contain in order to be indexed.
- Index Unit Type: This specifies the unit of index (e.g. Sentence). In other words, only annotations lying within the boundaries of these units are indexed (e.g. in the case of Sentence, annotations that span the boundary between two sentences are not considered for indexing). We use the Sentence unit.
- Features: Users can specify the annotation types and features that should be included or excluded from being indexed (e.g. exclude SpaceToken, Split, or Person.matches).
- Click OK. If all parameters are OK, a new empty searchable SSD will be created.
- Create an empty corpus and save it to the SSD.
- Populate the corpus with some documents. Each document in the corpus is automatically indexed and saved to the data store.
- Load some processing resources and then a pipeline. Run the pipeline over the corpus.
- Once the pipeline has finished (and there are no errors), save the corpus in the SSD by right clicking on the corpus, then “Save to its datastore”.
- Double click on the SSD file under Datastores. Click on the “Lucene DataStore Searcher” tab to activate the search GUI.
- Now you are ready to specify a search query of your annotated documents in the SSD.
Output
The GUI opens with parts as shown in the following two figures:


Working with the GUI
The figures above show three main sections. In the top left section, there is a blank text area in which one can write a query (more on this below); the search query returns the “content” of the annotations. There are options to select a corpus, an annotation set, the number of results, and the size of the context (e.g. the number of tokens to the left and right of what one searches for). In the central section, one can see a visualisation of the annotations and values given the search query. An annotation rows manager lets one add (green plus sign) or remove (red minus sign) annotation types and features to display in the central section. The bottom section contains the results table of the query, i.e. the matches to the query across the corpus with their left and right contexts. The bottom section also contains tabbed panes of statistics, such as how many instances of a particular annotation appear.
Queries
The queries written in the blank text area are a subset of the JAPE patterns and use the annotations used in the pipeline. Queries are activated by hitting ENTER (or the Search icon). The following are some template patterns; after them, we give a few specific examples of JAPE pattern clauses which can be used as SSD queries.
- String
- {AnnotationType}
- {AnnotationType == String}
- {AnnotationType.feature == feature value}
- {AnnotationType1, AnnotationType2.feature == featureValue}
- {AnnotationType1.feature == featureValue,
AnnotationType2.feature == featureValue}
Specific queries are:
- Trandes — returns all occurrences of the string Trandes in the corpus.
- {Person} — returns annotations of type Person.
- {Token.string == “Microsoft”} — returns all occurrences of “Microsoft”.
- {Person}({Token})*2{Organization} — returns Person followed by zero or up to two tokens followed by Organization.
- {Token.orth==”upperInitial”, Organization} — returns Token with feature orth with value set to “upperInitial” and which is also annotated as Organization.
- {Token.string==”Trandes”}({Token})*10{Secret} — returns the string “Trandes” followed by zero to ten tokens followed by Secret.
- {Token.string ==”not”}({Token})*4{Secret} — returns the string “not”, followed by up to four tokens, followed by something annotated with Secret.
An example of a result for the last query is:
Trandes averred nothing more than that it possessed secret.
In ANNIC, the result of the query appears as:

One can write queries using the JAPE operators: | (OR operator), +, and *. ({A})+n means one and up to n occurrences of annotation {A}, and ({A})*n means zero or up to n occurrences of annotation {A}.
Summary
ANNIC is particularly useful in writing and refining one’s JAPE rules. Finally, one’s results can be exported as HTML files.
Information Extraction of Legal Case Factors
This post reports initial steps in legal case factor annotation. We first give a very brief and highly simplified overview of case based reasoning using case factors, then present how case factors can be identified using text mining. (See introductory notes on this and related posts.)
Case based reasoning background
In Common Law legal systems such as those of the USA and UK, judges make decisions concerning a case; we can say the judges make the law. This is in contrast to Civil Law legal systems, as in Europe (excluding the UK) or elsewhere, in which legislatures make the law, which judges must then follow. In practice, neither sort of system is purely common law or civil law: the USA and UK have laws made by legislatures, and in Europe, the application of legislative acts in particular circumstances (refining the law to apply to the facts) takes on aspects of common law.
In a Common Law system, judges and lawyers argue using case based reasoning: a current undecided case is compared with precedent cases, which are cases that have already been decided by a court and are accepted as “good law”. In essence, if the current case were exactly like a particular precedent case in all essential ways, then the current case ought to be decided as was the precedent case. Where the current case varies, one must argue comparatively with respect to other precedents. Among the ways in which cases are compared and contrasted, we find the case factors, where factors are prototypical fact patterns of a case. In virtue of the facts of a case, along with the applicable laws and precedents, a judge decides the case. It is, therefore, crucial to be able to identify the facts of a case in order to compare and contrast cases.
In AI and Law, case based reasoning has a long and well developed history and literature (see the work of Hafner, Rissland, Ashley, and Bench-Capon, among others). We make specific reference to Aleven’s 1997 Ph.D. thesis. Given an analysis of cases in terms of factors, one can reason about how a current undecided case should, according to the precedents, be decided. However, a central problem is the knowledge bottleneck — how to analyse cases in terms of factors. By and large, this has been manual labour. In the CATO database of cases discussed in Aleven 1997 (about 140 cases concerning intellectual property), the factors are manually annotated. There has been some effort to automate textual identification of factors in cases (see Bruninghaus and Ashley), but this is done with case summaries, not “actual” cases; moreover, the database, annotation, and other system supports are unavailable, so the results of their experiments are not independently verifiable and cannot be developed further by other researchers.
Factors in text
In the CATO system, texts of case decisions are presented to the student along with a menu of factors; the student associates the factors with the text, in effect, annotating the case as a whole with the factors, but not the linguistic aspects which gave rise to the annotation. The factors are not extracted. The CATO system has other components to support case based argumentation, but these are not relevant to our discussion at this point.
Factors are legal concepts that range over facts. While Aleven 1997 has 27 factors and a factor hierarchy, we only look at two factors in order to give a flavour of our approach.
- Security-Measures
- Description: The plaintiff took efforts to maintain the secrecy of its information.
- The factor applies if: The plaintiff limited access to and distribution of information. Examples: nondisclosure agreements, notification that the information is confidential, securing the information with passwords and secure storage facilities, secure document distribution systems, etc.
- Secrets-Disclosed-Outsiders
- Description: The information was disclosed to outsiders or was in the public domain. The plaintiff either did not have secret information or did not have an interest in maintaining the secrecy of information.
- The factor applies if: The plaintiff disclosed the product information to licensees, customers, subcontractors, etc.
- The factor does not apply if: Plaintiff published the information in a public forum, or all we know is that plaintiff marketed a product from which the information could be ascertained by reverse engineering.
Aleven 1997 illustrates the association of factors with textual passages in a case.

Given the factor description, we make lists and rules which at least highlight candidate passages in the case which might be relevant.
Output
The results of annotating terms and sentences appear in the following figures:


Note that the disclosure sentence seems to be a reasonable candidate for the disclosure factor, but the secrecy sentence is a discussion about the factor rather than a presentation of the factor itself. As we have said, at this point we provide candidate expressions for the factors; further work must be done to annotate the text automatically and more accurately.
GATE
The lists, JAPE rules, graphics, and application state are in the archive. See the related post Information Extraction with ANNIC which uses a GATE plugin to further analyse the results so they can be improved.
Lists
To highlight the relevant passages, we created Lookup lists and then JAPE rules. To create the Lookups, we turned to disclosure and secret in WordNet, taking the SynSets of each, as well as looking at hypernyms (superordinate terms). Making a selection, we created lists using the infinitival, lower case form. This gave us two lists — disclosure.lst and secret.lst.
- disclosure.lst: announce, betray, break, bring out, communicate, confide, disclose, discover, divulge, expose, give away, impart, inform, leak, let on, let out, make known, pass on, reveal, tell
- secret.lst: confidential, confidentiality, hidden, private, secrecy, secret
In the gazetteer itself, disclosure.lst has a majorType disclose, and secret.lst has a majorType secret. With these lists, we homogenize the alternative words for these concepts. It is important that these particular lists are integrated into a lists.def file; in our example, this is ListGaz, but it is not included in the distribution. As the application uses the Flexible Gazetteer (not discussed here), we can look up alternative morphological forms of the words in the lists.
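For concreteness, entries in a lists.def file take the form list-file:majorType (optionally followed by :minorType), so the relevant part of a ListGaz lists.def would look roughly like the following sketch; the actual file is not distributed.

disclosure.lst:disclose
secret.lst:secret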
JAPE rules
We then write JAPE rules so we can more easily identify the factor-related terms and sentences. The first rules turn the majorType into an annotation in the annotation set, highlighting any occurrence of the terms; we could have skipped this, but it is worthwhile to see where and how the terms appear. The second rules classify sentences as relating to disclosure and secrecy; a sketch of one such rule appears after the list.
- SecretFactor01.jape: Annotates any word from the secret.lst.
- DisclosureFactor01.jape: Annotates any word from the disclosure.lst.
- SecretFactorSentence01.jape: Annotates any sentence which contains an annotation Secret.
- DisclosureFactorSentence01.jape: Annotates any sentence which contains an annotation Disclosure.
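A minimal sketch of one of the sentence-level rules, assuming the term-level rules produce an annotation named Disclosure (the rules in the archive may differ in detail):

Phase: DisclosureSentences
Input: Sentence Disclosure
Options: control = appelt

// Annotate any sentence that contains a Disclosure annotation.
Rule: DisclosureFactorSentence
(
  {Sentence contains Disclosure}
):ds
-->
:ds.DisclosureSentence = {rule = "DisclosureFactorSentence"}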
Application order
The order of application of the processing resources is:
- Document Reset PR
- ANNIE Sentence Splitter
- ANNIE English Tokeniser
- ListGaz
- SecretFactor01.jape
- DisclosureFactor01.jape
- SecretFactorSentence01.jape
- DisclosureFactorSentence01.jape
Discussion
As we have already pointed out, the annotations highlight potentially relevant passages. Further refinement is needed. This would be clearer were one to look at more applications of the annotation. It will also be important to consider more factors on more cases and across more domains of case law.
Information Extraction of Conditional Rules
In this post, we extract conditional rules, such as If it rains, then the sidewalk is wet, both in simple examples and from a sample fragment of legislation. (See introductory notes on this and related posts.)
Sample legislation
In legislation (and elsewhere in the law), conditional statements of the form If P, then Q are used. A well-researched example in AI and Law is the UK Nationality Act. In this post, we provide some initial JAPE rules to annotate conditional statements.
We work with several variants of simple conditional statements and a (modified) conditional statement from the UK Nationality Act. We want to annotate each statement as a rule as well as to identify the portions of the rule.
- If Bill is happy, then Jill is happy.
- Jill is happy, if Bill is happy.
- Jill is happy if:
- 1) Bill is happy;
- 2) Bill and Jill are together.
Acquisition by birth or adoption
- (1) A person born in the United Kingdom after commencement shall be a British citizen if —
- (a) at the time of the birth his father or mother is a British citizen; or
- (b) at the time of the birth his father or mother is settled in the United Kingdom.
Output
We want not only to identify a sentence as a rule, but also to identify the parts of the rule, namely the antecedent and the consequent. This may be useful for further processing.
The results appear in a graphic as:

Below, we discuss some of the problems with annotating the legislative rule.
GATE
In the zip file we have the application state, text, graphic, and JAPE rules.
Lists
There are no particular lists for this section; we used the same lists from the rulebook development.
JAPE Rules
We have a cascade of rules as follows; a sketch of two of them appears after the list.
- AntecedentInd01: finds the token “if” in the text. We use this as an indicator that the sentence is or may be a rule. We may have a range of such rules that we take to indicate a rule. We can use them to examine results from a body of texts, refining what is identified as a rule and how. Overgenerate, then prune. After we are clear about the results from individual rules, we can gather the annotations together under another annotation, which generalises the result.
- AntecedentInd02: finds the conditional indicator inside a sentence and annotates the resulting sentence as a rule with a conditional. A general rule like this can be used as we refine the indicators of rule. It also is an example of sentence annotation with respect to properties contained in the sentence.
- ConditionalParts01: finds the string between if and some punctuation, then labels it antecedent. This labels Bill is happy as antecedent in simple sentences such as If Bill is happy, then Jill is happy and Jill is happy, if Bill is happy. It does not work for the list.
- ConditionalParts02: finds the string between a preceding sentence and a comma followed by a conditional indicator, then labels it consequent. This labels Jill is happy as consequent in simple sentences such as Jill is happy, if Bill is happy.
- ConditionalParts03: finds the string between then and the end of the sentence, labelling it consequent. This labels Jill is happy as consequent in simple sentences such as If Bill is happy, then Jill is happy.
- ConditionalParts04: finds the string between a preceding sentence and a conditional indicator followed by a colon, then labels it consequent. This labels Jill is happy as consequent in constructions where the antecedents are presented in a list such as Jill is happy if: Bill is happy and Jill and Bill are together.
- ConditionalParts05: finds the strings between list indicators (see the section on legislative presentation) and some punctuation (semi-colon or period), and labels them as antecedents. This labels Bill is happy as antecedent in Jill is happy if: Bill is happy and Jill and Bill are together.
- ConditionalSentenceClass: annotates sentences as conditionals if they contain a conditional indicator.
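As a concrete illustration, here is a minimal JAPE sketch of an indicator rule and an antecedent rule; the names and details are illustrative, and the rules in the zip file may differ.

Phase: AntecedentInd
Input: Token
Options: control = appelt

// Mark the token "if" as a conditional indicator.
Rule: AntecedentInd01
(
  {Token.string == "if"} | {Token.string == "If"}
):ind
-->
:ind.ConditionalIndicator = {rule = "AntecedentInd01"}

Then, in a later phase:

Phase: ConditionalParts
Input: Token ConditionalIndicator
Options: control = appelt

// Capture the span from the indicator up to the next comma or
// full stop and label it as the antecedent of the rule.
Rule: ConditionalParts01
(
  {ConditionalIndicator}
  (({Token.string != ",", Token.string != "."})+):ant
  ({Token.string == ","} | {Token.string == "."})
)
-->
:ant.Antecedent = {rule = "ConditionalParts01"}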
Application order
The order of application of the processing resources is:
- Document Reset PR
- ANNIE English Tokeniser
- ANNIE Sentence Splitter
- ListFlagLevel1
- AntecedentInd01
- ConditionalParts01
- ConditionalParts02
- ConditionalParts03
- ConditionalParts04
- ConditionalParts05
- ConditionalSentenceClass
Comments
While our application clearly works well for the simple samples of conditional statements, it does not do well with respect to our sample legislation. There are a range of problems: list recognition “(x)”, use of “;” , use of “–“, and use of “or”. Most of these have to do with refining the notions of lists that we inherited from the rulebook example, so we need to refine the rules to the particular context of use. We leave this as an exercise.