Introduction to a Series of Posts on Legal Information Extraction with GATE

This post has notes on and links to several other posts about legal information annotation and extraction using the General Architecture for Text Engineering system (GATE). The information in the posts was presented at my tutorial at JURIX 2009, Rotterdam, The Netherlands; the slides are available here. See the GATE website or my slides for introductory material about NLP and text annotation. For particulars about NLP and legal resources, see the posts and files at the links below.
The Posts
The following posts discuss different aspects of legal information extraction using GATE (live links indicate live posts):

Prototypes
The samples presented in the posts are prototypes only. No doubt there are other ways to accomplish similar tasks, the material is not as streamlined or cleanly presented as it could be, and each section is but a very small fragment of a much larger problem. In addition, there are better ways to present the lists and rules “in one piece”; however, during development and for discussion, it seems more helpful to have elements separate. Nonetheless, as a proof of concept, the samples make their point.
If there are any problems, contact Adam Wyner at adam@wyner.info.
Files
The posts are intended to be self-contained and to work with GATE 5.0. The archive files include the .xgapp file, which is a saved application state, along with the text/corpus, the gazetteer lists, and the JAPE rules needed to run the application. In addition, the archive files include any graph outputs for reference. As noted, one may need to 'fiddle' a bit with the gazetteer lists in the current version.
Graphics
Graphics in the posts can be viewed at a larger and clearer size by right-clicking on the graphic and selecting View Image. The Back button in your browser will close the image and return you to the post.
License
The materials are released under the following license:
By Adam Wyner
Distributed under the Creative Commons
Attribution-Non-Commercial-Share Alike 2.0
If you want to commercially exploit the material, you must arrange a separate license with me. That said, I look forward to further open development of these materials; see my post on Open Source Legal Information.

Using XSLT to Re-represent GATE Output

Once one has processed some documents with GATE, what can one do with the result? After all, information extraction implies that the information is extracted, not simply annotated. (See introductory notes on this and related posts.)
There are several paths. One is to use Annotations in Context (ANNIC), which searches for and displays annotated elements; we discuss how to use ANNIC in a separate post. However, ANNIC does not appear to support an "export" function for further processing of the results. Another path is to export the document with inline annotations; with a bit of manual work, this can then be processed further with EXtensible Stylesheet Language Transformations (XSLT). There are other approaches (e.g. XQuery), but this post provides an example of using XSLT to present the output as a rule book.
In Legislative Rule Extraction, we annotated some legislation. We carry on with the annotated legislation.
Output of GATE
In addition to the graphic output from GATE's application, we can output the results of the annotation either inline or as stand-off (offset) markup. As we are interested in providing alternative presentations of the annotated material, we look at the inline annotations.
In GATE, right-click on the document (after running the application over it) and choose "Save preserving document format". This writes the document out with the annotations as inline XML tags. For our sample text, the saved output covers the following text, with each annotated span wrapped in its annotation tag:


  Article 1
  Subject matter
  This Directive lays down rules concerning the following:
  1) the taking-up and pursuit, within the Community, of the self-employed activities of direct insurance and reinsurance;
  2) the supervision in the case of insurance and reinsurance groups;
  3) the reorganisation and winding-up of direct insurance undertakings.

Legal XML
The GATE output needs to be made into proper XML: it must have a root element and be properly nested. As there will be several rules, each extracted rule should be wrapped in its own legal XML element. There is an issue about how to save and process a full corpus, as the only options for saving are XML or a Datastore, but we leave this aside for the time being. For now, we 'manually' wrap our GATE output as below.
I used the online XSLT editor at w3schools, which lets one experiment and see results right away. In particular, one can cut and paste the XML rulebook (below) into the left-hand pane and the XSLT code (below) into the right-hand pane, hit the edit button, and get the transformed output. One caveat: the XML rulebook may need a bit of editing for spaces and line returns, since there are some bumps between what appears in WordPress and what is needed to run the code.
The XML Rulebook is the annotated Article 1 output shown above, wrapped in a single root element so that the whole document is well-formed.
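As a concrete illustration, here is a hedged sketch of what the wrapped rulebook might look like. The annotation names (ArticleFlag, SectionType, ListStateTop, ListFlagLevel1, SubListStatementPrefinal, SubListStatementFinal) are taken from the Legislative Rule Extraction post; the root element Rulebook is an assumption for illustration and need not match the markup used in the original rulebook.

  <?xml version="1.0" encoding="UTF-8"?>
  <!-- Annotated Article 1, wrapped in an assumed root element -->
  <Rulebook>
    <ArticleFlag>Article 1</ArticleFlag>
    <SectionType>Subject matter</SectionType>
    <ListStateTop>This Directive lays down rules concerning the following</ListStateTop>:
    <ListFlagLevel1>1)</ListFlagLevel1>
    <SubListStatementPrefinal>the taking-up and pursuit, within the Community, of the
      self-employed activities of direct insurance and reinsurance</SubListStatementPrefinal>;
    <ListFlagLevel1>2)</ListFlagLevel1>
    <SubListStatementPrefinal>the supervision in the case of insurance and reinsurance
      groups</SubListStatementPrefinal>;
    <ListFlagLevel1>3)</ListFlagLevel1>
    <SubListStatementFinal>the reorganisation and winding-up of direct insurance
      undertakings</SubListStatementFinal>.
  </Rulebook>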



The XSLT code transforms the rulebook into an HTML page headed My Rulebook, with a Reference Code field, a Title field, and Description fields for the annotated statements.
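A minimal sketch of such a stylesheet, assuming the element names from the rulebook sketch above (the stylesheet used in the original post may well differ), is:

  <?xml version="1.0" encoding="UTF-8"?>
  <xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <!-- Render the rulebook XML as a simple HTML page -->
    <xsl:template match="/">
      <html>
        <body>
          <h2>My Rulebook</h2>
          <p>Reference Code: <xsl:value-of select="Rulebook/ArticleFlag"/></p>
          <p>Title: <xsl:value-of select="Rulebook/SectionType"/></p>
          <p>Description: <xsl:value-of select="Rulebook/ListStateTop"/></p>
          <!-- value-of returns only the first matching node; an xsl:for-each over the
               SubListStatement elements would list every numbered item -->
          <p>Description: <xsl:value-of select="Rulebook/SubListStatementPrefinal"/></p>
          <p>Description: <xsl:value-of select="Rulebook/SubListStatementFinal"/></p>
        </body>
      </html>
    </xsl:template>
  </xsl:stylesheet>

Pasting the rulebook XML into the left-hand pane of the w3schools editor and a stylesheet along these lines into the right-hand pane should produce a page of the kind shown in the output below.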

XSLT Output
The result is the following:
Output of XSLT on the XML Rulebook
In general, one can create any number of rulebooks from the same underlying data, varying the layout and substance of the presentation. For example, we can change the colours or headers easily; we can present more or less information. This is a lot more powerful than the static book that now exists.
Problems and Issues
Our example is a simple illustration of what can be done. Note that we have not yet fulfilled the requirements from our initial post since we have not numbered the sections, but this can be added later.
An important problem is that GATE annotations are not always in accordance with XML standards. In particular, XML markup must be strictly nested, as in

  <a> ... <b> ... </b> ... </a>

There can be no crossover, such as

  <a> ... <b> ... </a> ... </b>

though overlapping annotations of this kind may well occur in GATE. There may be several approaches to this problem, but we leave that for future discussion.
Another problem is that “Save preserving document format” only works with documents and not corpora, and we might want to work with corpora.
Finally, XSLT is useful for transforming XML files, not for extracting information from them; for that, one would need something such as XQuery.
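As a small illustration, an XQuery expression could pull just the numbered item descriptions out of the rulebook; the file name rulebook.xml and the element names follow the sketches earlier in this post and are illustrative only.

  (: return the text of each numbered item description in the rulebook :)
  for $d in doc("rulebook.xml")/Rulebook/SubListStatementPrefinal
  return string($d)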
By Adam Wyner
Distributed under the Creative Commons
Attribution-Non-Commercial-Share Alike 2.0

Legislative Rule Extraction

In this post, we discuss the annotation of information in legislation, for example, in order to create a rule book from the legislation. There are two distinct tasks and two tools. First, we want to take the original legislation and annotate it; for this, we use GATE. Second, we want to transform the output of GATE, using the annotations, into some alternative, web-compatible format; for this, we use EXtensible Stylesheet Language Transformations (XSLT). This is presented in the post Using XSLT to Re-represent GATE Output. John Cyriac of compliancetrack outlined the problem that is addressed in these two posts. (See introductory notes on this and related posts.)
Sample legislation and text
The text we are working with is a sample from Insurance and Reinsurance (Solvency II) from the European Parliament.
SUBJECT MATTER AND SCOPE
Article 1
Subject matter
This Directive lays down rules concerning the following:
1) the taking-up and pursuit, within the Community, of the self-employed activities of direct insurance and reinsurance;
2) the supervision in the case of insurance and reinsurance groups;
3) the reorganisation and winding-up of direct insurance undertakings.
Article 2
Scope
1. This Directive shall apply to direct life and non-life insurance undertakings which are established in the territory of a Member State or which wish to become established there. It shall also apply to reinsurance undertakings, which conduct only reinsurance activities, and which are established in the territory of a Member State or which wish to become established there with the exception of Title IV.

There are additional articles which we do not work with. The article is not a logical statement (an if-then statement), but identifies the matters with which the directive is concerned. Each statement of the article may be understood as a conjunct: the rules concern a, b, and c. However, this is not yet relevant to our analysis. See the separate post about rule extraction for conditionals.
Target result
We want to annotate the first article, picking out each section for extraction. In particular, for a practitioner to use the extraction, it should be in a format which identifies the following:
Reference Code: Article 1
Title: Subject Matter
Level: 1.0
Description: This Directive lays down rules concerning the following:
Level: 1.1
Description: the taking-up and pursuit, within the Community, of the self-employed activities of direct insurance and reinsurance;
Level: 1.2
Description: the supervision in the case of insurance and reinsurance groups;
Level: 1.3
Description: the reorganisation and winding-up of direct insurance undertakings;

Output
The output of GATE appears in the following figure:
Annotating the structure of legislative rules
GATE
To get this output, we used the files and application state in GATELegislativeRulebook.tar.gz.
Text
The text is a fragment of the legislation above and is found in the SmallRulebookText.tex file.
Lists
We use the following lists in addition to the standard ANNIE lists, which means that the lists.def file must incorporate these files (sample entries are shown after the list). This is the resource ListGaz given in the .xgapp file (though this may require some additional fiddling and files to work).

  • roman_numerals_i-xx.lst: It has majorType = roman_numeral. This is a list of Roman numerals from i to xx.
  • rulebooksectionlabel.lst: It has majorType = rulebooksection. This is a list of section headings such as: Subject matter, Scope, Statutory systems, Exclusion from scope due to size, Operations, Assistance, Mutual undertakings, Institutions, Operations and activities.
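For reference, each gazetteer list is a plain text file with one entry per line, and the lists are registered in lists.def with lines of the form filename:majorType (optionally followed by a minorType). For the two lists above, the entries would look roughly like this:

  roman_numerals_i-xx.lst:roman_numeral
  rulebooksectionlabel.lst:rulebooksection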

The list of section headings is taken from the legislation, which presumably follows standard guidelines for section heading labels. For the list of roman numerals, there are more general methods using Regex to match well-formed numerals (see Roman Numerals in Python and Regex for Roman Numerals); however, for our purposes it is simpler to use limited lists rather than Regex. In either case, several problems arise, as we see later.
JAPE rules

  • ListArticleSection.jape: The word Article (from the lookup) followed by a number is annotated ArticleFlag.
  • ListFlagLevel1.jape: A number followed by a period or a closing parenthesis is annotated ListFlagLevel1 (a sketch of this rule appears after the list).
  • ListFlagLevel1sub.jape: A number followed by a letter followed by a period is annotated ListFlagLevel1sub.
  • ListFlagLevel2.jape: A string of lower-case letters followed by a closing parenthesis is annotated ListFlagLevel2.
  • ListFlagLevel3.jape: A Roman numeral from a lookup list followed by a closing parenthesis is annotated ListFlagLevel3.
  • RuleBookSectionLabel.jape: Looks up section labels from a list and annotates them SectionType; for example, Subject matter, Scope, and Statutory systems.
  • ListStatement01.jape: A string which occurs between a SectionType annotation and a colon is annotated ListStateTop.
  • ListStatement02.jape: A string which occurs between a ListFlagLevel1 and a semicolon is annotated SubListStatementPrefinal.
  • ListStatement03.jape: A string which occurs between a ListFlagLevel1 and a period is annotated SubListStatementFinal.
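To give the flavour of these rules, here is a minimal sketch of what ListFlagLevel1.jape might contain; it is a reconstruction for discussion and need not match the rule shipped in the archive.

  Phase: ListFlagLevel1
  Input: Token
  Options: control = appelt

  // Match a number token followed by "." or ")" and annotate the span
  Rule: ListFlagLevel1
  (
    {Token.kind == number}
    ({Token.string == "."} | {Token.string == ")"})
  ):flag
  -->
  :flag.ListFlagLevel1 = {rule = "ListFlagLevel1"}

The other level rules differ mainly in the left-hand-side pattern (letters, Roman-numeral Lookups from the gazetteer, and so on).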

Application order
The order of application of the processing resources is:

  • Document Reset PR
  • ANNIE Sentence Splitter
  • ANNIE English Tokeniser
  • ListGaz
  • RulebookSectionLabel
  • ListArticleSection
  • ListStatement01
  • ListFlagLevel01
  • ListStatement02
  • ListStatement03

Additional issues
This example does not show the other list flag levels (e.g. those using letters or Roman numerals), nor the results on other parts of the legislation.
While the result for this specific text is attractive, there is much work to be done. The lists and rules overgenerate. For example, the rules flag the string avrt as a level flag because v is recognised as a Roman numeral. In other cases, too long a passage is selected as the statement at the top of the list. Still, the example demonstrates a proof of concept, particularly in conjunction with the post on XSLT.
By Adam Wyner
Distributed under the Creative Commons
Attribution-Non-Commercial-Share Alike 2.0

Open Source Information Extraction: Data, Lists, Rules, and Development Environment

Open source software development and open standards are widely discussed and practiced, and they have led to a range of useful applications and services. GATE is one such example.
However, one quickly learns that open source can easily mean open only to a certain extent: GATE is open source, but the applications and additional functionalities developed on top of GATE often are not. On the one hand, this makes perfect sense, as the applications and functionalities are added value, labour intensive, and so on. On the other hand, the scientific community cannot verify, validate, or build on prior work unless the applications and functionalities are available. Closed development can also hinder commercial development, since it impedes progress, dissemination, and a common framework from which everyone benefits. It also fails to recognise the fundamentally experimental character of information extraction. By contrast, the rapid growth and contributions of the natural sciences (biology, physics, chemistry, etc.) and the theoretical sciences (mathematics) could only have occurred in an open, transparent development environment.
I advocate open source information extraction, in which an information extraction result is reported only if it can be independently verified and built on by members of the scientific community. This means that the following must be made available concurrently with the report of the result:

  • Data and corpora
  • Lists (e.g. gazetteers)
  • Rules (e.g. JAPE rules)
  • Any additional processing components (e.g. information extraction to schemes or XSLT)
  • Development environment (e.g. GATE)

In other words, the results must be independently reproducible in full. The slogan is:

No publication without replicability.

This would:

  • Contribute to the research community and build on past developments.
  • Support teaching and learning.
  • Encourage interchange. The Semantic Web chokes on different formats.
  • Return academic research to the common (i.e. largely taxpayer funded) good rather than owned by the researcher or university. If someone needs to keep their work private, they should work at a company.
  • Lead to distributive, collaborative research and results, reducing redundancy and increasing the scale and complexity of systems.

The knowledge bottleneck, particularly in relation to language, has not been and likely will not be solved by any one individual or research team. Open source information extraction will, I believe, make greater progress toward addressing it.
Obviously, money must be made somewhere. One source is public funding, including contributions from private organisations which see value in building public infrastructure. Another is to make money "around" the free material, as with other open source software, systems, and public information, by adding non-core goods, services, or advertising.
By Adam Wyner
Distributed under the Creative Commons
Attribution-Non-Commercial-Share Alike 2.0

Natural Language Processing Techniques for Managing Legal Resources on the Semantic Web — Tutorial Slides

I gave a tutorial on natural language processing for legal resource management at the International Conference on Legal Information Systems (JURIX) 2009 in Rotterdam, The Netherlands. The slides are available below. Comments welcome.
The following people attended:

  • Andras Forhecz, Budapest University of Technology and Economics, Hungary
  • Ales Gola, Ministry of Interior of Czech Republic
  • Harold Hoffman, University Krems, Austria
  • Czeslaw Jedrzejek, Poznan University of Technology, Poland
  • Manuel Maarek, INRIA Grenoble, Rhone-Alpes
  • Michael Sonntag, Johannes Kepler University Linz, Austria
  • Vit Stastny, Ministry of Interior of Czech Republic

I thank the participants for their comments and look forward to continuing the discussions which we started in the tutorial.
The slides can be found at the link below. The file is 2.2MB. The slides were originally prepared using Open Office's Impress, then converted to PowerPoint.
Natural Language Processing Techniques for Managing Legal Resources on the Semantic Web
There is a bit more in the slides than was presented at the tutorial; they additionally cover ontologies, parsers, and semantic interpreters.
In the coming weeks, I will make available additional detailed instructions as well as gazetteers and JAPE rules. I also plan to continue adding text mining materials.
By Adam Wyner
Distributed under the Creative Commons
Attribution-Non-Commercial-Share Alike 2.0

Instructions for GATE's Onto Root Gazetteer

In this post, I present User Manual notes for GATE’s Onto Root Gazetteer (ORG) and references to ORG. In Discussion of GATE’s Onto Root Gazetteer, I discuss aspects of Onto Root Gazetteer which I found interesting or problematic. These notes and discussion may be of use to those researchers in legal informatics who are interested in text mining and annotation for the semantic web.
Thanks to Diana Maynard, Danica Damljanovic, Phil Gooch, and the GATE User Manual for comments and materials which I have liberally used. Errors rest with me (and please tell me where they are so I can fix them!).
Purpose
Onto Root Gazetteer links text to an ontology by creating Lookup annotations which come from the ontology rather than a default gazetteer. The ontology is preprocessed to produce a flexible, dynamic gazetteer; that is, it is a gazetteer which takes into account alternative morphological forms and can be added to. An important advantage is that text can be annotated as an individual of the ontology, thus facilitating the population of the ontology.
Besides being flexible and dynamic, ORG has several advantages over other gazetteers:

  • It is more richly structured (see it as a gazetteer containing other gazetteers)
  • It allows one to relate textual and ontological information by adding instances.
  • It gives one richer annotations that can be used for further processes.

In the following, we present the step by step instructions for ‘rolling your own’, then show the results of the ‘prepackaged’ example that comes with the plugin.
Setup
Step 1. Add (if not already used) the Onto Root Gazetteer plugin to GATE following the usual plugin instructions.
Step 2. Add (if not already used) the Ontology Tools (OWLIM Ontology LR, OntoGazetteer, GATE Ontology Editor, OAT) plugin. ORG uses ontologies, so one must have these tools to load them as language resources.
Step 3. Create (or load) an ontology with OWLIM (see the instructions on the ontologies). This is the ontology that is the language resource that is then used by Onto Root Gazetteer. Suppose this ontology is called myOntology. It is important to note that OWLIM can only use OWL-Lite ontologies (see the documentation about this). Also, I succeeded in loading an ontology only from the resources folder of the Ontology_Tools plugin (rather than from another drive); I don’t know if this is significant.
Step 4. In GATE, create processing resources with default parameters:

  • Document Reset PR
  • RegEx Sentence Splitter (or the ANNIE Sentence Splitter, though that one is likely to run more slowly)
  • ANNIE English Tokeniser
  • ANNIE POS Tagger
  • GATE Morphological Analyser

Step 5. When all these PRs are loaded, create an Onto Root Gazetteer PR and set its initial parameters. The mandatory ones are as follows (though some are set by default):

  • Ontology: select previously created myOntology
  • Tokeniser: select previously created Tokeniser
  • POSTagger: select previously created POS Tagger
  • Morpher: select previously created Morpher.

Step 6. Create another PR, a Flexible Gazetteer. Among the initial parameters, it is mandatory to select the previously created OntoRootGazetteer for gazetteerInst. For the other parameter, inputFeatureNames, click on the button on the right and, when prompted with a window, add 'Token.root' in the text box provided, then click the Add button. Click OK, give a name to the new PR (optional), and then click OK.
Step 7. To create an application, right-click on Applications, then New –> Pipeline (or Corpus Pipeline). Add the following PRs to the application in this order:

  • Document Reset PR
  • RegEx Sentence Splitter
  • ANNIE English Tokeniser
  • ANNIE POS Tagger
  • GATE Morphological Analyser
  • Flexible Gazetteer

Step 8. Run the application over the selected corpus.
Step 9. Inspect the results. Look at the Annotation Set with Lookup and also the Annotation List to see how the annotations appear.
Small Example
The ORG plugin comes with a demo application which not only sets up all the PRs and LRs (the text, corpus, and ontology) but also provides the application ready to run. This is the file exampleApp.xgapp, which is in the resources folder of the plugin (Ontology_Based_Gazetteer). To start it, open GATE with a clean slate (no other PRs, LRs, or applications), right-click on Applications, select Restore application from file, and load the file from the folder just given.
The ontology used for the illustration describes GATE itself, giving the classes, subclasses, and instances of the system. While the ontology is loaded along with the application, one can also find it here. The text is simple (and comes with the application): language resources and parameters.
FIGURE 1 (missing at the moment)
FIGURE 2 (missing at the moment)
One can see that the token “language resources” is annotated with respect to the class LanguageResource, “resources” is annotated with GATEResource, and “parameters” is annotated with ResourceParameter. We discuss this further below.
One further aspect is important and useful. Since the ontology tools have been loaded and a particular ontology has been used, one can not only see the ontology (open the OAT tab in the window with the text), but one can annotate the text with respect to the ontology — highlight some text and a popup menu allows one to select how to annotate the text. With this, one can add instances (or classes) to the ontology.
Documentation
One can consult the following for further information about how the gazetteer is made, among other topics:

Discussion
See the related post Discussion of GATE’s Onto Root Gazetteer.
By Adam Wyner
Distributed under the Creative Commons
Attribution-Non-Commercial-Share Alike 2.0