I participated (albeit remotely) in a workshop on Legislation and Computers in London on June 25, 2019. The meeting is part of ongoing work by an international group of Parliamentary Counsel to explore and discuss recent developments in machine-readable and executable legislation.
I gave my talk together with one of my collaborators, Fraser Gough, Parliamentary Counsel at the Parliamentary Counsel Office, Scottish Government.
The program of the workshop is here and the slides of the talk are here.
A study in online, collaborative legal informatics
Adam Wyner, University of Aberdeen
Wim Peters, University of Sheffield
Daniel Katz, Michigan State University — Introduction —
This is an academic research study on legal informatics (information processing of the law). The study uses an online, collaborative tool to crowdsource the annotation of legal cases. The task is similar to legal professionals’ annotation of cases. The result will be a public corpus of searchable, richly annotated legal cases that can be further processed, analysed, or queried for conceptual annotations.
Adam and Wim are computer scientists who are interested in language, law, and the Internet. Dan is an academic lawyer also interested in law and the Internet.
We are inviting people to participate in this collaborative task. This is a beta version of the exercise, and we welcome comments on how to improve it. Please read through this blog post, look at the video, and get in contact. — Highlighting, Annotations, and Legal Case Briefs —
In reading, analysing, and preparing a summary of a legal case, law students and legal professionals annotate cases by highlighting and colour coding elements of the case to make for easy identification. Different elements are annotated: the holding, the parties, the facts, and so on. A sample image of annotations is:
— Problem —
To analyse a legal case, legal professionals annotate the case into its constituent parts. The analysis is summarised in a case brief. However, the current approach is very limited:
Analysis is time-consuming and knowledge-intensive.
Case briefs may miss relevant information.
Case analyses and briefs are privately held.
Case analyses are in paper form, so not searchable over the Internet.
Current search tools are for text strings, not conceptual information. We want to search for concepts, such as the holdings by a particular judge with respect to causes of action against a particular defendant. With annotated legal cases, we can enable conceptual search.
There is no capacity to systematically compare, contrast, and evaluate the work by different annotators. Consequently, the annotation task itself is not used as an opportunity to gain greater expertise in case analysis.
— Solution: Crowdsource Annotation —
We use an online legal case annotation tool and share the results to support:
Online search in legal cases for case details and concepts.
Semantic web applications and information extraction.
A crowdsourced legal case corpus.
Training and learning for legal case analysis.
The results of the study would be useful to:
Law school students learning case analysis.
Legal professionals in identifying relevant cases.
Researchers of legal informatics.
Law faculty in training students to analyse legal cases.
Broadly speaking, a corpus of analysed cases makes case law a public resource that democratises legal knowledge. — Annotations: types and features —
To crowdsource conceptual annotations of legal cases, we use the General Architecture for Text Engineering (GATE) Teamware tool. Teamware is a web-based application that provides an annotator with a text to annotate and a list of annotations to use. The task is a web-based version of what legal analysts of cases already do.
We use familiar annotations for legal cases, divided (for ease of reference) into types and features. For example, we have a type Legal Roles and various features to select among, e.g. defendant. We are counting on you to have learned and used these annotations in the course of your legal study and practice.
You do not need to memorise the types and features as they will appear in the GATE Teamware tool. It may be handy to keep this webpage open so you can consult it or you could also print out the page.
The annotations we use are:
Argument For Party – arguments for a particular party, using the most general notion:
for Appellee, for Appellant, for Defendant, for Plaintiff.
Facts – legal and procedural facts:
Cause of Action – the specific legal theory upon which the plaintiff brings the suit.
Defenses raised by Defendant – the defendant's defenses against the cause of action.
Legal Facts – the legally relevant facts of the case that are used in arguing the issues.
Remedy requested by Plaintiff – what the plaintiff asks the court to grant.
Indexes – various indicative information:
Case Citation – the citation of the particular case being annotated.
Court Address – the address of the court.
Hearing Date – the date of the hearing.
Judge Name – the names of the judges, annotated one at a time.
Jurisdiction – the legal jurisdiction of the case.
Issues – the issues before the court:
Procedural Issue – what the appellant claims that the lower court did wrong.
Substantive Issue – the point of law that is in dispute (legal facts have their own annotation).
Legal Roles – the role of the parties in the case:
General – buyer/seller, employer/employee, landlord/tenant, etc.
Other – relevant information not covered by the other annotations.
Procedural History – the disposition of the case with respect to the lower court(s):
Appeal Information – who appealed and why they appealed.
Damages – the damages awarded by the lower court.
Lower Court Decision – the lower court’s decision.
Reasoning Outcomes – various parts of the legal decision:
Dicta – commentary about the judgement and holding, but not part of the rationale.
Holding – the rule of law or legal principle that was applied in making the judgement. You can think about this as the new ground that the court is covering in this case. What legal rule(s) is the court developing or clarifying? The case can have more than one holding if there is more than one legal rule being considered. Note that a holding from a cited precedent is to be considered part of the rationale.
Judgement – Given the holding and the corresponding rationale for the holding, the judgement is the court's final decision about the rights of the parties, the court's response to a party's request for relief, and its bearing on prior decisions (e.g. affirmed, reversed, remanded).
Rationale – the court’s analysis of the issues and the reasons for the holding.
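To give a concrete (and purely illustrative) picture of what the tool records, each highlighted span can be thought of as a standoff annotation: character offsets into the case text plus a type and a feature. A minimal sketch, with an invented case text and offsets (this is not Teamware's actual data format):

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    start: int      # character offset where the span begins
    end: int        # character offset where the span ends (exclusive)
    ann_type: str   # e.g. "Indexes", "Reasoning Outcomes"
    feature: str    # e.g. "Judge Name", "Holding"

# A toy annotated case: the spans point into the case text by offset.
text = "Before JUDGE SMITH. The court holds that the contract is void."
annotations = [
    Annotation(7, 18, "Indexes", "Judge Name"),
    Annotation(20, 62, "Reasoning Outcomes", "Holding"),
]

def find(annotations, text, ann_type=None, feature=None):
    """Conceptual search: return the spans matching a type and/or feature."""
    return [
        text[a.start:a.end]
        for a in annotations
        if (ann_type is None or a.ann_type == ann_type)
        and (feature is None or a.feature == feature)
    ]

print(find(annotations, text, feature="Holding"))
```

With annotations in this form, a conceptual query (e.g. all Holdings, or all spans of a given type) is just a filter over the annotation list rather than a string search.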
— Strategic Phases —
From previous experience and following discussions, we believe it is best if the annotations are grouped together and done in three phases. This allows the annotator to do simpler tasks first and to keep in mind a subset of the relevant annotations.
Phase I: Indexes and Legal Roles
Phase II: Procedural History and Reasoning Outcomes
Phase III: Facts and Issues
For the time being, we are not attending to annotations of Arguments for Party and Other. — Collaborate —
Take a look at the instructional video below. If you wish to collaborate on the task, send an email to Adam Wyner – firstname.lastname@example.org
In the email, please include brief information for:
Your professional affiliation, e.g. institution, company, firm…
Your role where you work
Your background as a legal professional
This will help us know who we are collaborating with; from the pool of candidates, we will select participants for this early study.
You will be sent a username and password so you can log in to Teamware.
We respect your privacy. We are only interested in data in the aggregate and will not reveal any personal data to third parties. — Next —
We have an instructional video that you can open in a new tab or window and that uses QuickTime. It lasts about 14 minutes. This will give you a good idea of what you will be doing. The presenter is Adam Wyner. You can see this here:
Or follow the link on YouTube — Crowdsourcing Legal Case Annotation Instructional Video. Please view it in large (acceptable quality) or full-screen (grainy quality) mode; the video may need to be reloaded in YouTube.
There are additional points about using the tool in the section below on questions, problems, and observations.
After reading this blog post, viewing the instructional video, and receiving your username and password, you can log in to begin annotating at GATE Teamware. — Survey —
When you are done with your task, please answer the questions on the survey to give us feedback on your experience using the annotation tool. The survey is available below. You can scroll down and answer the questions. Don’t forget to hit the “Done” button to submit your responses, which will be very useful in helping us understand your experience and thoughts about using the tool:
— What Then? —
We analyse the annotations from several annotators, comparing and contrasting them (interannotator agreement). This will show us similarities and differences in the understanding of the annotations and cases. As well, the results will help us develop a Gold Standard Corpus of legal cases, which are annotations of cases that annotators agree on. A Gold Standard is essential for information extraction and the development of advanced processing. We will publicly report the analysis of the exercise and make the annotated cases publicly available for re-use.
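For illustration (the labels below are invented, not data from the study), pairwise interannotator agreement on a shared set of spans can be measured with Cohen's kappa, which corrects raw agreement for the agreement expected by chance:

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labelling the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items where the labels match.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: from each annotator's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[l] * freq_b[l] for l in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two annotators labelling the same five spans (made-up example).
a = ["Holding", "Rationale", "Dicta", "Holding", "Rationale"]
b = ["Holding", "Rationale", "Rationale", "Holding", "Dicta"]
print(round(cohen_kappa(a, b), 3))  # → 0.375
```

Spans where annotators agree strongly are candidates for the Gold Standard; spans with low agreement point to annotations or guidelines that need refinement.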
Once we have a better sense of how this study goes, we plan to roll out a larger version with more cases. And this is only the start…. — Questions, Problems, and Observations —
Thanks to participants for letting us know about their problems and sending their observations.

How easy is it to learn to use the tool?
Take a look at the video to get a sense of this. With a little bit of practice, it is rather straightforward.

What if I don't agree with some of your annotations or features?
Write a comment or send us an email, and we will consider your comment. Try to be as clear and specific as you can. We are not lawyers, and we are dealing with a global community with local variation, so it is likely there will be some disagreement and variation.

Can I get the results of my annotations?
Our approach is to make individual contributions to the whole. So, you will be able to access annotated cases after the exercise. There will be further information on how to work with the material.

How many cases must I do?
You can do one, or you can do as many as we have (not many in the beta project).

How much time will it take?
About as long as it would take you to do a similar highlighting and annotation task with paper and markers.

What if I have a problem with using the tool or if the tool is buggy?
Be patient and try to work with the tool. Sometimes things go wrong. Write a comment or send us an email, and we will try to advise. Note: we are only consumers of GATE Teamware, so we are not responsible for the system.

How thoroughly should I annotate the cases?
The more cases that are annotated fully and accurately, the better. Apply the same diligence as you would to thoroughly and carefully analyse cases with pen and paper. As you will be the beneficiary of the work of others, so too should you work to benefit them.

Do we track good annotators and bad annotators?
We are interested in data in the aggregate, and are only interested in interannotator agreement and disagreement.
This information will help us better understand differences in how the cases are understood and annotated. That said, we can see how much time each person takes with each annotation task and can measure how they perform against other annotators or a gold standard. If we have bad annotators, we will see this in the results; we would contact the annotator and see how best to improve the situation. As we noted above, we are not sharing information with third parties.

I cannot log in with the username and password.
Please let me know if you have this problem, and I will look into it.

I can log in, but I cannot get the Java Web Start file to start.
This is a tough problem to address over the internet. Some people have no problem, but some do. Please let me know if you have this problem. Do check that you have followed the instructions (on the blog and in the movie).

I can log in and start the annotation tool, but I cannot get the task.
Please let me know, and I will look into it.

The text is too small and single spaced.
At the moment, there is nothing we can do about this. We'll try to keep it in mind for the future.

The highlighting tool is not easy to use. When I want to move from one annotated text to some new text, the tool doesn't move to the new text.
This is a bit of a problem with the tool, which is not entirely reliable in this functionality. Try to play around with it to see what works for you. One strategy I have found that improves performance is to annotate something; the annotation type then appears in the upper right-hand corner window among the list of annotations. When the problem occurs, it sometimes helps to toggle the annotations in that window off and on. This seems to clear the system a bit so that one can go on to the next annotation. Give it a try, and if you still have problems, please let me know.

I found it very challenging.
It is important for us to know this, to gauge whether the amount of text and the variety of annotations are manageable. We might reduce the number of annotations, breaking up the whole set into parts of the overall task.

Decision date is more important than hearing date, or at least should be provided in addition to hearing date.
This will probably be added in future iterations.

A participant, e.g. "Cone", was originally a defendant, but was dismissed before this appeal. I wonder if he should still be coded as "Defendant" or as some other role-holder.
Good observation. I'll have to consult further with some lawyers about this point.

There are sentences where the court introduced a fact and also appeared to reason using it. Is it right to code the whole sentence both as a legal fact and as a rationale?
Yes, this is the way to handle it. Double annotations are always possible.

A similar problem occurred where the court offered a fact but also put a gloss on it as to its legal significance.
Again, double annotations are always possible.

Some of the names of the categories were confusing or unclear. For example, using "Holding" for the name of the legal rule or principle was confusing ("Legal Rule" might be more intuitive).
This is another point on which we will need to consult further with other lawyers. There may also be some variation in terminology.

There is sometimes unclarity about role-players. A case involved a plaintiff, who was an appellee but also a cross-appellant, and a defendant, who was thus an appellant and cross-appellee. These can be coded so that one is plaintiff and appellee and the other defendant and appellant. But they could both have been coded as appellee and appellant, given the existence of the cross appeal.
Double (or more) annotating is fine.

Procedural History/Damages might be better framed as Procedural History/Remedies, as courts often provide injunctive relief or, as in this case, an accounting, as a remedy.
This is another point on which we will need to consult further with lawyers about terminology.

What if a case does not state any legal rules? Can implicit legal rules be annotated? For example, where novelty and non-obviousness are a sine qua non of a valid patent, one would not have known to mark some of the sentences as rationales.
This isn't a problem. If something is not in the case, then it is not annotated. We are not (yet) concerned with implicit information. But if you know the implicit information, then annotate it.

How can I automatically search for and annotate the same string with the same annotation?
In the instructional video, we wanted to keep the material short and to the point, so there are aspects of the annotation tool we did not cover. However, it is tedious to manually search for the same string and annotate it with the same annotation. Teamware's Annotation Editor has a tool to support automatic search and annotation. To see how to do this, we have the video here:
How should I annotate holdings, which may appear as holdings in cited cases and as part of the procedural history, as holdings in the current case, or as part of the rationale in the current case?
This is an interesting and subtle point for us, and we will have to have a full consultation with lawyers to decide. For the time being, there is no harm in multiple annotations, which we can then examine and work with later. — Paper —
If you are interested in some of the ideas behind this project, please see our paper: Semantic Annotations for Legal Text Processing using GATE Teamware
The paper will appear in May 2012 in the Proceedings of the LREC Conference Workshop on Semantic Processing of Legal Texts, Istanbul, Turkey. The exercise here is a version of the exercise proposed in the paper.
A shortlink to this blog page is: http://wyner.info/LanguageLogicLawSoftware/?p=1315 — Thanks for collaborating! —
— If you have any questions, please submit a comment! — — Update Note —
Updated July 29, 2013 to reflect Dan Katz's amended definitions for Holding. Updated in various ways July 12, 2013. The previous blog post of July 28, 2012 has been updated to note the participation of Dan Katz and his students at Michigan State University. — Honour Roll —
For the very first study, we would like to thank the following individuals who gave of their time and intelligence to carry out their tasks.
My colleagues and I have had two papers (one long and one short) accepted for presentation at the 24th International Conference on Legal Knowledge and Information Systems (JURIX 2011). The papers are available at the links.
On Rule Extraction from Regulations
Adam Wyner and Wim Peters
Rules in regulations such as found in the US Federal Code of Regulations can be expressed using conditional and deontic rules. Identifying and extracting such rules from the language of the source material would be useful for automating rulebook management and translating into an executable logic. The paper presents a linguistically-oriented, rule-based approach, which is in contrast to a machine learning approach. It outlines use cases, discusses the source materials, reviews the methodology, then provides initial results and future steps.
Populating an Online Consultation Tool
Sarah Pulfrey-Taylor, Emily Henthorn, Katie Atkinson, Adam Wyner, and Trevor Bench-Capon
The paper addresses the extraction, formalisation, and presentation of public policy arguments. Arguments are extracted from documents that comment on public policy proposals. Formalising the information from the arguments enables the construction of models and systematic analysis of the arguments. In addition, the arguments are represented in a form suitable for presentation in an online consultation tool. Thus, the forms in the consultation correlate with the formalisation and can be evaluated accordingly. The stages of the process are outlined with reference to a working example. Shortlink to this page.
By Adam Wyner
Until September 2009, I worked on the Estrella Project (The European project for Standardized Transparent Representations in order to Extend Legal Accessibility) at the University of Liverpool. One of the documents which I co-authored (with Trevor Bench-Capon) for the project was the ESTRELLA User Report, which is an open document about key elements of the project. In the context of commercial, academic, and governmental collaborations, many of the issues and topics from that project are still relevant, especially concerning the motivations and goals of open source materials for legal informatics. In order to circulate this discussion further afield, I have taken the liberty of reproducing an extract from the report. LKIF stands for the Legal Knowledge Interchange Format, which was a key deliverable in the project. For further documents from the project, see the Estrella Project website.
Overview
The Estrella Project (The European project for Standardized Transparent Representations in order to Extend Legal Accessibility) has developed a platform which allows public administrations to deploy comprehensive solutions for the management of legal knowledge. In reasoning about social benefits or taxation, public administrators must represent and reason with complex legislation. The platform is intended to support the representation of and reasoning about legislation in a way that can help public administrations to improve the quality and efficiency of their services. Moreover, given a suitable interface, the legislation can be made available for the public to interact with. For example, LKIF tools could be made available to citizens via the web to help them to assess their eligibility for social benefits as well as to fill out the appropriate application forms.
The platform has been designed to be open and standardised so that public administrations need not become dependent on proprietary products of particular vendors. Along the same lines, the platform supports interoperability among various components for legal knowledge-based systems, allowing public administrations to freely choose among the components. A standardised platform also enables a range of vendors to develop innovative products to suit particular market needs without having to be concerned with an all-encompassing solution, compatibility with other vendors, or being locked out of a strategic market by "monolithic" vendors. As well, the platform abstracts from the expression of legislation in different natural languages, thereby providing a common, abstract legal "lingua franca".
The main technical achievement of the Estrella Project is the development of a Legal Knowledge Interchange Format (LKIF), which represents legal information in a form which builds upon emerging XML-based standards of the Semantic Web. The project platform provides Application Programmer Interfaces (APIs) for interacting with legal knowledge-based systems using LKIF. LKIF provides formalisms for representing concepts ("ontologies"), inference rules, precedent cases, and arguments. An XML document schema for legislation has been developed, called MetaLex, which complements and integrates national XML standards for legislation. This format supports document search, exchange, and association among documents, as well as enforcing a link between legal sources and the legal knowledge systems which reason about the information in the sources. In addition, a reference inference engine has been developed which supports reasoning with legal knowledge represented in LKIF. The utility of LKIF as an interchange format for legal knowledge has been demonstrated with pilot tests of legal documents which are expressed in the proprietary formats of several vendors and then translated from the format of one vendor to that of another and back.
Background Context
The Estrella Project originated in the context of European Union integration, where:
The European Parliament passes EU wide directives which need to be incorporated into or related to the legislation of member states.
Goods, services, and citizens are free to move across open European borders.
Democratic institutions must be strengthened as well as be more responsive to the will of the citizenry.
Public administrations must be more efficient and economical.
In the EU, the legal systems of member states have been composed of heterogeneous, often conflicting, rules and regulations concerning taxes, employment, education, pensions, health care, property, trade, and so on. Integration of new EU legislation with existing legislation of the member states, as well as homogenisation of legal systems across the EU, has been problematic, complex, and expensive to implement. As the borders of member states open, the rules and regulations concerning the benefits and liabilities of citizens and businesses must move as people, goods, and services move. For example, laws concerning employment and pensions ought to be comparable across the member states so as to facilitate the movement of employees across national boundaries. In addition, there are more general concerns about improving the functionality of the legal system so as to garner public support for it, promoting transparency, compliance, and citizen involvement. Finally, the costs of administering the legal system by EU administrative departments, administrations of member states, and companies throughout the EU are significant and rising. The more complex and dynamic the legislative environment, the more burdensome the costs.
Purposes
Given this background context, the Estrella Project was initiated with the following purposes in mind:
to facilitate the integration of EU legal systems
to modernise public administration at the levels of the EU and within member states by supporting efficiency, transparency, accountability, accessibility, inclusiveness, portability, and simplicity of core governmental processes and services
to improve the quality of legal information by testing legal systems for consistency (are there contradictions between portions of the law?) and correctness (is the law achieving the goal it is specified for?).
to reduce the costs of public administration
to reduce private sector costs of managing their legal obligations
to encourage public support for democratic institutions by participation, transparency, and personalisation of services
to ease the mobility of goods, services, and EU citizens within the EU
to support businesses across EU member states
to provide the means to “modularise” the legal systems for different levels of EU legal structure, e.g. provide a “municipal government” module which could be amended to suit local circumstances
to support a range of governmental and legal processes across organisations and on behalf of citizens and businesses
to support a variety of reasoning patterns as needed across a range of resources (e.g. directives, legal case bases).
I gave a tutorial on natural language processing for legal resource management at the International Conference on Legal Information Systems (JURIX) 2009 in Rotterdam, The Netherlands. The slides are available below. Comments welcome.
The following people attended:
Andras Forhecz, Budapest University of Technology and Economics, Hungary
Ales Gola, Ministry of Interior of Czech Republic
Harold Hoffman, University Krems, Austria
Czeslaw Jedrzejek, Poznan University of Technology, Poland
Manuel Maarek, INRIA Grenoble, Rhone-Alpes
Michael Sonntag, Johannes Kepler University Linz, Austria
Vit Stastny, Ministry of Interior of Czech Republic
I thank the participants for their comments and look forward to continuing the discussions which we started in the tutorial.
At the link, one can find the slides. Comments are very welcome. The file is 2.2MB. The slides were originally prepared using OpenOffice's Impress, then converted to PowerPoint.
Natural Language Processing Techniques for Managing Legal Resources on the Semantic Web
There is a bit more in the slides than was presented at the tutorial, covering in addition ontologies, parsers, and semantic interpreters.
In the coming weeks, I will make available additional detailed instructions as well as gazetteers and JAPE rules. I also plan to continue to add additional text mining materials.
By Adam Wyner
Distributed under the Creative Commons Attribution-Non-Commercial-Share Alike 2.0
Next week, 16 December 2009, I am giving a three hour tutorial at JURIX (International Conference on Legal Knowledge and Information Systems) in Rotterdam, The Netherlands on Natural Language Processing Techniques for Managing Legal Resources on the Semantic Web. The tutorial description appears below. Further material from the tutorial will be presented on the blog.
Legal resources such as legislation, public notices, and case law are increasingly available on the internet. To be automatically processed by web services, the resources must be annotated using semantic web technologies such as XML, RDF, and ontologies. However, manual annotation is labour and knowledge intensive. Using natural language processing (NLP) techniques and systems, a significant portion of these resources can be automatically annotated. In this tutorial, we outline the motivations and objectives of NLP, give an overview of several accessible systems (General Architecture for Text Engineering, C&C/Boxer, Attempto Controlled English), provide examples of processing legal resources, and discuss future directions in this area.
Over the last couple of months, I have had discussions about text mining and annotating rules in legislation with several people (John Sheridan of The Office of Public Sector Information, Richard Goodwin of The Stationery Office, and John Cyriac of Compliance Track). While nothing yet concrete has resulted from these discussions, it is clearly a “hot topic”.
In the course of these discussions, I prepared a short outline of the issues and approaches, which I present below. Comments, suggestions, and collaborations are welcome.
Vision, context, and objectives
One of the main visions of artificial intelligence and law has been to develop a legislative processing tool. Such a tool has several related objectives:
[1.] To guide the drafter to write well-formed legal rules in natural language.
[2.] To automatically parse and semantically represent the rules.
[3.] To automatically identify and annotate the rules so that they can be extracted from a corpus of legislation for web-based applications.
[4.] To enable inference, modeling, and consistency testing with respect to the rules.
[5.] To reason with respect to domain knowledge (an ontology).
[6.] To serve the rules on the web so that users can use natural language to input information and receive determinations.
While no such tool exists, there has been steady progress on understanding the problems and developing working software solutions. In early work (see The British nationality act as a logic program (1986)), an act was manually translated into a program, allowing one to draw inferences given ground facts. Haley is a software and service company which provides a framework which partially addresses 1, 2, 4, and 6 (see Policy Automation). Some research addresses aspects of 3 (see LKIF-Core Ontology). Finally, there are XML annotation schemas for legislation (and related input support) such as The Crown XML Schema for Legislation and Akoma Ntoso, both of which require manual input. Despite these advances, there is much progress yet to be made. In particular, no results fulfill [3.].
In consideration of [3.], the primary objective of this proposal is to use the General Architecture for Text Engineering (GATE) framework in order to automatically identify and annotate legislative rules from a corpus. The annotation should support web-based applications and be consistent with semantic web mark ups for rules, e.g. RuleML. A subsidiary objective is to define an authoring template which can be used within existing authoring applications to manually annotate legislative rules.
Benefits
Attaining these objectives would:
Support automated creation, maintenance, and distribution of rule books for compliance.
Contribute to the development of a legislative processing tool.
Make legislative rules accessible for web-based applications. For example, given other annotations, one could identify rules that apply with respect to particular individuals in an organisation along with relevant dates, locations, etc.
Enable further processing of the rules such as removing formatting, parsing the content of the rules, and representing them semantically.
Allow an inference engine to be applied over the formalised rule base.
Make legislation more transparent and communicable among interested parties such as government departments, EU governments, and citizenry.
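As a toy illustration of the web-application benefit above, suppose each extracted rule carries annotation metadata such as the role it applies to, the jurisdiction, and an effective date; applications can then select the rules that apply in a given situation. The schema, field names, and rule texts below are invented for illustration.

```python
# Toy sketch: annotated rules selected by role, jurisdiction, and date.
# The metadata schema and the example rules are invented.
from datetime import date

annotated_rules = [
    {"id": "r1", "text": "The data controller must register annually.",
     "role": "data_controller", "jurisdiction": "UK",
     "effective": date(2000, 3, 1)},
    {"id": "r2", "text": "An employee may request flexible working.",
     "role": "employee", "jurisdiction": "UK",
     "effective": date(2003, 4, 6)},
]

def applicable(rules, role, jurisdiction, on):
    """Select the annotated rules that apply to a role on a given date."""
    return [r for r in rules
            if r["role"] == role
            and r["jurisdiction"] == jurisdiction
            and r["effective"] <= on]

for r in applicable(annotated_rules, "employee", "UK", date(2010, 1, 1)):
    print(r["id"], "-", r["text"])
```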
To attain the objectives, we propose the following phases, where the numbers represent weeks of effort:
Create a relatively small sample corpus to scope the study.
Manually identify the forms of legislative rules within the corpus.
Develop or adapt an annotation scheme for rules.
Apply the analysis tools of GATE and annotate the rules.
Validate that GATE annotates the rules as intended.
Apply the annotation system to a larger corpus of documents.
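To give a flavour of the identification and annotation phases, here is a toy sketch in the spirit of a GATE/JAPE grammar: candidate rule sentences are spotted by their deontic modal ("must", "shall", "may") and tagged with a category. The two-category scheme and the example sentences are a deliberate simplification invented for illustration.

```python
# Toy sketch: spot candidate legislative rules by their deontic modal
# and tag them. The category scheme is a deliberate simplification.
import re

DEONTIC = {
    "Obligation": re.compile(r"\b(must|shall)\b", re.IGNORECASE),
    "Permission": re.compile(r"\bmay\b", re.IGNORECASE),
}

def tag_rules(sentences):
    """Return (sentence, category) pairs for sentences that look like rules."""
    tagged = []
    for s in sentences:
        for category, pattern in DEONTIC.items():
            if pattern.search(s):
                tagged.append((s, category))
                break
    return tagged

sentences = [
    "The licensee must display the licence at the premises.",
    "The authority may grant an exemption.",
    "This section comes into force on 1 April.",
]
print(tag_rules(sentences))
```

Real legislative drafting uses many more constructions than bare modals, which is exactly why the manual identification phase precedes the automated one.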
For each phase, we would produce a summary of results, noting where difficulties are encountered and ways they might be addressed.

Extending the work
The work can be extended in a variety of ways:
Apply the GATE rules to a larger corpus with more variety of rule forms.
Process the rules for semantic representation and inference.
Take into consideration defeasibility and exceptions.
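The defeasibility extension can be sketched in miniature: a rule applies unless one of its exceptions holds. The representation and the park-vehicles example below are invented for illustration.

```python
# Toy sketch of defeasibility: an exception defeats a general rule.
# The representation and example are invented for illustration.

def conclude(facts, rules):
    """Apply each rule unless one of its exceptions holds."""
    conclusions = set()
    for conditions, exceptions, conclusion in rules:
        if conditions <= facts and not (exceptions & facts):
            conclusions.add(conclusion)
    return conclusions

rules = [
    # "Vehicles must not enter the park, except emergency vehicles."
    ({"is_vehicle"}, {"is_emergency_vehicle"}, "entry_prohibited"),
]

print(conclude({"is_vehicle"}, rules))
print(conclude({"is_vehicle", "is_emergency_vehicle"}, rules))
```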
In this post, I present User Manual notes for GATE’s Onto Root Gazetteer (ORG) and references to ORG. In Discussion of GATE’s Onto Root Gazetteer, I discuss aspects of Onto Root Gazetteer which I found interesting or problematic. These notes and discussion may be of use to those researchers in legal informatics who are interested in text mining and annotation for the semantic web.
Thanks to Diana Maynard, Danica Damljanovic, Phil Gooch, and the GATE User Manual for comments and materials which I have liberally used. Errors rest with me (and please tell me where they are so I can fix them!).

Purpose
Onto Root Gazetteer links text to an ontology by creating Lookup annotations which come from the ontology rather than a default gazetteer. The ontology is preprocessed to produce a flexible, dynamic gazetteer; that is, it is a gazetteer which takes into account alternative morphological forms and can be added to. An important advantage is that text can be annotated as an individual of the ontology, thus facilitating the population of the ontology.
Besides being flexible and dynamic, ORG has some advantages over other gazetteers:
It is more richly structured (think of it as a gazetteer containing other gazetteers).
It allows one to relate textual and ontological information by adding instances.
It gives one richer annotations that can be used for further processes.
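The preprocessing that ORG performs can be sketched in miniature: CamelCase ontology resource names are split into words, reduced to root forms, and matched against the root forms of the text. The real ORG uses GATE's morphological analyser and a full ontology model; here a crude suffix-stripping stemmer stands in, and the mini-ontology is invented for illustration.

```python
# Toy sketch of ORG's preprocessing: ontology resource names become
# gazetteer entries matched against text via root forms.
# A crude stemmer stands in for GATE's morphological analyser.
import re

def split_camel(name):
    """CamelCase ontology name -> lowercase word list."""
    return re.findall(r"[A-Z][a-z]*|[a-z]+", name)

def root(word):
    """Crude stand-in for the morphological analyser's root feature."""
    return word.lower().rstrip("s")

def lookups(text, ontology):
    """Return (phrase, ontology_resource) pairs found in the text."""
    words = text.split()
    roots = [root(w) for w in words]
    hits = []
    for name in ontology:
        entry = [root(w) for w in split_camel(name)]
        n = len(entry)
        for i in range(len(roots) - n + 1):
            if roots[i:i + n] == entry:
                hits.append((" ".join(words[i:i + n]), name))
    return hits

print(lookups("language resources and parameters", {"LanguageResource": "class"}))
```

Note how the plural "resources" still matches the class name LanguageResource because both sides are compared by root form; this is the "flexible" part of the flexible gazetteer.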
In the following, we present step-by-step instructions for ‘rolling your own’, then show the results of the ‘prepackaged’ example that comes with the plugin.

Setup
Step 1. Add (if not already used) the Onto Root Gazetteer plugin to GATE following the usual plugin instructions.
Step 2. Add (if not already used) the Ontology Tools (OWLIM Ontology LR, OntoGazetteer, GATE Ontology Editor, OAT) plugin. ORG uses ontologies, so one must have these tools to load them as language resources.
Step 3. Create (or load) an ontology with OWLIM (see the instructions on the ontologies). This is the ontology that is the language resource that is then used by Onto Root Gazetteer. Suppose this ontology is called myOntology. It is important to note that OWLIM can only use OWL-Lite ontologies (see the documentation about this). Also, I succeeded in loading an ontology only from the resources folder of the Ontology_Tools plugin (rather than from another drive); I don’t know if this is significant.
Step 4. In GATE, create processing resources with default parameters:
Document Reset PR
RegEx Sentence Splitter (or the ANNIE Sentence Splitter, though that one is likely to run slower)
ANNIE English Tokeniser
ANNIE POS Tagger
GATE Morphological Analyser
Step 5. When all these PRs are loaded, create an Onto Root Gazetteer PR and set the initial parameters as follows. The mandatory ones are (though some are set by default):
Ontology: select previously created myOntology
Tokeniser: select previously created Tokeniser
POSTagger: select previously created POS Tagger
Morpher: select previously created Morpher.
Step 6. Create another PR, a Flexible Gazetteer. Among the initial parameters, it is mandatory to select the previously created OntoRootGazetteer for gazetteerInst. For the other parameter, inputFeatureNames, click the button on the right; when prompted with a window, add ‘Token.root’ in the provided text box, then click the Add button. Click OK, give a name to the new PR (optional), and then click OK again.
Step 7. To create an application, right click on Applications, then New –> Pipeline (or Corpus Pipeline). Add the following PRs to the application in this order:
Document Reset PR
RegEx Sentence Splitter
ANNIE English Tokeniser
ANNIE POS Tagger
GATE Morphological Analyser
Flexible Gazetteer (the one created in Step 6, which wraps the Onto Root Gazetteer)
Step 8. Run the application over the selected corpus.
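Conceptually, the application assembled in Step 7 just runs its PRs in order, each one enriching a shared document. A toy sketch of that control flow (the PR behaviours below are drastically simplified stand-ins, not GATE code):

```python
# Toy sketch of a corpus pipeline: each PR enriches a shared document
# record; the application runs them in order.
# The PR behaviours are drastically simplified stand-ins.

def reset(doc):    doc["annotations"] = []
def split(doc):    doc["sentences"] = [doc["text"]]
def tokenise(doc): doc["tokens"] = doc["text"].split()
def pos_tag(doc):  doc["pos"] = ["NN"] * len(doc["tokens"])
def morph(doc):    doc["roots"] = [t.lower().rstrip("s") for t in doc["tokens"]]

pipeline = [reset, split, tokenise, pos_tag, morph]

doc = {"text": "language resources and parameters"}
for pr in pipeline:
    pr(doc)
print(doc["roots"])
```

The order matters for the same reason it does in GATE: the morphological analyser needs tokens and POS tags before it can add the Token.root features that the Flexible Gazetteer consumes.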
Step 9. Inspect the results. Look at the Annotation Set with Lookup and also the Annotation List to see how the annotations appear.

Small Example
The ORG plugin comes with a demo application which sets up not only all the PRs and LRs (the text, corpus, and ontology), but also the application itself, ready to run. This is the file exampleApp.xgapp, which is in the resource folder of the plugin (Ontology_Based_Gazetteer). To start it, start GATE with a clean slate (no other PRs, LRs, or applications), right click on Applications, select Restore application from file, then load the file from the folder just given.
The ontology used for the illustration describes GATE itself, giving the classes, subclasses, and instances of the system. While the ontology is loaded along with the application, one can also find it here. The text is simple (and comes with the application): language resources and parameters.
FIGURE 1 (missing at the moment)
FIGURE 2 (missing at the moment)
One can see that the phrase “language resources” is annotated with respect to the class LanguageResource, “resources” with GATEResource, and “parameters” with ResourceParameter. We discuss this further below.
One further aspect is important and useful. Since the ontology tools have been loaded and a particular ontology has been used, one can not only view the ontology (open the OAT tab in the window with the text), but also annotate the text with respect to the ontology: highlight some text, and a popup menu allows one to select how to annotate it. With this, one can add instances (or classes) to the ontology.

Documentation
One can consult the following for further information about how the gazetteer is made, among other topics: