BBC's Radio 4 on Vagueness in Law

On the BBC Radio 4 Analysis program, there was an episode about sorites paradoxes. These are the sorts of paradoxes that arise for categories that have no sharp boundaries:

One grain of sand is not a heap of sand; two grains of sand are not a heap of sand; …; adding one more grain of sand to some sand is not enough to make a heap of sand; yet, at some point, we agree we have a heap of sand.

So, where are the boundaries?
Part of what is interesting to me is that while we may struggle to provide a formal, systematic analysis, we seem to have strong intuitions that (more or less, and in fact mostly, all else being equal) agree with the intuitions of others.
In law, such issues of vagueness also arise, and because they lead to legal contention, they are important to decide. In this radio broadcast, there is a fun discussion of the sorites paradoxes and some mention of how legislators address them; in particular, just how can legislators ‘define’ nudity?
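A sharp cutoff makes the tension concrete. The toy Python sketch below uses an arbitrary, purely hypothetical cutoff of 10,000 grains; it finds the single grain at which ‘not a heap’ flips to ‘heap’, which is exactly the step our intuitions reject:

```python
# Any classifier with a sharp cutoff must have a boundary n where one
# grain flips "not a heap" into "heap": the step intuition rejects.
# The cutoff of 10,000 grains is arbitrary, which is the point: any
# sharp cutoff is arbitrary.

CUTOFF = 10_000

def is_heap(grains):
    return grains >= CUTOFF

# Find the n where adding a single grain changes the verdict.
boundary = next(n for n in range(1, 20_000)
                if not is_heap(n) and is_heap(n + 1))
print(boundary)  # 9999: one grain makes all the difference
```

Whatever cutoff one picks, such a boundary exists; the paradox is that no particular boundary seems defensible.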
Analysis Extra: The Philosopher’s Arms: Sorites’ Heap 10 Sep 2012
The program is about 30 minutes long and should play in your browser. The broadcast content is copyright the BBC. Radio 4 is great!

General Architecture for Text Engineering Summer School 2011

I had the opportunity (thanks Katie Atkinson!) to attend the General Architecture for Text Engineering Summer School 2011. The GATE people have really developed this summer school very well. It was well attended (70 participants?) and well structured (three sections and various talks). GATE attracts a good, outgoing, helpful, and diverse group of people. A whole week of GATE and never a dull moment. Geeky, but true. And text analytics seems to be a growing area (at least according to the May 2011 issue of New Scientist, which lists it as one of seven “disruptive” technologies; I’ve always wanted to be bad).
As this was my second time at the GATE summer school, I sat in on the Advanced GATE session. All the slides and all the materials for the hands-on exercises are available on the GATE Summer School Wiki. In my week, we covered the following:

  • Module 9: Ontologies and Semantic Annotation
    • Introduction to Ontologies
    • GATE Ontology Editor
    • GATE Ontology Annotation Tools for Entities and Relations
    • Automatic Semantic Annotation in GATE
    • Measuring Performance
    • Using the Large Knowledge Base gazetteer (LKB)
  • Module 10: Advanced GATE Applications
    • Customising ANNIE
    • Working with different languages
    • Complex applications
    • Conditional Processing
    • Section-by-section processing
  • Module 11: Machine Learning
    • Machine learning and evaluation concepts
    • Using ML in GATE
    • Engines and algorithms
    • Entity learning hands-on session
    • Relation extraction hands-on session
  • Module 12: Opinion Mining
    • Introduction to opinion mining and sentiment analysis
    • Using GATE tools to perform sentiment analysis
    • Machine learning for sentiment analysis hands-on session
    • Future directions for opinion mining
  • Module 13: Semantic Technology and Linked Open Data: Basics, Tools, and Applications
    • Linked Open Data: Introduction of key principles and some key tools (FactForge, LinkedLifeData)
    • Semantic Annotation with Linked Data
    • Semantic Search

By Adam Wyner

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Presentation at Legal Know-how Workshop, Nov. 10, 2010

I have been invited to make a presentation on Textual information extraction and ontologies for legal case-based reasoning at a Legal Know-how Workshop, an industry-oriented event organised by ISKO-UK, the UK chapter of the International Society for Knowledge Organization.
Date: 10 November 2010
Time: 13:30-19:00
Venue: University College London
Medical Sciences Building
A. V. Hill Lecture Theatre
Gower Street
London, WC1E 6BT
See the workshop website for registration fee (either free or under £25) and booking.
This will be a very interesting opportunity to hear from and talk with industry consultants and experts about the latest developments in legal knowledge management. My thanks to Stella Dextre Clarke of ISKO-UK for organising the event and inviting me to take part.

Programme

13:30 Registration
14:00 Welcome from ISKO-UK by Stella Dextre Clarke
14:05 Legal knowledge – the practitioner’s viewpoint
Melanie Farquharson, 3Kites Consulting

This session will focus on the practical situations in which lawyers look for knowledge in order to deliver legal services to their clients. It will identify some typical ‘use cases’ and consider ways in which knowledge can be delivered to the practitioner – even without them having to look for it.

14:35 Why lawyers need taxonomies – adventures in organising legal knowledge
Kathy Jacob & Lynley Barker, Pinsent Masons LLP;
Graham Barbour & Mark Fea, LexisNexis

This presentation will cover the practical issues encountered by a law firm in its quest to improve findability of one of its key resources – knowledge and information. We will discuss our approach to building taxonomies, the tools and processes deployed and how we anticipate our taxonomy will be applied and consumed by lawyers and publishers.
The LexisNexis part of the presentation will focus on the challenges of building and applying legal taxonomies to suit the breadth and depth of content they provide online. It will also examine ways in which taxonomies can be surfaced in the user interface and help to drive compelling functionality that improves the user’s search experience.

15:20 Taxonomy management at Clifford Chance
Mats Bergman, Clifford Chance

This talk will describe how taxonomy management works in practice at Clifford Chance. As an increasing number of core knowledge resources are making use of the same set of firm-wide taxonomies, the increased interdependencies necessitate the implementation of a controlled process for updating the taxonomies. A simple governance model will be presented. Some thoughts will follow on the evolution of taxonomy development within a larger organisation and the current challenge of using social tagging in conjunction with controlled vocabularies.

15:50 Refreshments (Lower Refectory)
16:20 Textual information extraction and ontologies for legal case-based reasoning
Adam Wyner, University of Liverpool

This talk gives a brief overview of current developments and prospects in two related areas of the legal semantic web for legal cases – textual information extraction and ontologies. Textual information extraction is a process of automatically annotating and extracting textual information from the legal case base (precedents), thereby identifying elements such as participants, the roles the participants play, the factors which were considered in arriving at a decision, and so on. The information is valuable not only for search (to find applicable precedents), but also to populate an ontology for legal case-based reasoning. An ontology is a formal representation of key aspects of the knowledge of legal professionals with which we can reason (e.g. given an assertion that something is a legal case, we can infer other properties) and with respect to which we can write rules (e.g. reasoning using case factors to arrive at a legal decision). Since it is expensive to manually populate an ontology (meaning to read cases and input the data into the ontology), we use textual information extraction to automatically populate the ontology. We conclude with an appeal for open source, collaborative development of legal knowledge systems among partners in academia, industry, and government.

17:00 Collaboration across boundaries
Gwenda Sippings & Gerard Bredenoord, Linklaters LLP

In this presentation, we will look at approaches to managing legal know-how in a major global law firm. We will describe several boundaries that we have to consider when organising our know-how, including boundaries between professionals, countries, internal and external resources and the well debated boundary between information and knowledge. We will also share some of the ways in which we are making our know-how available to the fee earners and other professionals in the firm, using social and technological solutions.

17:35 Reconciling the taxonomy needs of different users
Derek Sturdy, Tikit Knowledge Services

The last decade has seen the development of a substantial number of legal know-how and knowledge databases. It has also raised a serious question about whether the metadata, and especially the taxonomies, applied to the various knowledge items should be tailored to the particular needs of end-users or whether, so to speak, "one size can fit all". In particular, this talk will discuss the overlapping, but discrete, needs of those using knowledge resources primarily for legal drafting and document production and of those conducting legal research, and will address the relative value today (as opposed to in 2000) of the effort put into internal metadata creation for those two sorts of end-users.

By Adam Wyner
Distributed under the Creative Commons
Attribution-Non-Commercial-Share Alike 2.0

Legal Case Ontology OWL file and Case Graphic

In conjunction with the paper by Rinke Hoekstra and me (as previously noted on this blog), we are making the ontology and a graphic of Popov v. Hayashi available:
Legal Case Ontology v9
This is the OWL file. It was developed using Protege version 4, a knowledge acquisition and editing tool.
As we have not previously made this a publicly available ontology, consider it a beta release. Comments very welcome.
The graphic is the ontological representation of Popov v. Hayashi; it is a pdf file.
Ontological Graphic for Popov v. Hayashi
By Adam Wyner

New Article on Legal Case Ontologies in Knowledge Engineering Review

Rinke Hoekstra and I have a paper which will appear in Knowledge Engineering Review.
A Legal Case OWL Ontology with an Instantiation of Popov v. Hayashi
Adam Wyner and Rinke Hoekstra
To appear in Knowledge Engineering Review
Abstract
The paper provides an OWL ontology for legal cases with an instantiation of the legal case Popov v. Hayashi. The ontology makes explicit the conceptual knowledge of the legal case domain, supports reasoning about the domain, and can be used to annotate the text of cases, which in turn can be used to populate the ontology. A populated ontology is a case base which can be used for information retrieval, information extraction, and case-based reasoning. The ontology contains not only elements for indexing the case (e.g. the parties, jurisdiction, and date), but also elements used to reason to a decision, such as argument schemes and the components input to the schemes. We use the Protege ontology editor and knowledge acquisition system, current guidelines for ontology development, and tools for visual and linguistic presentation of the ontology.
By Adam Wyner

Forthcoming Article: On Controlled Natural Languages: Properties and Prospects

I am a co-author of the forthcoming article On Controlled Natural Languages: Properties and Prospects. From the abstract:

This collaborative report highlights the properties and prospects of Controlled Natural Languages (CNLs). The report poses a range of questions concerning the goals of the CNL, the design, the linguistic aspects, the relationships and evaluation of CNLs, and the application tools. In posing the questions, the report attempts to structure the field of CNLs and to encourage further systematic discussion by researchers and developers.

The reference and link to the article:
A. Wyner, K. Angelov, G. Barzdins, D. Damljanovic, N. Fuchs, S. Hoefler, K. Jones, K. Kaljurand, T. Kuhn, M. Luts, J. Pool, M. Rosner, R. Schwitter, and J. Sowa. On Controlled Natural Languages: Properties and Prospects, to appear in: N.E. Fuchs (ed.), Workshop on Controlled Natural Languages, CNL 2009, LNCS/LNAI 5972, Springer, 2010.

Instructions for GATE's Onto Root Gazetteer

In this post, I present User Manual notes for GATE’s Onto Root Gazetteer (ORG) and references to ORG. In Discussion of GATE’s Onto Root Gazetteer, I discuss aspects of Onto Root Gazetteer which I found interesting or problematic. These notes and discussion may be of use to those researchers in legal informatics who are interested in text mining and annotation for the semantic web.
Thanks to Diana Maynard, Danica Damljanovic, Phil Gooch, and the GATE User Manual for comments and materials which I have liberally used. Errors rest with me (and please tell me where they are so I can fix them!).
Purpose
Onto Root Gazetteer links text to an ontology by creating Lookup annotations which come from the ontology rather than a default gazetteer. The ontology is preprocessed to produce a flexible, dynamic gazetteer; that is, it is a gazetteer which takes into account alternative morphological forms and can be added to. An important advantage is that text can be annotated as an individual of the ontology, thus facilitating the population of the ontology.
Besides being flexible and dynamic, ORG has some advantages over other gazetteers:

  • It is more richly structured (see it as a gazetteer containing other gazetteers).
  • It allows one to relate textual and ontological information by adding instances.
  • It gives one richer annotations that can be used for further processes.
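The last point, feeding annotations back into the ontology, can be sketched in a few lines. In this toy Python (the namespace and class name are hypothetical, and the ‘triple store’ is just a set), annotating a text span against an ontology class yields a new instance triple, which is the sense in which annotation facilitates ontology population:

```python
# Toy sketch of ontology population from annotation: an annotated text
# span becomes an instance of the ontology class it was matched to.
# The namespace and class name are invented for illustration.

NS = "http://example.org/legal#"
triples = set()

def populate(span_text, class_uri):
    """Mint an instance URI for the span and record its class and label."""
    instance = NS + span_text.replace(" ", "_")
    triples.add((instance, "rdf:type", class_uri))
    triples.add((instance, "rdfs:label", span_text))
    return instance

populate("Popov v. Hayashi", NS + "LegalCase")
for t in sorted(triples):
    print(t)
```

A real system would of course write proper RDF through an ontology API rather than tuples in a set, but the flow from annotated span to ontology instance is the same.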

In the following, we present step-by-step instructions for ‘rolling your own’, then show the results of the ‘prepackaged’ example that comes with the plugin.
Setup
Step 1. Add (if not already used) the Onto Root Gazetteer plugin to GATE following the usual plugin instructions.
Step 2. Add (if not already used) the Ontology Tools (OWLIM Ontology LR, OntoGazetteer, GATE Ontology Editor, OAT) plugin. ORG uses ontologies, so one must have these tools to load them as language resources.
Step 3. Create (or load) an ontology with OWLIM (see the instructions on the ontologies). This is the ontology that is the language resource that is then used by Onto Root Gazetteer. Suppose this ontology is called myOntology. It is important to note that OWLIM can only use OWL-Lite ontologies (see the documentation about this). Also, I succeeded in loading an ontology only from the resources folder of the Ontology_Tools plugin (rather than from another drive); I don’t know if this is significant.
Step 4. In GATE, create processing resources with default parameters:

  • Document Reset PR
  • RegEx Sentence Splitter (or the ANNIE Sentence Splitter, though that one is likely to run slower)
  • ANNIE English Tokeniser
  • ANNIE POS Tagger
  • GATE Morphological Analyser

Step 5. When all these PRs are loaded, create an Onto Root Gazetteer PR and set its initial parameters. The mandatory ones are as follows (though some are set as defaults):

  • Ontology: select previously created myOntology
  • Tokeniser: select previously created Tokeniser
  • POSTagger: select previously created POS Tagger
  • Morpher: select previously created Morpher.

Step 6. Create another PR, a Flexible Gazetteer. Among the initial parameters, it is mandatory to select the previously created OntoRootGazetteer for gazetteerInst. For the other parameter, inputFeatureNames, click the button on the right and, when prompted with a window, add ‘Token.root’ in the text box provided, then click the Add button. Click OK, give a name to the new PR (optional), and then click OK again.
Step 7. To create an application, right click on Applications, then New –> Pipeline (or Corpus Pipeline). Add the following PRs to the application in this order:

  • Document Reset PR
  • RegEx Sentence Splitter
  • ANNIE English Tokeniser
  • ANNIE POS Tagger
  • GATE Morphological Analyser
  • Flexible Gazetteer

Step 8. Run the application over the selected corpus.
Step 9. Inspect the results. Look at the Annotation Set with Lookup and also the Annotation List to see how the annotations appear.
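Conceptually, the pipeline reduces each token to its morphological root, derives root sequences from names in the ontology, and emits Lookup annotations where they match. The toy Python sketch below illustrates that idea on full-label matches only; the mini-lemmatiser and the two-class ‘ontology’ are invented for illustration, and GATE’s actual preprocessing is far richer:

```python
import re

# Toy illustration of what the pipeline does conceptually: reduce
# tokens to morphological roots, derive root sequences from CamelCase
# ontology class names, and emit Lookup-style annotations on matches.
# The lemmatiser and two-class "ontology" are invented stand-ins.

TOY_ONTOLOGY = {
    "LanguageResource": "http://gate.ac.uk/ns/gate-ontology#LanguageResource",
    "ResourceParameter": "http://gate.ac.uk/ns/gate-ontology#ResourceParameter",
}

def root(token):
    """Crude stand-in for the GATE Morphological Analyser."""
    token = token.lower()
    if token.endswith("ies") and len(token) > 4:
        return token[:-3] + "y"
    if token.endswith("s") and len(token) > 3:
        return token[:-1]
    return token

def label_roots(class_name):
    """Split a CamelCase class name into its lower-case root words."""
    return [root(w) for w in re.findall(r"[A-Z][a-z]*", class_name)]

def annotate(text):
    tokens = text.split()
    roots = [root(t) for t in tokens]
    annotations = []
    for cls, uri in TOY_ONTOLOGY.items():
        target = label_roots(cls)
        n = len(target)
        for i in range(len(roots) - n + 1):
            if roots[i:i + n] == target:
                annotations.append((" ".join(tokens[i:i + n]), cls, uri))
    return annotations

print(annotate("language resources and parameters"))
```

Here “language resources” matches the class LanguageResource even though neither token is an exact label match, because both are compared at the level of morphological roots; that is the ‘flexible’ part of the Flexible Gazetteer.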
Small Example
The ORG plugin comes with a demo application which not only sets up all the PRs and LRs (the text, corpus, and ontology), but also provides the application ready to run. This is the file exampleApp.xgapp, which is in the resources folder of the plugin (Ontology_Based_Gazetteer). To start, launch GATE with a clean slate (no other PRs, LRs, or applications), right click on Applications, choose Restore application from file, and load the file from the folder just given.
The ontology used for illustration is for GATE itself, giving the classes, subclasses, and instances of the system. While the ontology is loaded along with the application, one can also find it here. The text is simple (and comes with the application): language resources and parameters.
FIGURE 1 (missing at the moment)
FIGURE 2 (missing at the moment)
One can see that the token “language resources” is annotated with respect to the class LanguageResource, “resources” is annotated with GATEResource, and “parameters” is annotated with ResourceParameter. We discuss this further below.
One further aspect is important and useful. Since the ontology tools have been loaded and a particular ontology has been used, one can not only see the ontology (open the OAT tab in the window with the text), but one can annotate the text with respect to the ontology — highlight some text and a popup menu allows one to select how to annotate the text. With this, one can add instances (or classes) to the ontology.
Documentation
One can consult the GATE User Manual and the ORG references mentioned above for further information about how the gazetteer is made, among other topics.

Discussion
See the related post Discussion of GATE’s Onto Root Gazetteer.
By Adam Wyner

Discussion of GATE's Onto Root Gazetteer

In Instructions for GATE’s Onto Root Gazetteer, I give the information needed to set up the Onto Root Gazetteer. In this post, I discuss aspects of the Onto Root Gazetteer that I found interesting or problematic.
For me, the documentation was not helpful: too much technical information was provided (e.g. preprocessing the ontology) rather than just the steps needed to get it running. Also, no walk-through example was clearly illustrated. I would still like (and will provide in the near future) a richer text (a nice paragraph) and a simpler ontology (a couple of classes, subclasses, object and data properties, and individuals) to illustrate fully just what is done.
Though I have it running, there are several questions (and partial answers or musings):

  • What is the annotation relative to the ontology good for?
  • What is the difference between gazetteers derived from ontologies and default gazetteers?
  • What are the selection criteria for annotating the tokens?
  • What is the relationship between the annotated text and the ontology?

Concerning the first point, presumably more annotations allow more processing capabilities. A (simple) example would be very helpful.
Concerning the second point, matters are more complex (to my mind). First, default gazetteers (or flexible gazetteers for that matter) are flat lists (a list containing no sublists as parts) where the items in the list are annotated as per the properties of the list; for example, if we have a gazetteer for Organisation (call this the header of the list) which lists IBM, BBC, Hackney Council (call these the items of the list), then every token of IBM, BBC, and Hackney Council found in the corpus will be annotated Organisation. If there is a token organisation in the corpus, it will not be annotated with Organisation; similarly, no token of IBM in the corpus is annotated IBM. The list categorises, in effect, IBM, BBC, and Hackney Council as of the type Organisation.
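That flat-list behaviour can be sketched in a few lines of Python (using the example items above):

```python
# A default gazetteer as a flat list: every item found in the text is
# annotated with the list's header type, never with the item string
# itself, and the header word itself is not annotated.

GAZETTEER = {"Organisation": ["IBM", "BBC", "Hackney Council"]}

def annotate_flat(tokens):
    annotations = []
    for header, items in GAZETTEER.items():
        for i, tok in enumerate(tokens):
            if tok in items:
                annotations.append((i, tok, header))
    return annotations

tokens = ["IBM", "hired", "staff", "from", "the", "BBC", "organisation"]
print(annotate_flat(tokens))
# "IBM" and "BBC" are annotated Organisation; the token "organisation"
# is not, because it is a header, not an item in the list.
```

(For simplicity the sketch matches single tokens only, so a multiword item like “Hackney Council” would need span matching in a real gazetteer.)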
ORG works differently (I believe, but may be wrong), but these points are not made in the documentation. First, a gazetteer which is derived from an ontology preserves the subsumption hierarchy of the ontology, giving us a list of lists. Such a gazetteer is a taxonomy of terminology, which is not the same as an ontology (though frequently mistaken to be identical). Second, if a token in the text is found to (flexibly) match an item in the gazetteer, then the token is annotated with that item, meaning that if the string IBM is a token in our text and an item in the gazetteer, then the token is annotated IBM. In these respects, ORGs work differently from other gazetteers.
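The structural difference can be sketched as follows (toy Python; the taxonomy entries are invented): an ontology-derived gazetteer is a list of lists, and a match is annotated with the entry it matched, together with its place in the hierarchy, rather than with a flat list header alone:

```python
# An ontology-derived gazetteer preserves the subsumption hierarchy
# (a list of lists): a matched token is annotated with the entry it
# matched, carrying its whole path in the hierarchy.

TAXONOMY = {
    "Organisation": {
        "Company": {"IBM": {}},
        "Broadcaster": {"BBC": {}},
    }
}

def find(term, node, path=()):
    """Return the hierarchy path down to a matching entry, if any."""
    for name, children in node.items():
        here = path + (name,)
        if name == term:
            return here
        hit = find(term, children, here)
        if hit:
            return hit
    return None

print(find("IBM", TAXONOMY))   # ('Organisation', 'Company', 'IBM')
print(find("BBC", TAXONOMY))   # ('Organisation', 'Broadcaster', 'BBC')
```

So a token IBM is annotated IBM (with Company and Organisation recoverable from the path), rather than merely being typed Organisation as in the flat case.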
The third question might be addressed in the richer documentation concerning ORG. It relates to observations concerning the results of the example application. Consider the following. The token “language resources” has the annotation:
URI=http://gate.ac.uk/ns/gate-ontology#LanguageResource, heuristic_level=0, majorType=, propertyURI=http://www.w3.org/2000/01/rdf-schema#label, type=class
The token “resources” has the annotation:
URI=http://gate.ac.uk/ns/gate-ontology#GATEResource, heuristic_level=0, majorType=, propertyURI=http://www.w3.org/2000/01/rdf-schema#label, type=class
And the token “parameters” has the annotation:
URI=http://gate.ac.uk/ns/gate-ontology#ResourceParameter, heuristic_level=0, majorType=, propertyURI=http://www.w3.org/2000/01/rdf-schema#label, type=class
We see that the tokens in the text are annotated in relation to the ontology. Yet it is not clear why the token “resources” is not annotated with LanguageResource or ResourceParameter since these are components of the ORG as well. Likely there is some prioritising among the annotations that we need to learn.
Finally, concerning the last question, matters are somewhat unclear (to me), largely because the line between annotations, gazetteers, and ontologies is blurred, and for me the key unclarity centres on annotations in the text that match items in the gazetteer. Consider the issue from a different point of view. ORG was developed in the context of a project to support ontology development from text — find terms and relations which are candidates for the ontology, then (if one wants) use the terms and relations to build the ontology. For example, if one sees lots of occurrences of “organisation” in the text, then perhaps it would be introduced as a concept in the ontology. We have a many-one relation from the tokens to the ontology. This makes sense. Seen another way, we have a default gazetteer where every given token (e.g. IBM) in a text has the same annotation, giving the impression of a one-many relation. This also makes sense. Neither of these seems problematic to me, largely because I don’t really know much or presume much about the meaning of the annotation on the token: from the text, I abstract the concept; from the gazetteer, I label tokens as belonging to the same annotation class. In no case is a token “organisation” annotated with Organisation; even if it were, I couldn’t really object unless I said more about what I think the annotation means.
Contrast these points with what goes on with ORG (admittedly, this gets pretty philosophical, and in terms of day to day practice, it may not be relevant). First, it seems that one instance in the ontology is associated with multiple tokens in the text. Second, an instance or class in the ontology can be associated with a token that is intended to have some similar meaning — e.g. the individual IBM in the ontology is associated by annotation with every token of IBM in the text, and similarly for the classes. Neither of these makes sense to me in terms of what ontologies are intended to represent, which is a state of knowledge (the fixed concepts, object and data properties, and individuals) about a domain. On the first point, how can I be assured that the intended meaning of tokens is the same throughout the corpus? In one document, we might find IBM as the name of a non-existent company, in another for an existing company, and in another for a company that has gone bankrupt. Simply put, the string might remain the same, but the knowledge we have about it may vary. Ontologies (as they are currently represented) do not allow such dynamic interpretation. To ignore this point risks having annotations (and whatever might flow from the annotations) slip; for example, it would be wrong to find a relationship between IBM and owners where the company doesn’t exist. On the second point, conceptually it makes no sense to say that a token “organisation” is itself associated with the concept or instance ‘organisation’ in the ontology. Of course, in developing the ontology, going from the text to the ontology makes good sense, since one is abstracting from the text to the ontology. Yet, in that move, one makes something different — a concept over all the “ideas” drawn from the tokens.
So, I disagree emphatically with Peters and Maynard (from the NeOn article): “Texts are annotated with ontology classes, and the textual elements function as instances of these classes.” The textual element “organisation” or “IBM” is an instance of the concept organisation or the individual IBM? I think this is a category mistake.
In general, I find the relationship between the text, intermediate representations (gazetteers), and ontologies (higher level representations of knowledge) rather interesting, but somewhat murky. As I said earlier, perhaps this is just philosophy. Depending on the domain of discussion, the corpus, and the way the annotations and ontologies are used, perhaps my intuition of lurking trouble will not be realised… Equally, there is likely something simple that I’m missing. If so, please enlighten me.
By Adam Wyner

Meeting with John Sheridan on the Semantic Web and Public Administration

I met today with John Sheridan, Head of e-Services, Office of Public Sector Information, The National Archives, located at the Ministry of Justice, London, UK. Also at the meeting was John’s colleague Clare Allison. John and I had met at the ICAIL conference in Barcelona, where we briefly discussed our interests in applications of Semantic Web technologies to legal informatics in the public sector. Recently, John got back in contact to talk further about how we might develop projects in this area.
Perhaps most striking to me is that John made it clear that the government (at least his sector) is proactive, looking for research and development projects that make government data available and usable in a variety of ways. In addition, he wanted to develop a range of collaborations to better understand the opportunities the Semantic Web may offer.
As part of catching up with what is going on, I took a look around the web for relatively recent documents on related activities.

In our discussion, John gave me an overview of the current state of affairs in public access to legislation, in particular, the legislative markup and API. The markup is intended to support publication, revision, and maintenance of legislation, among other possibilities. We also had some discussion about developing an ontology of government which would be linked to legislation.
Another interesting dimension is that John’s office is one of the few I know of which are actively engaged in developing a knowledge economy, partly encouraged by public administrative requirements and goals. Others in this area are the Dutch and the US (with xml.gov). All very promising, and discussions well worth following up on.
Copyright © 2009 Adam Wyner

Participating in One-Lex — Managing Legal Resources on the Semantic Web

Later this summer, I’ll be participating in the summer school Managing Legal Resources in the Semantic Web, September 7 to 12 in San Domenico di Fiesole (Florence, Italy). This program will focus on several aspects of legal document management:

  • Drafting methods, to improve the language and the structure of legislative texts
  • Legal XML standards, to improve the accessibility and interoperability of legal resources
  • Legal ontologies, to capture legal metadata and legal semantics
  • Formal representation of legal contents, to support legal reasoning and argumentation
  • Workflow models, to cope with the lifecycle of legal documentation

While I’m familiar with several of these areas, I’m using this opportunity to fill in my knowledge across all of them.