Rules as Code

Knowledge Sharing Session

On Friday, 28 February, I attended and participated in a Rules as Code knowledge sharing session at the Bank of England, London, UK.

Thanks to Angus Moir (Bank of England) and Luke Norbury (Parliamentary Counsel Office) for organising the event.

Good presentations, interesting comments, and new contacts. I look forward to more collaborations on this important work.

There were participants from:

      • Office of Parliamentary Counsel
      • Bank of England
      • Swansea University
      • Financial Conduct Authority
      • Department for Business, Energy & Industrial Strategy
      • Department for Work and Pensions
      • Home Office
      • Government Legal Service
      • Financial Reporting Council
      • Legislative Drafting Office, Jersey
      • Office of the Legislative Counsel, Wales
      • Office of the Legislative Counsel, Northern Ireland

Agenda

      • 14:00 – 14:20: Welcome & introductions [OPC]
      • 14:20 – 14:30: Overview of Digital First Policy Making & Rules as Code [Angus/BOE]
      • 14:30 – 14:40: OPC / Uni. Swansea Rules as Code collaboration [Adam/Uni Swansea and Paul/OPC]
      • 14:40 – 14:50: Non-financial legislation – human input, discretion & logic. [Matthew Waddington/Jersey PC]
      • 14:50 – 15:00: DRR: Showcasing machine executable regulation [Angus/BOE]
      • 15:00 – 15:10: Universal Credit Logic Map [Luke/OPC and Bridget/DWP]
      • 15:10 – 15:25: BEIS Better Regulation Executive: Projects to support the Digital Transformation of Regulation [Adrienn/BEIS]
      • 15:25 – 15:40: General Q&A / discussion [Luke/OPC]
      • 15:40 – 15:50: Wrap up and next steps [Angus/BOE]

Legislation and Computers – London Workshop

I participated (albeit remotely) in a workshop on Legislation and Computers held in London on 25 June 2019. The meeting is part of ongoing work by an international group of Parliamentary Counsel to explore and discuss recent developments in machine-readable and executable legislation.

I gave my talk jointly with one of my collaborators, Fraser Gough, Parliamentary Counsel at the Parliamentary Counsel Office of the Scottish Government.

The program of the workshop is here and the slides of the talk are here.

Office of the Parliamentary Counsel, London

I had the opportunity to give a talk at the Office of the Parliamentary Counsel, London, on 21 May 2019 about a small pilot project done with colleagues in the Parliamentary Counsel Office of the Scottish Government.

The talk is about applying some LegalRuleML elements as annotations to a corpus of Scottish legislation, making the annotated documents accessible on the Web, then visualising and querying the corpus to access particularly relevant information from across the corpus.

The slides of the talk are available here.

Thanks especially to Luke Norbury for the kind invitation and to the audience.

CodeX Weekly Meeting

I spoke briefly at the online CodeX weekly meeting about a small pilot project done with colleagues in the Parliamentary Counsel Office of the Scottish Government.

The talk is about applying some LegalRuleML elements as annotations to a corpus of Scottish legislation, making the annotated documents accessible on the Web, then visualising and querying the corpus to access particularly relevant information from across the corpus.

There were other excellent presentations; CodeX will make a recording of the session available.

The slides of my talk are available here. The talk is a shortened and slightly modified version of a talk at the Office of the Parliamentary Counsel, London.

Thanks especially to Jameson Dempsey at CodeX for inviting me to participate.

PhD Projects in AI and Law

Proposals

Below is a range of PhD projects in AI and Law. Other project proposals in the general AI and Law topic area are also welcome. I am interested in NLP (information extraction, semantic representation, controlled natural languages, classification, chatbots, dialogue/discourse), argumentation, various forms of legal reasoning (case-based reasoning, client consultations, legislation/regulations, court proceedings, contracts), and ontologies/knowledge graphs.

An Argument Chatbot

Chatbots are a popular area of development. In this project, the student will develop a chatbot for policy-making, legal consultations, or scientific debates. The chatbot should be capable of various dialogue types such as information-seeking, deliberation, and persuasion; in addition, the dialogue should be tied to patterns of argument and critical questions. The underlying techniques will be natural language processing (rule-based and machine learning), structured argumentation, and knowledge representation and reasoning. The project may be done in collaboration with IBM UK, working with IBM scientists and engineers.
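To illustrate how a dialogue can be tied to critical questions, here is a minimal sketch in Python. The scheme name and the questions are hypothetical examples (loosely based on the Expert Opinion argumentation scheme), not part of any existing system.

```python
# Hypothetical critical questions for an 'expert_opinion' argumentation
# scheme; a real chatbot would draw these from a curated scheme catalogue.
CRITICAL_QUESTIONS = {
    "expert_opinion": [
        "Is the source a genuine expert in the domain?",
        "Did the expert actually assert the claim?",
        "Is the claim consistent with what other experts say?",
    ],
}

def next_critical_question(scheme, asked):
    """Return the next critical question for a scheme that has not yet
    been asked in the dialogue, or None when all are exhausted."""
    for question in CRITICAL_QUESTIONS.get(scheme, []):
        if question not in asked:
            return question
    return None
```

A dialogue manager would call `next_critical_question` after an argument of the given scheme is asserted, keeping the persuasion dialogue anchored to the scheme's critical questions.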

Argument Extraction and Reconstruction

The goal is to identify textual passages which indicate argument and rhetorical structure (premises, claim, continuation) or argumentation schemes (patterns of everyday reasoning such as Expert Witness, Practical Reasoning, Commitment, etc). The student will review some background literature, analyse a selection of argumentation schemes, identify the particular elements to be extracted using an NLP tool, create the processing components, carry out a small evaluation exercise, and connect the NLP output to a computational argumentation tool. The particular corpus of text is to be determined.
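A very small sketch of the indicator-based approach: label sentences as premise or claim by matching discourse cues. The cue lists are illustrative assumptions; a real project would learn indicators from an annotated corpus rather than hand-code them.

```python
import re

# Illustrative cue phrases signalling argument components; a real system
# would use a curated lexicon or a trained classifier.
PREMISE_CUES = ["because", "since", "given that"]
CLAIM_CUES = ["therefore", "thus", "hence", "consequently"]

def label_sentence(sentence):
    """Label a sentence as 'premise', 'claim', or 'other' by cue matching."""
    s = sentence.lower()
    if any(re.search(r"\b" + re.escape(cue) + r"\b", s) for cue in PREMISE_CUES):
        return "premise"
    if any(re.search(r"\b" + re.escape(cue) + r"\b", s) for cue in CLAIM_CUES):
        return "claim"
    return "other"
```

The output labels could then feed a computational argumentation tool as candidate argument components for reconstruction.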

Textual Entailment

Textual entailment is the task of drawing inferences from a sentence or passage; for example, the sentence “Bill turned off the light” implies “The light was off”. Several NLP tools are available for textual entailment. In this project, the student will apply textual entailment tools to a corpus, evaluate them against a “gold standard”, then modify a tool to improve performance. There are existing corpora of texts for training and evaluating textual entailment.
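As a baseline to compare tools against, a crude word-overlap heuristic can be sketched: the hypothesis counts as "entailed" when most of its content words occur in the text. This is an illustrative assumption, not a real entailment system, and it fails on negation and paraphrase.

```python
def word_overlap_entails(text, hypothesis, threshold=0.8):
    """Crude entailment baseline: the hypothesis is judged entailed when at
    least `threshold` of its content words appear in the text. Real systems
    use trained natural language inference models."""
    stopwords = {"the", "a", "an", "is", "are", "was", "were"}
    text_words = {w.strip(".,").lower() for w in text.split()} - stopwords
    hyp_words = {w.strip(".,").lower() for w in hypothesis.split()} - stopwords
    if not hyp_words:
        return True
    return len(hyp_words & text_words) / len(hyp_words) >= threshold
```

Scoring a learned model against such a baseline on a gold-standard corpus is one way to frame the evaluation exercise.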

Contrast Identification

Debates express contrasting positions on a particular topic of interest. A key problem is to determine the semantic contrariness of the positions as expressed by statements within the positions. Such a task is relatively easy for people to do, but difficult for automated identification since there are many linguistic ways to express contrasts, some of which may be synonymous. Annotation of contrast would help support semi-automatic construction of arguments and counter-arguments from text. The student will review some background literature, analyse a selection of contrasting expressions, identify the particular elements to be extracted using an NLP tool, create the processing components, carry out a small evaluation exercise, and connect the NLP output to a computational argumentation tool.
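A toy sketch of two cheap signals for contrast: discourse markers within a sentence, and antonym pairs across two statements. The marker set and antonym list are illustrative assumptions; a real project would draw on WordNet or embeddings.

```python
CONTRAST_MARKERS = {"but", "however", "although", "whereas", "yet"}
# Tiny illustrative antonym list; a real system would use a lexical
# resource such as WordNet or learned embeddings.
ANTONYMS = {("safe", "dangerous"), ("legal", "illegal"), ("guilty", "innocent")}

def _content_words(sentence):
    return {w.strip(".,;").lower() for w in sentence.split()}

def marks_contrast(sentence):
    """Does the sentence contain an explicit contrast marker?"""
    return bool(_content_words(sentence) & CONTRAST_MARKERS)

def contrasting_statements(s1, s2):
    """Do two statements contain an antonym pair, in either direction?"""
    w1, w2 = _content_words(s1), _content_words(s2)
    return any((a in w1 and b in w2) or (b in w1 and a in w2)
               for a, b in ANTONYMS)
```

Both signals would be noisy on their own; the project's evaluation exercise would quantify how far such surface cues go before deeper semantics is needed.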

Classification of legal texts

In legal texts such as legislation or case law, different segments of the text serve different purposes. For example, one portion may be a statement of facts, while another is a statement of a rule. The project specifies the portions and classifications of a corpus of legal texts, creates a gold standard, then applies machine learning techniques to classify the portions to a high level of accuracy. Another topic within this area is legal decision prediction, wherein legal decisions (cases) are classified in various ways.
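A hand-rolled multinomial Naive Bayes, with hypothetical 'fact'/'rule' labels, shows the core of such a classifier in miniature; a real project would use an established machine learning library and a properly annotated gold standard.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesTextClassifier:
    """Minimal multinomial Naive Bayes with add-one smoothing for labelling
    text segments (e.g. 'fact' vs 'rule'); a sketch, not production code."""

    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)
        self.label_totals = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        words = text.lower().split()

        def log_prob(label):
            total = sum(self.word_counts[label].values())
            lp = math.log(self.label_totals[label])
            for w in words:
                lp += math.log((self.word_counts[label][w] + 1)
                               / (total + len(self.vocab)))
            return lp

        return max(self.label_totals, key=log_prob)
```

Trained on a few labelled segments, the classifier assigns the more probable label to unseen text; scaling this to real legislation is where the gold-standard corpus and evaluation come in.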

Bar Exam

In the US, to become a lawyer, one must take and pass the Bar Exam. The Bar Exam consists of 200 multiple-choice questions covering an extensive range of legal topics. The task in the project is to classify the questions in the Bar Exam using machine learning and to design a system to pass the Bar Exam, using techniques from NLP, logic, and machine learning.

A Controlled Natural Language with Defeasibility

Controlled Natural Languages (CNLs) are standardised, formal subsets of a natural language (such as English) which are both human-readable and machine-processable. Several CNLs have been developed, such as IBM’s ITA Controlled English (CE) and Business Rules Language (BRL), OMG’s Semantics of Business Vocabulary and Business Rules, and several academic languages. Some CNLs support ‘strict’ reasoning for ontologies, terminologies, or Predicate Logic, which is sufficient in many contexts. However, defeasible reasoning is essential in contexts with inconsistent and partial knowledge, such as political, legal, or scientific debates. The project explores the representation of and reasoning with defeasibility in a CNL, which could lead to a CNL with much wider applicability and impact. The project can be done in collaboration with IBM UK, working with IBM scientists and engineers, and can be either a theoretical study, an implementation, or a mixture of both. The supervisor has an extensive background in CNLs and argumentation/defeasibility.
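The flavour of defeasible reasoning can be sketched with rules that fire unless an exception holds, as in the classic birds-fly example. This is an illustrative toy, not a full defeasible logic: it omits rule priorities, reinstatement, and retraction of earlier conclusions.

```python
def defeasible_conclusions(facts, rules):
    """Forward-chain defeasible rules of the form
    (premises, exceptions, conclusion): a rule fires when all premises are
    established and no exception is. A sketch only; proper defeasible logics
    also handle priorities and reinstatement."""
    concluded = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, exceptions, conclusion in rules:
            if (set(premises) <= concluded
                    and not set(exceptions) & concluded
                    and conclusion not in concluded):
                concluded.add(conclusion)
                changed = True
    return concluded
```

A CNL front end would parse sentences like "Birds normally fly" and "Penguins do not fly" into such rule triples before reasoning.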

Rule Extraction from Legislation or Case Law

Legal texts (legislation, regulations, and case law) provide the “operational legal rules” for businesses, organisations, and individuals. It is important to be able to identify and extract such rules, particularly for rulebook compliance or to transform rules in natural language into machine-readable, executable rules. The student will analyse a selection of regulations, identify the particular elements to be extracted using NLP, create the processing components, translate rules from natural language to executable rules, draw inferences, and evaluate the results.
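The pipeline from natural-language rules to executable rules can be sketched at its simplest: a pattern match over "If ..., then ..." sentences, followed by forward chaining over the extracted pairs. The example sentences are hypothetical; real extraction would need full NLP parsing rather than a regular expression.

```python
import re

def extract_rule(sentence):
    """Parse a simple 'If <condition>, then <consequent>' sentence into a
    (condition, consequent) pair, or return None if it does not match."""
    m = re.match(r"[Ii]f (.+?),? then (.+?)\.?$", sentence.strip())
    if not m:
        return None
    return m.group(1).strip(), m.group(2).strip()

def apply_rules(established, rules):
    """Treat extracted conditions and consequents as atomic propositions
    and forward-chain over them to draw inferences."""
    known = set(established)
    changed = True
    while changed:
        changed = False
        for condition, consequent in rules:
            if condition in known and consequent not in known:
                known.add(consequent)
                changed = True
    return known
```

Evaluating how often the machine-readable rules reproduce the inferences a lawyer would draw is the natural evaluation step for such a project.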

An Expert System to Support Reasoning in Juries

Jury trials are a fundamental aspect of the Common Law legal system in the UK and USA. In jury trials, jurors are members of the public who are required to reason about the facts of the case and about the legal rules to arrive at a decision (e.g. whether the defendant is guilty or not guilty). This is a difficult and important task for a person who is not schooled in the law. Fortunately, in some jurisdictions, there are standardised “catalogues” of jury instructions to guide the jurors in how to reason. In this project, the student analyses a selection of jury instructions and implements them as an interactive juror decision support tool.
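The core of such a support tool can be sketched as checking the elements of an offence against a juror's answers; the verdict-relevant conclusion follows only when every element is proved. The element names below are hypothetical, not drawn from any actual jury instruction catalogue.

```python
def evaluate_instruction(elements, answers):
    """Check a (hypothetical) jury instruction: every element must be
    answered True for the elements to be satisfied. Returns the outcome
    and the list of elements still missing."""
    missing = [e for e in elements if not answers.get(e, False)]
    outcome = "elements satisfied" if not missing else "elements not satisfied"
    return outcome, missing
```

An interactive tool would pose each unresolved element as a question in turn, walking the juror through the instruction's structure.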

Legal Case Based Reasoning

Case-based reasoning uses previously decided cases to resolve new, undecided problems. Legal case-based reasoning structures legal reasoning in courts in the UK and the USA. The project will implement several existing formalisations of legal case-based reasoning.
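One well-known family of formalisations represents cases as sets of factors and ranks precedents by factor overlap with the new case (in the spirit of HYPO/CATO). The sketch below uses hypothetical case names and factors for illustration.

```python
def most_similar_case(new_factors, precedents):
    """Return the precedent sharing the most factors with the new case.
    Each precedent is a (name, factors, outcome) triple. A sketch of
    factor-based similarity, not a full HYPO/CATO implementation."""
    def shared(case):
        _name, factors, _outcome = case
        return len(set(factors) & set(new_factors))
    return max(precedents, key=shared)
```

A fuller formalisation would also distinguish pro-plaintiff from pro-defendant factors and construct arguments by citing, distinguishing, and downplaying precedents.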

Logical Formalisations of the Law

The law can be formalised in a variety of ways, and there are tools and techniques to support the task. Such formalisations can be queried and inferences drawn. The project will examine existing tools, see what can be improved, and provide fragments of formalised law.

Legal Ontologies/Knowledge Graph

In an ontology/knowledge graph (KG), domain knowledge about entities, their properties, and their relations is formally represented. Such representations also facilitate querying, extraction, linking, and inference. There are legal ontologies/KGs that represent the law, legal processes, and legal relationships. The project will examine existing legal ontologies, augment them, and build a richer ontological representation using existing tools.
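The querying side can be illustrated with a toy triple store and pattern matching, a stand-in for SPARQL over an RDF store; the identifiers and relations below are invented examples.

```python
def query(triples, s=None, p=None, o=None):
    """Match subject-predicate-object triples against a pattern where
    None acts as a wildcard; a toy stand-in for SPARQL over RDF."""
    return [(ts, tp, to) for ts, tp, to in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]
```

For example, `query(kg, p="amends")` retrieves every amendment relation in the graph, which is the kind of cross-corpus question a legal KG makes cheap to answer.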

Abstract Contract Calculator in Haskell

Create a program in Haskell, which is a functional programming language, to execute ‘theoretical’ legal contracts, which are contracts that have the form of an actual legal contract, but not the content.

BiCi Seminar "Frontiers and Connections between Argumentation Theory and Natural Language Processing"

Date: July 20th–25th, 2014
Location: Bertinoro International Center for Informatics (BiCi), Bertinoro, Italy
The seminar will be held in the BiCi, which is located in the small medieval hilltop town of Bertinoro, Italy, about 50km east of Bologna. The town is picturesque. Meetings are held in an archiepiscopal castle that has been converted into a modern conference center.
Context
Large amounts of text are added to the Web daily from social media, web-based commerce, scientific papers, eGovernment consultations, etc. Such texts are used to make decisions in the sense that people read the texts, carry out some informal analysis, and then (in the best case) make a decision; for example, a consumer might read the comments on an Amazon website about a camera before deciding what camera to buy. The problem is that the information is distributed, unstructured, and not cumulative. In addition, the argument structure – justifications for a claim and criticisms – might be implicit or explicit within some document, but harder to discern across documents. The sheer volume of information overwhelms users. Given all these problems, reasoning about arguments on the web is currently infeasible.
A solution to these problems would be to develop tools to aggregate, synthesize, structure, summarize, and reason about arguments in texts. Such tools would enable users to search for particular topics and their justifications, trace through the argument (justifications for justifications and so on), as well as to systematically and formally reason about the graph of arguments. By doing so, a user would have a better, more systematic basis for making a decision. However, deep, manual analysis of texts is time-consuming, knowledge intensive, and thus unscalable. To acquire, generate, and transmit the arguments, we need scalable machine-based or machine-supported approaches to extract arguments. The application of tools to mine arguments would be very broad and deep given the variety of contexts where arguments appear and the purposes they are put to.
On the one hand, text analysis is a promising approach to identify and extract arguments from text, receiving attention from the natural language processing community. For example, there are approaches on argumentation mining of legal documents, on-line debates, product reviews, newspaper articles, court cases, scientific articles, and other areas. On the other hand, computational models of argumentation have made substantial progress in providing abstract, formal models to represent and reason over complex argumentation graphs. The literature covers alternative models, a range of semantics, complexity, and formal dialogues. Yet, there needs to be progress not only within each domain, but in bridging between textual and abstract representations of argument so as to enable reasoning from source text.
To make progress and realize automated argumentation, a range of interdisciplinary approaches, skills, and collaborations are required, covering natural language processing technology, linguistic theories of syntax, semantics, pragmatics and discourse, domain knowledge such as law and science, computer science techniques in artificial intelligence, argumentation theory, and computational models of argumentation.
Objectives and Outcomes
The objective of the seminar is to gather an interdisciplinary group of scholars for an extended, collaborative discussion about the various aspects of connecting argumentation and natural language processing. The intended outcome of the seminar is a roadmap that outlines the state of the art, identifies key problems and issues, and suggests approaches to addressing them. More precisely, the seminar is conceived for the writing of a monograph “A Prospective View of Argumentation Theory and Natural Language Processing” that should become a standard reference in the field and should provide guidelines for future research by putting that activity in focus and identifying the most significant research issues in combining these two research fields. This roadmap will have several sections authored by the participants at the seminar and edited by the seminar organizers.
Format and Process
The seminar will adopt a structure in which personal interaction and open discussion are prominent, emphasizing discussion of results, ideas, sketches, works in progress, and open problems. Participants will be requested to prepare individual contributions around specific topics (see a tentative list below) so that the outcome of the workshop will constitute a roadmap for the area to be published in the near future. The allocation of topics, as well as the mechanism for compiling and elaborating contributions into a coherent draft that will form the working document for the workshop, will be made known in a future communication to those individuals who agree to participate in this workshop.
Currently we have identified the following areas of research to be presented for discussion at the workshop (and we welcome suggestions about additional topics):

  • Automatic identification of argument elements and relationships between arguments in a document;
  • Argumentation and negation & contrariness;
  • Argumentation and discourse;
  • Argumentation and dialogue;
  • Approaches combining NLP methods and argumentation frameworks;
  • Creation/evaluation of high-quality annotated natural language corpora to test argumentative models on naturally occurring data, or to train automatic systems on tasks related to argumentation (e.g. argument detection);
  • Applications of argumentation mining: summarization, extraction, visualization, retrieval.

Organizers
Elena Cabrio
INRIA Sophia-Antipolis Mediterranee, France
elena.cabrio@inria.fr
http://www-sop.inria.fr/members/Elena.Cabrio
Serena Villata
INRIA Sophia-Antipolis Mediterranee, France
serena.villata@inria.fr
http://www-sop.inria.fr/members/Serena.Villata
Adam Wyner
University of Aberdeen, Scotland
azwyner@abdn.ac.uk
http://wyner.info/LanguageLogicLawSoftware
Structure of Position Paper Submissions
Participants will be expected to submit position papers (with references) prior to the seminar. Submission details will be discussed over the course of the seminar. The seminar organizers will facilitate a fruitful exchange of ideas and information in order to integrate the discussion topics.
Position papers should follow the two-column format of ACL 2014 proceedings without exceeding eight (8) pages of content plus two extra pages for references. We strongly recommend the use of the ACL LaTeX style files. Submissions must conform to the official style guidelines, which are contained in the ACL style files, and they must be in PDF.
Subsequent to the seminar, draft roadmap documents will be circulated amongst the participants for further discussion and prior to submission for publication. We plan to publish the roadmap in a volume of the CEUR workshop proceedings series. In addition, we have a journal that has agreed to publish a special issue based on expanded and revised versions of the material presented at the workshop.
Organizational Issues
The total registration fees for each person for the whole stay (arrival Sunday evening – departure Friday after lunch) are 600 Euro. Participants pay their own costs; however, the organizers are seeking funding to defray the expenses. We will update as information becomes available. Fees include seminar registration, accommodation, WiFi, and meals (including an excursion and the social dinner).
BiCi Registration

Presentation at LaTeCH 2014 on "Text Analytics for Legal History"

The Swedish Coast
At the EACL 2014 Workshop Language Technology for Cultural Heritage, Social Sciences, and Humanities (LaTeCH), I’m presenting a paper on A Text Analytic Approach to Rural and Urban Legal Histories. Link to the presentation below.
A Text Analytic Approach to Rural and Urban Legal Histories
The ACL publication appears at http://www.aclweb.org/anthology/W/W14/W14-0614.pdf

Paper Accepted to LaTech 2014 Workshop at EACL

My colleagues and I have had a paper accepted to the EACL 2014 workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities, which has a special theme on linked data in the Humanities. The workshop takes place on 26 April 2014 in Gothenburg, Sweden.
Text Analysis of Aberdeen Burgh Records 1530-1531
Adam Wyner, Jackson Armstrong, Andrew Mackillop, and Philip Astley
Abstract
The paper outlines a text analytic project in progress on a corpus of entries in the historical burgh and council registers from Aberdeen, Scotland. Some preliminary output of the analysis is described. The registers run in a near-unbroken sequence from 1398 to the present day; the early volumes are a UNESCO UK listed cultural artefact. The study focusses on a set of transcribed pages from 1530-1531, originally handwritten in a mixture of Latin and Middle Scots. We apply a text analytic tool to the corpus, providing deep semantic annotation and making the text amenable to linking to web resources.
By Adam Wyner

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.