Argumentation and Linguistics Tutorial at ACAI 2013

I presented a tutorial on Argumentation and Linguistics at the Advanced Course on Artificial Intelligence (ACAI 2013) held at the Department of Informatics, King’s College London. The course focussed on Argumentation and Artificial Intelligence. From the description:

The ACAI Summer School 2013 (ACAI 2013) will be held at King’s College London, UK, from the 1st July to the 5th July 2013 and is on the topic of Argumentation in Artificial Intelligence. Computational models of argument, and the development of agreement technologies, are becoming important areas in artificial intelligence. The aim of the summer school is to provide the attendees with a solid grounding in the basic ideas in formal modelling of argumentation, dialogue, and negotiation. Furthermore, there will be a programme of lectures on application areas, lab sessions on software development, and lectures linking with areas in AI and beyond.

There were about 40 students in attendance. The ACAI course on argumentation covered a good, broad range of topics, presented by my European colleagues. The core of the programme consisted of four main speakers who gave 6 hours of lectures:

  • Pietro Baroni (Università degli Studi di Brescia) on Abstract Argumentation
  • Philippe Besnard (Institut de Recherche en Informatique de Toulouse) on Logic-Based Argumentation
  • Nicolas Maudet (University Pierre et Marie Curie) on Negotiation
  • Simon Parsons (University of Liverpool) on Dialogue

There were also presentations on applications of argumentation and agreement technologies:

  • Leila Amgoud (Institut de Recherche en Informatique de Toulouse) on Argumentation in Decision-Making
  • Katie Atkinson (University of Liverpool) on Argumentation in eGovernment
  • John Fox (University of Oxford) on Argumentation in Medicine
  • Nir Oren (University of Aberdeen) on Argumentation in Planning
  • Henry Prakken (Utrecht University) on Argumentation in Law
  • Chris Reed (University of Dundee) on Argumentation on the Web
  • Stefan Woltran (Vienna University of Technology) on Implementation of Argumentation
  • Adam Wyner (University of Aberdeen) on Argumentation and Linguistics

The slides of my talk are available via the link below:
Argumentation and Linguistics
Adam Wyner

Presentation at THiNK Network 2013

I participated in the THINK: The Humanities Knowledge Transfer Network meeting on July 1, 2013, at the RSA House in London.
The RSA House Great Room
I made a presentation on Opportunities and Challenges of Textual Big Data for the Humanities, prepared with my colleague Prof. Barbara Fennell, Department of Linguistics, University of Aberdeen. Barbara was very generous in bringing me into this network; we’ve had several fruitful meetings, and I look forward to future collaborations.

Paper at RuleML Special Session on Human-Rules

I’m co-author of a paper in the special session on Human-Rules at the 7th International Web Rule Symposium (RuleML 2013), Seattle, Washington, USA.
Seattle, Washington, USA
A Study on Translating Regulatory Rules from Natural Language to Defeasible Logic
Adam Wyner and Guido Governatori
Abstract
Legally binding regulations are expressed in natural language. Yet, we cannot formally or automatically reason with regulations in that form. Defeasible Logic has been used to formally represent the semantic interpretation of regulations; such representations may provide the abstract specification for a machine-readable and processable representation, as in LegalRuleML. However, manual translation is prohibitively costly in terms of time, labour, and knowledge. The paper discusses work in progress that uses the state of the art in automatic translation to render a sample of regulatory clauses in a machine-readable formal representation and compares the result to correlated Defeasible Logic representations. It outlines some key problems and proposes tasks to address the problems.
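As a rough illustration of the target form (a simplified example of my own, not one drawn from the paper's sample), a clause such as "An invoice must be paid within 30 days, unless it is disputed" might be rendered as a pair of defeasible rules with a deontic obligation operator O and a superiority relation that resolves their conflict:
$r_1: invoice(x) \Rightarrow O\,payWithin30Days(x)$
$r_2: invoice(x) \land disputed(x) \Rightarrow \neg O\,payWithin30Days(x)$
$r_2 > r_1$
Here $\Rightarrow$ marks a defeasible rule, and $r_2 > r_1$ states that the exception overrides the general obligation.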
Bibtex
@INPROCEEDINGS{WynerGovernatoriH-R2013,
author = {Adam Wyner and Guido Governatori},
title = {A Study on Translating Regulatory Rules from Natural Language to Defeasible Logic},
booktitle = {Proceedings of the {R}ule{ML} 2013},
publisher = {{CEUR}},
year = {2013},
pages = {??-??},
address = {Seattle, Washington, USA},
note = {To appear}
}

ICAIL 2013 Papers

I’m co-author of three papers at the 14th International Conference on Artificial Intelligence and Law (ICAIL 2013), Rome, Italy.
Il Colosseo, Rome, Italy
OASIS LegalRuleML
Tara Athan, Harold Boley, Guido Governatori, Monica Palmirani, Adrian Paschke, Adam Wyner
Abstract
In this paper we present the motivation, use cases, design principles, abstract syntax, and initial core of LegalRuleML (LRML). The LRML core is sufficiently rich for expressing legal sources, time, defeasibility, and deontic operators. An example is provided. LRML is compared to related work.
Bibtex
@INPROCEEDINGS{AthanEtAl2013,
author = {Tara Athan and Harold Boley and Guido Governatori and Monica Palmirani and Adrian Paschke and Adam Wyner},
title = {{OASIS} {L}egal{R}ule{ML}},
booktitle = {Proceedings of the Fourteenth International Conference on Artificial Intelligence and Law (ICAIL 2013)},
year = {2013},
pages = {3-12},
address = {Rome, Italy}
}
Argument Schemes for Reasoning with Legal Cases Using Values
Trevor Bench-Capon, Henry Prakken, Adam Wyner, and Katie Atkinson
Abstract
Argument schemes can provide a means of explicitly describing reasoning methods in a form that lends itself to computation. The reasoning required to distinguish cases in the manner of CATO has been previously captured as a set of argument schemes. Here we present argument schemes that encapsulate another way of reasoning with cases: using preferences between social values revealed in past decisions to decide cases which have no exact matching precedents when the cases are described in terms of factors. We provide a set of schemes, with variations to capture different ways of comparing sets and varying degrees of promotion of values; we formalise these schemes; and we illustrate them with some examples.
Bibtex
@INPROCEEDINGS{BenchCaponPrakkenWynerAtkinsonValueCBR2013,
author = {Trevor Bench-Capon and Henry Prakken and Adam Wyner and Katie Atkinson},
title = {Argument Schemes for Reasoning with Legal Cases Using Values},
booktitle = {Proceedings of the Fourteenth International Conference on Artificial Intelligence and Law (ICAIL 2013)},
year = {2013},
pages = {13-22},
address = {Rome, Italy}
}
Argumentation Based Tools for Policy-Making
Maya Wardeh, Adam Wyner, Trevor Bench-Capon, and Katie Atkinson
Abstract
Short paper, so no abstract.
Bibtex
@INPROCEEDINGS{WardehWynerAtkinsonBenchCaponDemos2013,
author = {Maya Wardeh and Adam Wyner and Trevor Bench-Capon and Katie Atkinson},
title = {Argumentation Based Tools for Policy-Making},
booktitle = {Proceedings of the Fourteenth International Conference on Artificial Intelligence and Law (ICAIL 2013)},
year = {2013},
pages = {249-250},
address = {Rome, Italy}
}

Seminar Presentation at Aberdeen Law School

I was recently invited to give a seminar at the Law School at the University of Aberdeen about research on AI and Law in general and text analytics for legal studies in particular. Though it was held at 16:00 on a Friday (!), it was well attended (thanks to all who came), and there was good discussion afterwards. I hope this is the start of a collaboration with my colleagues in the Law School.
The slides have some references and links that might be interesting. Click on the title link for the slides.
Textual Processing of Legal Cases
Adam Wyner

Tutorial on "Textual Information Extraction from Legal Resources" at the 16th International Conference on Artificial Intelligence and Law, Rome, Italy

Topic

Legal resources such as legislation, public notices, case law, and other legally relevant documents are increasingly freely available on the internet. They are almost entirely presented in natural language and in text. Legal professionals, researchers, and students need to extract and represent information from such resources to support compliance monitoring, analyse cases for case-based reasoning, and extract information in the discovery phase of a trial (e-discovery), amongst a range of possible uses. To support such tasks, powerful text analytic tools are available. The tutorial presents an in-depth demonstration of one toolkit, the General Architecture for Text Engineering (GATE), with examples, as well as several briefer demonstrations of other tools.

Goals

Participants in the tutorial should come away with some theoretical sense of what textual information extraction is about. They will also see some practical examples of how to work with a corpus of materials, develop an information extraction system using GATE and the other tools, and share their results with the research community. Participants will be provided with information on where to find additional materials and learn more.

Intended Audience

The intended audience includes legal researchers, legal professionals, law school students, and political scientists who are new to text processing as well as experienced AI and Law researchers who have used NLP, but wish to get a quick overview of using GATE.

Covered Topics

  • Motivations to annotate, extract, and represent legal textual information.
  • Uses and domains of textual information extraction. Sample materials from legislation, case decisions, gazettes, e-discovery sources, among others.
  • Motivations to use an open source tool for open source development of textual information extraction tools and materials.
  • The relationship to the semantic web, linked documents, and data visualisation.
  • Linguistic/textual problems that must be addressed.
  • Alternative approaches (statistical, knowledge-light, machine learning) and a rationale for a particular bottom-up, knowledge-heavy approach in GATE.
  • Outline of natural language processing modules and tasks.
  • Introduction to GATE – loading and running simple applications, inspecting the results, refining the search results.
  • Development of fragments of a GATE system – lists, rules, and examination of results (a simplified sketch of this lists-plus-rules idea appears after this list).
  • Discussion of more complex constructions and issues, such as named entity recognition, document structure, and fact pattern identification, which is essential for case-based reasoning.
  • Introduction to ontologies.
  • Linking textual information extraction to ontologies.
  • Introduction to related tools and approaches: C&C/Boxer (parser and semantic interpreter), Attempto Controlled English, scraperwiki, among others.
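
To give a flavour of this lists-plus-rules style of processing, here is a minimal sketch in plain Python (not GATE itself); the gazetteer entries, annotation types, rule, and sample sentence are all invented for illustration:

import re

# A toy version of the gazetteer-plus-rules idea that GATE implements with
# gazetteer lists and JAPE rules. All entries below are invented examples.

# "Gazetteer": surface strings mapped to an annotation type.
GAZETTEER = {
    "Court of Appeal": "Court",
    "Supreme Court": "Court",
    "High Court": "Court",
}

def lookup_annotations(text):
    """Return (start, end, type, matched string) tuples for gazetteer matches."""
    annotations = []
    for phrase, ann_type in GAZETTEER.items():
        for m in re.finditer(re.escape(phrase), text):
            annotations.append((m.start(), m.end(), ann_type, m.group()))
    return sorted(annotations)

def holding_rule(text, annotations):
    """Toy rule: a Court annotation immediately followed by 'held' suggests a holding."""
    results = []
    for start, end, ann_type, matched in annotations:
        if ann_type == "Court" and text[end:].lstrip().startswith("held"):
            results.append((start, matched, "Holding"))
    return results

sample = "The Court of Appeal held that the notice was invalid."
anns = lookup_annotations(sample)
print(anns)                        # [(4, 19, 'Court', 'Court of Appeal')]
print(holding_rule(sample, anns))  # [(4, 'Court of Appeal', 'Holding')]

In GATE proper, the gazetteer would be a set of list files compiled into Lookup annotations, and the rule would be written in JAPE over those annotations rather than over raw strings.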

Date, Time, Location, and Logistics

Monday, June 10, afternoon session.
The tutorial was held at the Casa dell’Aviatore, viale dell’Università 20 in Rome, Italy.
Information about the conference is available at the website for the 14th International Conference on Artificial Intelligence and Law (ICAIL).

Slides

The slides from the presentation are available here:
Textual Information Extraction from Legal Resources

Further Information

Contact the lecturer.

Lecturer

Dr. Adam Wyner
Lecturer, Department of Computing Science, University of Aberdeen
Aberdeen, Scotland
azwyner at abdn dot ac dot uk
Website
The lecturer has a PhD in Linguistics, a PhD in Computer Science, and a research background in computational linguistics. The lecturer has previously given tutorials on this topic at JURIX 2009 and ICAIL 2011 and an invited talk at RuleML 2012, has published several conference papers on text analytics of legal resources using GATE and C&C/Boxer, and continues to work on text analysis of legal resources.

Psychological Studies of Policy Reasoning

The New York Times had an article on the difficulties the public has in understanding complex policy proposals – I’m Right (For Some Reason). The points in the article relate directly to the research I’ve been doing at Liverpool on the IMPACT Project, for we decompose a policy proposal into its constituent parts for examination and improved understanding. See our tool live: Structured Consultation Tool
Policy proposals are often presented in an encapsulated form (a sound bite), and those receiving them presume that they understand them. This is the illusion of explanatory depth, discussed in a recent article by Frank Keil (a psychology professor at Cornell when and where I was a Linguistics PhD student): people believe they understand a complex phenomenon with greater precision, coherence, and depth than they actually do; they overestimate their understanding. To philosophers, this is hardly a new phenomenon, but showing it experimentally is a new result.
In research about public policy, the NY Times authors, Sloman and Fernbach, describe experiments in which people stated a position and then had to justify it. The results showed that participants softened their views as a result, for their efforts at justification highlighted the limits of their understanding. Rather than relying on such statements of policy proposals, they suggest:

Instead, we voters need to be more mindful that issues are complicated and challenge ourselves to break down the policy proposals on both sides into their component parts. We have to then imagine how these ideas would work in the real world — and then make a choice: to either moderate our positions on policies we don’t really understand, as research suggests we will, or try to improve our understanding.

Breaking down policy proposals into component parts for further investigation and understanding is exactly what we’ve been doing in the IMPACT Project.
This article and the references to further literature are not only intrinsically interesting, but they also give me additional ways of thinking about these issues and an evaluative paradigm for our tools.

Presentation at Conference on Agreement Technologies

I participated in the 1st International Conference on Agreement Technologies in Dubrovnik, Croatia.

The talk, Arguing from a Point of View, addresses the issue of extracting argumentative information from web-based information sources such as consumer product reviews or recommendations. Jodi Schneider is a co-author. The paper is available in the previous post. Some of the topics are developed further in our paper at SWAIE 2012.

Papers at JURIX 2012

I’m co-author of two papers at the 25th International Conference on Legal Knowledge and Information Systems (JURIX 2012), Amsterdam, The Netherlands. Links to the final drafts are forthcoming.
A Model-Based Critique Tool for Policy Deliberation
Adam Wyner, Maya Wardeh, Trevor Bench-Capon, and Katie Atkinson
Abstract
Domain models have proven useful as the basis for the construction and evaluation of arguments to support deliberation about policy proposals. Using a model provides the means to systematically examine and understand the fine-grained objections that individuals might have about the policy. While in previous approaches, a justification for a policy proposal is presented for critique by the user, here, we reuse the domain model to invert the roles of the citizen and the government: a policy proposal is elicited from the citizen, and a software agent automatically and systematically critiques it relative to the model and the government’s point of view. Such an approach engages citizens in a critical dialogue about the policy actions, which may lead to a better understanding of the implications of their proposals and that of the government. A web-based tool that interactively leads users through the critique is presented.
Bibtex
@INPROCEEDINGS{WynerEtAlCritique2012,
author = {Adam Wyner and Maya Wardeh and Trevor Bench-Capon and Katie Atkinson},
title = {A Model-Based Critique Tool for Policy Deliberation},
booktitle = {Proceedings of 25th International Conference on Legal Knowledge and Information Systems (JURIX 2012)},
year = {2012},
pages = {167-176},
address = {Amsterdam},
publisher = {IOS Press},
comment = {Legal Knowledge and Information Systems. JURIX 2012: The 25th Annual Conference}
}
An Empirical Approach to the Semantic Representation of Laws
Adam Wyner, Johan Bos, Valerio Basile, and Paulo Quaresma
Abstract
To make legal texts machine-processable, the texts may be represented as linked documents, semantically tagged text, or translated to formal representations that can be automatically reasoned with. The paper considers the latter, which is key to testing consistency of laws, drawing inferences, and providing explanations relative to input. To translate laws to a form that can be reasoned with by a computer, sentences must be parsed and formally represented. The paper presents the state of the art in automatic translation of law to a machine-readable formal representation, provides corpora, outlines some key problems, and proposes tasks to address the problems.
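As a schematic illustration (my own example, not a sentence from the paper's corpora), a provision such as "Every employer shall provide a safe workplace" might be parsed and then represented in first-order form roughly as:
$\forall x\, (employer(x) \rightarrow \exists y\, (workplace(y) \land safe(y) \land provide(x, y)))$
A fuller treatment would also want a deontic operator for "shall", which is one of the interpretation issues such translations raise.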
Bibtex
@INPROCEEDINGS{WynerEtAlSemanticRep2012,
author = {Adam Wyner and Johan Bos and Valerio Basile and Paulo Quaresma},
title = {An Empirical Approach to the Semantic Representation of Laws},
booktitle = {Proceedings of 25th International Conference on Legal Knowledge and Information Systems (JURIX 2012)},
year = {2012},
pages = {177-180},
address = {Amsterdam},
publisher = {IOS Press}
}

Oxford Internet Institute

My attention was drawn to the Oxford Internet Institute:

The Oxford Internet Institute was founded in 2001 at the University of Oxford, as an academic centre for the study of the societal implications of the Internet.
In the last forty years the Internet has grown from an arcane and specialized academic service to the sophisticated global network of networks we see today: during this period the complexity of its societal implications has become ever more obvious, as well as the many ways it shapes our lives. Grounded in a determination to measure, understand and explain the Internet’s multi-faceted interactions and effects, our research projects bring together some of the best international scholars within a multi-disciplinary department in one of the world’s top research universities. We are committed to being an informed, independent and nonpartisan source of the highest quality analysis and insight in all our research and policy-related activities.

The institute recently organised a conference, Internet, Politics, Policy 2012: Big Data, Big Challenges, at which there were some papers bearing on policy-making. These topics are closely related to the research that I do, and the institute is an organisation worth following in the future.