Presentation at tGov on the IMPACT Project

On March 18, 2011, I gave a presentation at tGov 2011 on the IMPACT Project.
The idea behind “transformational government” (or t-government) is that new technologies will change the way that the public interacts with the operation and delivery of public services, making them more web-based, joined-up, and citizen-centric than in the past. See, for example, Directgov, the website for the UK government. The IMPACT Project, which relates to how public policy is made, clearly addresses some of these issues.
Follow the links for the slides of the talk A Structured Online Consultation Tool and the paper Towards a Structured Online Consultation Tool.

Argumentation for Public-Policy Making – Presentation at the Central Office of Information, United Kingdom

In October 2010, I gave a presentation on the various elements of the IMPACT Project, which aims to apply computational models of argumentation to support public-policy making, at the Central Office of Information (COI) in London, United Kingdom. The COI is the UK government’s centre for marketing and communications. It works with government departments (on a contract basis) to inform and engage citizens in issues that affect their lives. The COI is under the Minister for the Cabinet Office.
This was an interesting opportunity to learn more about how the UK government gathers and delivers information to the public.
For my part, on behalf of the IMPACT Project, I presented the tools that could be used to support public-policy making. I outlined several of the current tools (some of which are used by the COI), their limitations, and some of the advantages that would be gained from the IMPACT tools. The slides are IMPACT Project Presentation at the Central Office of Information.
Since that meeting (the second), I’ve been in touch with Suzannah Kinsella, Head of Public Engagement at the COI. However, the UK government has been undergoing some reorganisation and review (see links under Review into Government Communications). Work from the IMPACT Project may be a useful part of this. In early April I shall again meet with her and another colleague at the COI to see how we can move ahead in collaborating with the COI on development of the tools.

Invited Speakers for JURIX 2010 in Liverpool Dec. 16-17

The invited speakers at JURIX 2010 in Liverpool Dec. 16-17 are:

  • John Sheridan, Head of e-Services in the Information Policy and Services Directorate of The National Archives. John is one of the main people behind the UK government’s work on legislation and the semantic web; and
  • Wiebe van der Hoek, member of the Agent ART Group at the University of Liverpool. His research in agents concentrates on Logics for Agent Systems, Cooperation, Negotiation, Games and Agents, Data Mining and the Semantic Web.

I previously met John Sheridan in August 2009 to discuss legislation and the semantic web; see my post. It will be very good to hear what has been going on since, particularly in the context of JURIX.
By Adam Wyner
Distributed under the Creative Commons
Attribution-Non-Commercial-Share Alike 2.0

ODET 2010: Online Deliberation Emerging Tools

The IMPACT Project that I am part of (at the University of Leeds) has some presentations coming up at the Online Deliberation Emerging Tools workshop (Leeds, 30 June) at the Conference on Online Deliberation (Leeds, 30 June–2 July). Interesting stuff (IMHO).
The program includes three members of the IMPACT Project — Ann Macintosh (with whom I work at Leeds), Tom Gordon, and Sanjay Modgil.
9.30 Welcome: Simon Buckingham Shum (Open U. UK)
9.40 Tim van Gelder (Austhink Consulting, AUS — bCisive Online & MS Word Argumentation)
10.05 Paul Culmsee (Seven Sigma Business Solutions, AUS — Compendium case study)
10.30 Nikos Karacapilidis (U. Patras, GR — CoPe_it!)
10.55 Anna De Liddo & Simon Buckingham Shum (Open U., UK — Compendium/Cohere)
11.20 Refreshments
11.45 Mark Snaith (U. Dundee, UK — OVAview)
12.10 David Price (Debategraph, UK — Debategraph)
12.35 Sanjay Modgil (U. Liverpool, UK — Parmenides)
12.55 Any brief comments on the morning, continuing into lunch chats…
1.00 Lunch
2.15 Ann Macintosh (U. Leeds, UK) and Tom Gordon (Fraunhofer FOKUS, DE — Impact Project)
2.35 Mark Klein (MIT, USA — Deliberatorium)
3.00 Rob Ennals (Intel Labs, USA — DisputeFinder)
3.25 Refreshments
4.00 Closing discussion: did we go forwards?…
4.45 End

Research Fellow at University of Leeds

On May 4, I’m taking up a research fellow position. I’ll continue to work on the IMPACT Project:

IMPACT will conduct original research to develop and integrate formal, computational models of policy and arguments about policy, to facilitate deliberations about policy at a conceptual, language-independent level.

I’ll be based at the University of Leeds, Institute of Communication Studies, in the Centre for Digital Citizenship:

The CdC’s mission is to promote outstanding research on the changing nature of citizenship in a digitally networked society and to contribute to the analysis and development of policy in this area.

I’ll be working with Ann Macintosh:

My research agenda falls within two main socio-technical areas of interest. The first concerns the societal effect of technology on governance processes and the development of an evaluation framework for eParticipation. This area of my research is providing high-level insights into the mechanisms that need to be built into future online participation systems to appreciate how, where and why people use them. My second research area is the support for citizen engagement in policy making and the provision of public agency information and knowledge. Here the focus is on the use of Web 2.0 and computer supported argumentation systems to support deliberation and knowledge sharing.

Looking forward to working on these topics!

Recent Paper Submissions

During my time at the Leibniz Center for Law working on the IMPACT Project, my colleagues Tom van Engers and Kiavash Bahreini and I prepared and submitted three papers to conferences and workshops. The drafts of the papers are linked below along with the abstracts. Comments welcome.
A Framework for Enriched, Controlled On-line Discussion Forums for e-Government Policy-making
Adam Wyner and Tom van Engers
Submitted to eGOV 2010
The paper motivates and proposes a framework for enriched on-line discussion forums for e-government policy-making, where pro and con statements for positions are structured, recorded, represented, and evaluated. The framework builds on current technologies for multi-threaded discussion lists by integrating modes, natural language processing, ontologies, and formal argumentation frameworks. With modes other than the standard reply “comment”, users specify the semantic relationship between a new statement and the previous statement; the result is an argument graph. Natural language processing with a controlled language constrains the domain of discourse, eliminates ambiguity and unclarity, allows a logical representation of statements, and facilitates information extraction. Nevertheless, the controlled language remains highly expressive and natural. Ontologies represent the knowledge of the domain. Argumentation frameworks evaluate the argument graph and generate sets of consistent statements. The output of the system is a rich and articulated representation of a set of policy statements which supports queries, information extraction, and inference.
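The evaluation step mentioned in the abstract can be illustrated with a minimal sketch of Dung-style abstract argumentation: given a set of arguments and an attack relation, compute the grounded extension (the arguments that ultimately survive). This is a generic illustration of the technique, not code from the IMPACT tools, and the toy argument names are invented.

```python
# Minimal Dung-style abstract argumentation: compute the grounded
# extension by iterating the characteristic function to a fixed point.
# Argument names and the attack graph are hypothetical examples.

def grounded_extension(arguments, attacks):
    """attacks: set of (attacker, target) pairs."""
    def defended(arg, in_set):
        # arg is acceptable w.r.t. in_set if every attacker of arg
        # is itself attacked by some member of in_set
        return all(
            any((d, a) in attacks for d in in_set)
            for (a, t) in attacks if t == arg
        )
    extension = set()
    while True:
        new = {a for a in arguments if defended(a, extension)}
        if new == extension:
            return extension
        extension = new

# A attacks B, B attacks C: A is unattacked, so A is in;
# B is out; C is defended by A.
args = {"A", "B", "C"}
atts = {("A", "B"), ("B", "C")}
print(sorted(grounded_extension(args, atts)))  # ['A', 'C']
```

The fixed-point iteration always terminates because the extension grows monotonically within a finite set of arguments.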
From Policy-making Statements to First-order Logic
Adam Wyner, Tom van Engers, and Kiavash Bahreini
Submitted to eGOVIS 2010
Within a framework for enriched on-line discussion forums for e-government policy-making, pro and con statements for positions are input, structurally related, then logically represented and evaluated. The framework builds on current technologies for multi-threaded discussion, natural language processing, ontologies, and formal argumentation frameworks. This paper focuses on the natural language processing of statements in the framework. A small sample policy discussion is presented. We adopt and apply a controlled natural language (Attempto Controlled English) to constrain the domain of discourse, eliminate ambiguity and unclarity, allow a logical representation of statements which supports inference and consistency checking, and facilitate information extraction. Each of the policy statements is automatically translated into first-order logic. The result is a logical representation of the policy discussion which we can query, draw inferences from (given ground statements), test for consistency, and extract detailed information from.
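The kind of translation the abstract describes can be sketched very roughly. The following is NOT the Attempto parser (APE), which handles a rich controlled fragment of English; it is a toy that maps one hypothetical template, "Every <A> is a <B>.", to a first-order-logic string, simply to show the shape of the mapping.

```python
# Toy illustration of mapping a controlled-language statement to a
# first-order-logic string. Handles only one invented template;
# the real system uses Attempto Controlled English and its parser.
import re

def to_fol(sentence):
    m = re.fullmatch(r"Every (\w+) is a (\w+)\.", sentence)
    if m:
        a, b = m.group(1).lower(), m.group(2).lower()
        # universally quantified conditional: all A's are B's
        return f"forall X ({a}(X) -> {b}(X))"
    raise ValueError("sentence not in the controlled fragment")

print(to_fol("Every subsidy is a policy."))
# forall X (subsidy(X) -> policy(X))
```

The point of a controlled language is exactly this determinism: each sentence pattern has a single, unambiguous logical form, so the translation needs no disambiguation step.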
Towards Web-based Mass Argumentation in Natural Language
Adam Wyner and Tom van Engers
Submitted to EKAW 2010
Within the artificial intelligence community, argumentation has been studied for quite some years now. Despite progress, the field has not yet succeeded in creating support tools that members of the public could use to contribute their views to discussions of public policy. One important reason for that is that the input statements of participants in policy-making discussions are put forward in natural language, while translating the statements into the formal models used by argumentation scientists is cumbersome. These formal models can be used to automatically reason with, query, or transmit domain knowledge using web-based technologies. Making this knowledge explicit, formal, and expressed in a language which a machine can process is a labour-, time-, and knowledge-intensive task. Moreover, such translation requires expertise that most participants in policy-making debates do not have. In this paper we describe an approach with which we aim to contribute to a solution of this knowledge-acquisition bottleneck. We propose a novel, integrated methodology and framework which adopts and adapts existing technologies. We use semantic wikis which support mass, collaborative, distributive, dynamic knowledge acquisition. In particular, ACEWiki incorporates NLP tools, enabling linguistically competent users to enter their knowledge in natural language, while yielding a logical form that is suitable for automated processing. In the paper we explain how we can extend ACEWiki and augment it with argumentation tools which elicit knowledge from users, make implicit information explicit, and generate subsets of consistent knowledge bases from inconsistent knowledge bases. To a set of consistent propositions, we can apply automated reasoners, allowing users to draw inferences and make queries. The methodology and framework take a fragmentary, incremental development approach to knowledge acquisition in complex domains.

The IMPACT Project — first two days

As I mentioned in a previous post, I am working in Amsterdam for the next three months on setting up a research project at the Leibniz Center for Law. The focus here is to develop information extraction from textual debates (using GATE) and a tool for inputting debates in a structured manner that can be further processed for reasoning.
The official IMPACT Project information is on CORDIS.
As part of my contribution, I have two draft papers, written in the spring and summer of 2009, which will be further developed at Leibniz: From Arguments in Natural Language to Argumentation Frameworks and Multi-modal Multi-threaded Online Forums. While these are early drafts of papers and not for wider circulation, they give a good indication of the line of thinking and of some of the key ideas we will be pursuing. Comments about these works are very welcome.

Discussion with Jeremy Tobias-Tarsh of Practical Law Company

On Wednesday January 13, 2010, I had a meeting with Jeremy Tobias-Tarsh, director of Practical Law Company (PLC) and currently in charge of overseeing the company’s three-year development plan. We had a very engaging, far-ranging discussion about the company’s interests in technological innovation in the legal domain. His colleagues at the meeting were Brigitte Kaltenbacher, who works on usability tests for searches among the company’s resources, and Sara Stangalini, who works with Brigitte.
The post gives an overview of our discussion — what PLC does, the ambitions for the future, a range of issues and tools to handle them, and some suggestions about moving ahead.
About PLC
PLC provides know-how for lawyers, meaning written analysis of current legal developments, practice notes (legal situations lawyers face and how the law treats them), standard draft documents, and checklists for managing actions. The services cover a range of legal areas such as arbitration, competition, corporate, construction, employment, finance, pensions, tax, and so on.
Jeremy spoke of an ambition at the company to use Semantic Web technologies on the company’s resources in order to give users faster, more precise, more meaningful and relevant results for searches in the resources — making the company’s content more findable. This might be done by annotating the content of the resources and supporting search with respect to the annotations. (Along these lines, an important advantage is that the company has been using an XML editor (Epic) for its documents for some time, so there is broad and widespread familiarity with what XML offers.)
Similarly, PLC could develop tools which improve the searches among a law firm’s documents. This is especially crucial where searches are done by junior staff with less knowledge of how and where to search. As made clear in discussions of knowledge management in law firms, an important task of senior lawyers in a firm is to train the new and junior lawyers in the details of the practice. While law schools may train law students in legal analysis and the law, the students may be unprepared for how to practice, which may have less to do with the law and more to do with finding and working with the relevant documents.
Any technology which can support junior lawyers in learning their tasks would be an advantage. In addition, any technology which could encode a senior lawyer’s knowledge would be useful to share throughout the firm and to preserve that knowledge where the lawyer is unavailable.
Some Sample Problems and Tools
An instance of such a tool might apply to contracts. PLC and firms have catalogues of preformatted draft documents, each of which may have variants developed over time. This may be seen as a contract base. A junior lawyer may be asked to find among this contract base a contract which is either an exact match for the current circumstances or close enough so that with some modifications it would suit. This can be viewed as an instance of case-based reasoning, where the ‘factors’ are the particulars of the contracts and the current contractual setting. So, not only must there be some way to match similarity and difference among the documents, but there ought also to be some systematic way to manage the modifications.
To address this, three technologies could be used. Contracts could be annotated with the factors, to which we then apply case-based reasoning. Alternatively, contracts could be linked to an ontology, so that the properties of and relationships among the documents are made explicit; researchers could then search for the relevant documents using the ontology. Along with this, a contract modification tracking system, such as one conforming to the MetaLex standard, could be developed.
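The factor-based matching idea can be sketched simply: represent each contract in the base by its set of factors and rank the base by overlap with the factors of the current situation. This is a minimal illustration under invented contract names and factors, not PLC's actual document model.

```python
# Sketch of factor-based matching over a contract base: each contract
# is a set of factors; rank by Jaccard similarity to the factors of
# the current situation. Contract names and factors are hypothetical.

def rank_contracts(contract_base, current_factors):
    def score(factors):
        shared = len(factors & current_factors)
        total = len(factors | current_factors)
        return shared / total if total else 0.0  # Jaccard similarity
    return sorted(contract_base.items(),
                  key=lambda kv: score(kv[1]), reverse=True)

base = {
    "supply_v1": {"fixed_term", "uk_law", "confidentiality"},
    "supply_v2": {"fixed_term", "uk_law", "liability_cap",
                  "confidentiality"},
    "licence_v1": {"perpetual", "us_law"},
}
need = {"fixed_term", "uk_law", "liability_cap", "confidentiality"}
best, _ = rank_contracts(base, need)[0]
print(best)  # supply_v2
```

A fuller case-based reasoning treatment would also weight factors by whether they favour one side or the other, rather than treating all overlaps equally.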
Due Diligence
Another problem relates to due diligence. Law firms are up against constraints in terms of time and money in satisfying the requirements of due diligence. Firms are increasingly responsible for showing due diligence in a wider range of areas. This means that more lawyers must be hired and more billable hours accrued. However, the companies that hire the law firms are reluctant to pay more for due diligence. Consequently, firms have a motivation to find ways to make due diligence more efficient. Moreover, it is not a task that junior lawyers can easily undertake without extensive training. Natural language expert systems might provide a useful technology here.
Policy Consultations
We also had a discussion about policy consultations. PLC helped form and serves as secretariat for the General Counsel 100 Group, which comprises senior legal officers drawn from FTSE 100 companies. The group is a forum for businesses to give input on policy consultations and to share best practices in law, risk management, compliance, and other common interests (see the various public papers on the link). In my EU Framework 7 proposal on argumentation, we explicitly referred to policy consultation as a key area in which to develop and apply the tool. Broadly speaking, we had a systematic plan to develop a tool which takes as input statements in natural language, then translates them into a logical formalism. Claims pro and con on a particular issue are systematically structured into an ‘argument’ network in order to ‘prove’ outcomes given premises as well as to provide sets of consistent statements for and against a claim. Other argument mapping technologies might be useful here as well.
We also talked about the development of ontologies and whether they can be automatically extracted from textual sources. This is an area where there is a lot of current interest and some significant progress.
Moving Ahead
Finally, we also touched on how to move ahead. A brainstorming and road-mapping exercise could be a very valuable experience. The exercise would include not only company representatives, but also clients served by PLC. Parties on ‘both sides of the fence’ could discover more about what they know, want, and imagine could be done. In addition, Jeremy suggested that I might be engaged to present some of the ‘main points’ about Semantic Web technologies and the law to some of PLC’s editors and clients.
It was an enjoyable and spirited discussion, which I hope we will find the opportunity in the near future to continue.

Research on Argumentation at the Leibniz Center for Law in Amsterdam

I have a 3 month research job at the Leibniz Center for Law, University of Amsterdam starting February 1 and working with Tom van Engers. This is part of the IMPACT project:

IMPACT is an international project, partially funded by the European Commission under the 7th framework programme. It will conduct original research to develop and integrate formal, computational models of policy and arguments about policy, to facilitate deliberations about policy at a conceptual, language-independent level. To support the analysis of policy proposals in an inclusive way which respects the interests of all stakeholders, research on tools for reconstructing arguments from data resources distributed throughout the Internet will be conducted. The key problem is translation from these sources in natural language to formal argumentation structures, which will be input for automatic reasoning.

My role will be to set up a Ph.D. research project concerning the key problem. This is based on an unsuccessful larger research proposal that I made with Tom. I’ll be organising the database, the literature, some of the software, and outlining the approach the student would take. I’ll make notes on the progress as it happens.
I’m looking forward to living for a while in Amsterdam, working with Tom and my other colleagues at the center — Joost Breuker, Rinke Hoekstra, Emile de Maat. The Netherlands also has a very lively Department of Argumentation Theory. As an added bonus, my colleagues from Linguistics, Susan Rothstein and Fred Landman, are in Amsterdam on sabbatical. Will be a very interesting and fun period.

Annotating Rules in Legislation

Over the last couple of months, I have had discussions about text mining and annotating rules in legislation with several people (John Sheridan of The Office of Public Sector Information, Richard Goodwin of The Stationery Office, and John Cyriac of Compliance Track). While nothing yet concrete has resulted from these discussions, it is clearly a “hot topic”.
In the course of these discussions, I prepared a short outline of the issues and approaches, which I present below. Comments, suggestions, and collaborations are welcome.
Vision, context, and objectives
One of the main visions of artificial intelligence and law has been to develop a legislative processing tool. Such a tool has several related objectives:

      [1.] To guide the drafter to write well-formed legal rules in natural language.
      [2.] To automatically parse and semantically represent the rules.
      [3.] To automatically identify and annotate the rules so that they can be extracted from a corpus of legislation for web-based applications.
      [4.] To enable inference, modeling, and consistency testing with respect to the rules.
      [5.] To reason with respect to domain knowledge (an ontology).
      [6.] To serve the rules on the web so that users can use natural language to input information and receive determinations.

While no such tool exists, there has been steady progress on understanding the problems and developing working software solutions. In early work (see The British nationality act as a logic program (1986)), an act was manually translated into a program, allowing one to draw inferences given ground facts. Haley is a software and service company which provides a framework that partially addresses [1.], [2.], [4.], and [6.] (see Policy Automation). Some research addresses aspects of [3.] (see LKIF-Core Ontology). Finally, there are XML annotation schemas for legislation (and related input support) such as The Crown XML Schema for Legislation and Akoma Ntoso, both of which require manual input. Despite these advances, there is much progress yet to be made. In particular, no results fulfill [3.].
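The early "act as a logic program" idea can be sketched in miniature: a legislative rule is encoded as data, and naive forward chaining derives conclusions from ground facts. The rule and facts below are simplified hypothetical stand-ins, not a faithful rendering of the British Nationality Act.

```python
# Sketch of the "act as a logic program" idea: one legislative rule
# encoded as (conclusion, [conditions]), with naive forward chaining
# over ground facts. Rule, predicates, and names are invented.

# all conditions and the conclusion apply to the same subject "?x"
rules = [
    (("citizen", "?x"),
     [("born_in_uk", "?x"), ("parent_is_citizen", "?x")]),
]
facts = {("born_in_uk", "alice"), ("parent_is_citizen", "alice"),
         ("born_in_uk", "bob")}

def forward_chain(rules, facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for concl, conds in rules:
            subjects = {s for (_, s) in facts}
            for s in subjects:
                # fire the rule if every condition holds of subject s
                if all((p, s) in facts for (p, _) in conds):
                    new = (concl[0], s)
                    if new not in facts:
                        facts.add(new)
                        changed = True
    return facts

derived = forward_chain(rules, facts)
print(("citizen", "alice") in derived,
      ("citizen", "bob") in derived)  # True False
```

Real legislative rules bring complications this sketch ignores, notably exceptions and defeasibility, which is why the extensions listed below matter.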
In consideration of [3.], the primary objective of this proposal is to use the General Architecture for Text Engineering (GATE) framework in order to automatically identify and annotate legislative rules from a corpus. The annotation should support web-based applications and be consistent with semantic web markups for rules, e.g. RuleML. A subsidiary objective is to define an authoring template which can be used within existing authoring applications to manually annotate legislative rules.
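To give a feel for the annotation step: in GATE one would write JAPE grammar rules over linguistic annotations, but the basic idea can be sketched with regular expressions that spot deontic markers ("shall", "must", "may not") and tag the containing sentences as candidate legislative rules. The example text is invented, and a real pipeline would be far more discriminating.

```python
# Plain-Python stand-in for a GATE/JAPE rule-spotting pass: tag each
# sentence containing a deontic marker as a candidate legislative rule.
# The example text is invented; real legislation needs richer patterns.
import re

DEONTIC = re.compile(r"\b(shall|must|may not)\b", re.IGNORECASE)

def annotate_rules(text):
    # crude sentence split on ". " boundaries, keeping the full stop
    sentences = re.split(r"(?<=\.)\s+", text.strip())
    return [{"sentence": s, "marker": m.group(1).lower()}
            for s in sentences
            if (m := DEONTIC.search(s))]

text = ("The Secretary of State shall issue a certificate. "
        "This section applies to England. "
        "An applicant must provide two referees.")
for ann in annotate_rules(text):
    print(ann["marker"], "->", ann["sentence"])
```

On the example, the first and third sentences are tagged (markers "shall" and "must") while the purely declaratory second sentence is not; distinguishing genuine obligations from definitions and applicability provisions is exactly where the manual analysis phase below earns its keep.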
Attaining these objectives would:

  • Support automated creation, maintenance, and distribution of rule books for compliance.
  • Contribute to the development of a legislative processing tool.
  • Make legislative rules accessible for web-based applications. For example, given other annotations, one could identify rules that apply with respect to particular individuals in an organisation along with relevant dates, locations, etc.
  • Enable further processing of the rules such as removing formatting, parsing the content of the rules, and representing them semantically.
  • Allow an inference engine to be applied over the formalised rule base.
  • Make legislation more transparent and communicable among interested parties such as government departments, EU governments, and citizenry.

To attain the objectives, we propose the following phases:

  • Create a relatively small sample corpus to scope the study.
  • Manually identify the forms of legislative rules within the corpus.
  • Develop or adapt an annotation scheme for rules.
  • Apply the analysis tools of GATE and annotate the rules.
  • Validate that GATE annotates the rules as intended.
  • Apply the annotation system to a larger corpus of documents.

For each phase, we would produce a summary of results, noting where difficulties are encountered and ways they might be addressed.
Extending the work
The work can be extended in a variety of ways:

  • Apply the GATE rules to a larger corpus with more variety of rule forms.
  • Process the rules for semantic representation and inference.
  • Take into consideration defeasibility and exceptions.
  • Develop semantic web applications for the rules.
