Papers at ITBAM 2012, ePart 2012, and EKAW 2012

Three recent papers at various conferences: one at the 3rd International Conference on Information Technology in Bio- and Medical Informatics (ITBAM 2012) in Vienna, Austria; another at the 4th International Conference on eParticipation (ePart 2012) in Kristiansand, Norway; and a final paper at the 18th International Conference on Knowledge Engineering and Knowledge Management (EKAW 2012) in Galway, Ireland.
Argumentation to represent and reason over biological systems
Adam Wyner, Luke Riley, Robert Hoehndorf, and Samuel Croset.
Abstract
In systems biology, networks represent components of biological systems and their interactions. It is a challenge to efficiently represent, integrate and analyse the wealth of information that is now being created in biology, where issues concerning consistency arise. As well, the information offers novel methods to explain and explore biological phenomena. To represent and reason with inconsistency as well as provide explanation, we represent a fragment of a biological system and its interactions in terms of a computational model of argument and argumentation schemes. Process pathways are represented in terms of an argumentation scheme, then abstracted into a computational model for evaluation, yielding sets of ‘consistent’ arguments that represent compatible biological processes. From the arguments, we can extract the corresponding processes. We show how the analysis supports explanation and systematic exploration in a biology network.
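The computational model referred to here is, broadly, an abstract argumentation framework: arguments attack one another, and 'consistent' sets of arguments are computed from the attack relation. Purely as a rough illustration of that general idea (with hypothetical argument names, not the paper's biological model), the following Python sketch enumerates the admissible sets of a tiny framework:

from itertools import chain, combinations

# Hypothetical arguments standing in for biological-process claims (illustration only).
arguments = ["pathway_A_active", "pathway_B_active", "inhibitor_present"]

# Attack relation: attacker -> arguments it attacks (illustration only).
attacks = {
    "inhibitor_present": {"pathway_A_active"},
    "pathway_A_active": {"pathway_B_active"},
}

def conflict_free(s):
    # No member of the set attacks another member.
    return not any(attacks.get(a, set()) & s for a in s)

def admissible(s):
    # Conflict-free, and every attacker of a member is counter-attacked by the set.
    if not conflict_free(s):
        return False
    for a in s:
        attackers = {b for b, targets in attacks.items() if a in targets}
        if not all(any(b in attacks.get(d, set()) for d in s) for b in attackers):
            return False
    return True

# Enumerate all subsets and keep the admissible ones ('compatible' processes).
subsets = chain.from_iterable(combinations(arguments, r) for r in range(len(arguments) + 1))
for s in map(set, subsets):
    if admissible(s):
        print(sorted(s))

Each admissible set then corresponds to a collection of mutually compatible processes from which an explanation can be read off.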
Bibtex
@INPROCEEDINGS{WynerEtAlITBAM2012,
author = {Adam Wyner and Luke Riley and Robert Hoehndorf and Samuel Croset},
title = {Argumentation to Represent and Reason over Biological Systems},
booktitle = {Proceedings of the 3rd International Conference on Information Technology in Bio- and Medical Informatics ({ITBAM} 2012)},
year = {2012},
note = {To appear},
}
Model based critique of policy proposals
Adam Wyner, Katie Atkinson, and Trevor Bench-Capon
Abstract
Citizens may engage with policy issues both to critique official justifications, and to make their own proposals and receive reasons why they are not favoured. Either direction of use can be supported by argumentation schemes based on formal models, which can be used to verify and generate arguments, assimilate objections, etc. Previously we have explored the citizen critiquing a justification using an argumentation scheme based on Action-based Alternating Transition Systems. We now present a system which uses the same model to critique proposals from citizens. A prototype has been implemented in Prolog and we illustrate the ideas with code fragments and a running example.
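For a flavour of the kind of model-based critique described here (the actual prototype is in Prolog; this Python fragment is only a sketch with invented state, action, and value names), consider a toy transition model over which a citizen's proposal can be checked:

# Illustrative sketch only, not the Prolog prototype: a toy state-transition
# model used to critique a proposed action.
transitions = {
    # (current_state, action) -> resulting_state
    ("congested", "introduce_toll"): "less_congested",
    ("congested", "do_nothing"): "congested",
}

value_effects = {
    # action -> {value: +1 promoted, -1 demoted}
    "introduce_toll": {"air_quality": +1, "equality": -1},
    "do_nothing": {},
}

def critique(state, action, goal_state, cherished_value):
    """Return textual critiques of a citizen's proposal, in the spirit of
    critical questions posed over an action-based transition model."""
    problems = []
    result = transitions.get((state, action))
    if result is None:
        problems.append(f"'{action}' is not available in state '{state}'.")
    elif result != goal_state:
        problems.append(f"'{action}' leads to '{result}', not the goal '{goal_state}'.")
    if value_effects.get(action, {}).get(cherished_value, 0) < 0:
        problems.append(f"'{action}' demotes the value '{cherished_value}'.")
    return problems or ["No objections found to this proposal."]

print(critique("congested", "introduce_toll", "less_congested", "equality"))
# -> ["'introduce_toll' demotes the value 'equality'."]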
Bibtex
@INPROCEEDINGS{WynerABCEPart2012,
author = {Adam Wyner and Katie Atkinson and Trevor Bench-Capon},
title = {Model Based Critique of Policy Proposals},
booktitle = {Proceedings of the 4th International Conference on e{P}articipation (e{P}art 2012)},
year = {2012},
note = {To appear},
}
Dimensions of argumentation in social media
Jodi Schneider, Brian Davis, and Adam Wyner
Abstract
Mining social media for opinions is important to governments and businesses. Current approaches focus on sentiment and opinion detection. Yet, people also justify their views, giving arguments. Understanding arguments in social media would yield richer knowledge about the views of individuals and collectives. Extracting arguments from social media is difficult. Messages appear to lack indicators for argument, document structure, or inter-document relationships. In social media, lexical variety, alternative spellings, multiple languages, and alternative punctuation are common. Social media also encompasses numerous genres. These aspects can confound the extraction of well-formed knowledge bases of argument. We chart out the various aspects in order to isolate them for further analysis and processing.
Bibtex
@INPROCEEDINGS{SchneiderEtAlEKAW2012,
author = {Jodi Schneider and Brian Davis and Adam Wyner},
title = {Dimensions of Argumentation in Social Media},
booktitle = {Proceedings of the 18th International Conference on Knowledge Engineering and Knowledge Management ({EKAW} 2012)},
year = {2012},
note = {To appear},
}
By Adam Wyner

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Papers at COMMA 2012

At the 4th International Conference on Computational Models of Argument (COMMA 2012) in Vienna, Austria, I have a short paper in the main conference and a paper in the demo session.
Semi-automated argumentative analysis of online product reviews
Adam Wyner, Jodi Schneider, Katie Atkinson, and Trevor Bench-Capon
Abstract
Argumentation is key to understanding and evaluating many texts. The arguments in the texts must be identified; using current tools, this requires substantial work from human analysts. With a rule-based tool for semi-automatic text analysis support, we facilitate argument identification. The tool highlights potential argumentative sections of a text according to terms indicative of arguments (e.g. suppose or therefore) and domain terminology (e.g. camera names and properties). The information can be used by an analyst to instantiate argumentation schemes and build arguments for and against a proposal. The resulting argumentation framework can then be passed to argument evaluation tools.
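The tool itself is built from rule-based text-processing components; purely to illustrate the highlighting idea (with made-up indicator and domain term lists, not the tool's actual resources), a minimal Python sketch might look like this:

import re

# Hypothetical indicator and domain term lists (illustration only).
ARGUMENT_INDICATORS = ["suppose", "therefore", "because", "however", "but"]
DOMAIN_TERMS = ["camera", "lens", "zoom", "battery"]

def highlight(text):
    """Mark candidate argumentative spans and domain terminology in a review."""
    pattern = re.compile(
        r"\b(" + "|".join(ARGUMENT_INDICATORS + DOMAIN_TERMS) + r")\b", re.IGNORECASE
    )
    return pattern.sub(lambda m: f"[{m.group(0).upper()}]", text)

review = "I like the camera because the zoom is excellent; however, the battery is weak."
print(highlight(review))

An analyst would then inspect the highlighted passages and use them to instantiate argumentation schemes, as the abstract describes.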
Bibtex
@INPROCEEDINGS{WynerEtAlCOMMA2012a,
author = {Adam Wyner and Jodi Schneider and Katie Atkinson and Trevor Bench-Capon},
title = {Semi-Automated Argumentative Analysis of Online Product Reviews},
booktitle = {Proceedings of the 4th International Conference on Computational Models of Argument ({COMMA} 2012)},
year = {2012},
note = {To appear},
}
Critiquing justifications for action using a semantic model: Demonstration
Adam Wyner, Katie Atkinson, and Trevor Bench-Capon
Abstract
This is a two-page demonstration paper and has no abstract.
Bibtex
@INPROCEEDINGS{WynerABCDemoCOMMA2012,
author = {Adam Wyner and Katie Atkinson and Trevor Bench-Capon},
title = {Critiquing Justifications for Action Using a Semantic Model: Demonstration},
booktitle = {Proceedings of the 4th International Conference on Computational Models of Argument ({COMMA} 2012)},
year = {2012},
pages = {1-2},
note = {To appear},
}
By Adam Wyner

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Papers at AAMAS 2012 and ArgMAS 2012

At the 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2012) in Valencia, Spain, I had a short paper in the main conference and a paper in the Workshop on Argumentation in Multi-Agent Systems (ArgMAS 2012).
Opinion gathering using a multi-agent systems approach to policy selection
Katie Atkinson, Trevor Bench-Capon, and Adam Wyner
Abstract
An important aspect of e-democracy is consultation, in which policy proposals are presented and feedback from citizens is received and assimilated so that these proposals can be refined and made more acceptable to the citizens affected by them. We present an innovative web-based application that uses recent developments in multi-agent systems (MAS) to provide intelligent support for opinion gathering, eliciting a structured critique within a highly usable system.
Bibtex
@INPROCEEDINGS{AtkinsonBCW-AAMAS2012,
author = {Katie Atkinson and Trevor Bench-Capon and Adam Wyner},
title = {Opinion Gathering Using a Multi-Agent Systems Approach to Policy Selection},
booktitle = {Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems ({AAMAS} 2012)},
year = {2012},
editor = {Vincent Conitzer and Michael Winikoff and Wiebe van der Hoek and Lin Padgham},
pages = {1171-1172}
}
A functional perspective on argumentation schemes
Adam Wyner, Katie Atkinson, and Trevor Bench-Capon
Abstract
In multi-agent systems (MAS), abstract argumentation and argumentation schemes are increasingly important. To be useful for MAS, argumentation schemes require a computational approach so that agents can use the components of a scheme to present arguments and counterarguments. This paper proposes a syntactic analysis that integrates argumentation schemes with abstract argumentation. Schemes can be analysed into the roles that propositions play in each scheme and the structure of the associated propositions, yielding a greater understanding of the schemes, a uniform method of analysis, and a systematic means to relate one scheme to another. This analysis of the schemes helps to clarify what is needed to provide denotations of the terms and predicates in a semantic model.
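As a loose illustration of analysing a scheme into the roles its propositions play (the premise labels below follow the familiar practical-reasoning scheme, but the data structure itself is an assumption of this sketch, not the paper's formal analysis):

from dataclasses import dataclass, field

# A sketch of an argumentation scheme analysed into premise roles (assumed structure).
@dataclass
class Scheme:
    name: str
    premises: dict = field(default_factory=dict)   # role -> proposition
    conclusion: str = ""

practical_reasoning = Scheme(
    name="Practical Reasoning",
    premises={
        "circumstances": "In the current circumstances R",
        "action": "we should perform action A",
        "consequences": "which will result in new circumstances S",
        "goal": "which will realise goal G",
        "value": "which will promote value V",
    },
    conclusion="Therefore action A should be performed",
)

# A counterargument can then target a specific role, e.g. disputing the
# 'consequences' premise rather than the scheme as a whole.
print(practical_reasoning.premises["consequences"])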
Bibtex
@INPROCEEDINGS{WynerABCArgMAS2012,
author = {Adam Wyner and Katie Atkinson and Trevor Bench-Capon},
title = {A Functional Perspective on Argumentation Schemes},
booktitle = {Proceedings of the 9th International Workshop on Argumentation in Multi-Agent Systems ({ArgMAS} 2012)},
year = {2012},
editor = {Peter McBurney and Simon Parsons and Iyad Rahwan},
pages = {203-222},
}
By Adam Wyner

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Paper at CMN 2012

At the Language Resources and Evaluation Conference (LREC 2012) in Istanbul, Turkey, I participated in the Workshop on Computational Models of Narrative (CMN 2012).
Arguments as Narratives
Adam Wyner
Abstract
Aspects of narrative coherence are proposed as a means to investigate and identify arguments from text. Computational analysis of argumentation largely focuses on representations of arguments that are either abstract or are constructed from a logical (e.g. propositional or first order) knowledge base. Argumentation schemes have been advanced for stereotypical patterns of defeasible reasoning. While we have well-formedness conditions for arguments in a first order language, namely the patterns for inference, the conditions for argumentation schemes are an open question, and the identification of arguments 'in the wild' is problematic. We do not understand the 'source' of rules from which inference follows; formally, well-formed 'arguments' can be expressed even with random sentences; moreover, argument indicators are sparse, so cannot be relied upon to identify arguments. As automated extraction of arguments from text increasingly finds important applications, it is pressing to isolate and integrate indicators of argument. To specify argument well-formedness conditions and identify arguments from unstructured text, we suggest using aspects of narrative coherence.
Slides for Arguments as Narratives
Bibtex
@INPROCEEDINGS{WynerCMN2012,
author = {Adam Wyner},
title = {Arguments as Narratives},
booktitle = {Proceedings of the Third Workshop on Computational Models of Narrative ({CMN} 2012)},
year = {2012},
editor = {Mark Finlayson},
pages = {178-180},
}
By Adam Wyner

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Article in Artificial Intelligence and Law Journal for the 25th Anniversary of the International Conference on Artificial Intelligence and Law

A forthcoming special issue of the Journal of Artificial Intelligence and Law will consist of a long, multi-author paper celebrating 25 years of the International Conference on Artificial Intelligence and Law.
A History of AI and Law in 50 Papers: 25 years of the International Conference on Artificial Intelligence and the Law
Bench-Capon et al.
Journal of Artificial Intelligence and Law
To appear.
Each of the authors who contributed to the special issue wrote about a paper from the conference over this 25-year period.
For this special issue, I wrote three sections.

The long paper itself serves as an excellent overview of the field over these many years.
By Adam Wyner

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

The Summer School on Law and Logic, Florence, Italy

I will be participating as a teaching assistant in the Summer School on Law and Logic in Florence, Italy, July 16-20, 2012. The school is jointly hosted by the European University Institute and the Harvard Law School.
From the description:

The Summer School on Law and Logic is the first course ever to provide a comprehensive introduction to the wide variety of uses of logic in the law. Our aim at this Summer School is to provide law students, graduate law students, and legal professionals with a knowledge of the methods of formal logic and the ability to apply those methods to the analysis and critical evaluation of legal arguments and sources of law (including statutes, cases, regulations, constitutional provisions).
The Summer School includes the basics of propositional and predicate deductive logic, as well as the use of logic for representing deontic and Hohfeldian modalities, analogical reasoning, and inference to the best explanation. It also addresses some aspects of non-deductive reasoning in law, such as defeasible reasoning, including argumentation schemes and inductive reasoning.
We believe that the kind of background in formal logic we offer in this course can be a very powerful tool for use in legal theory, for developing doctrinal legal research, for working in legal informatics (the application of computer programs to the analysis of law), and, more generally, for the practice of law.

This is an innovative school about core issues and approaches in Artificial Intelligence and Law. For me, it will be an opportunity to connect with familiar colleagues, work with new ones, and find out what lawyers think about formal logic. In addition, some of the legal materials that we will be analysing will be new to me, so that will be instructive.
I hope that this school is the beginning of an integration of AI into law school education.
By Adam Wyner

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

21st Century Law Practice and Law Tech Camp Presentations

As part of the 21st Century Law Practice Summer London Law Program, I had the opportunity to present a class on Topics in Natural Language Processing of Legal Texts. My thanks to Dan Katz for organising this and to the class for their interest.
Dan, co-organiser Renee Knake at Michigan State University, and their colleagues at the University of Westminster are up to good things in law and technology – well worth watching.
To cap off the Law Program, the summer program organised a Law Tech Camp of short, TED-style presentations. It is an excellent program of talks from members of the legal industry, practicing lawyers, and academics. I gave a talk on Crowdsourcing Legal Text Annotation, which is also discussed in a previous post. The talks were videotaped and will be made available online (TBA).
By Adam Wyner

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Papers at the Workshop on Semantic Processing of Legal Texts (SPLeT 2012)

Two short papers appear in the proceedings of the LREC 2012 Workshop on Semantic Processing of Legal Texts (SPLeT 2012). The papers are available at the links.
Problems and Prospects in the Automatic Semantic Analysis of Legal Texts – A Position Paper
Adam Wyner
Abstract
Legislation and regulations are expressed in natural language. Machine-readable forms of the texts may be represented as linked documents, semantically tagged text, or translation to a logic. The paper considers the latter form, which is key to testing consistency of laws, drawing inferences, and providing explanations relative to input. To translate laws to a machine-readable logic, sentences must be parsed and semantically translated. Manual translation is time and labour intensive, usually involving narrowly scoping the rules. While automated translation systems have made significant progress, problems remain. The paper outlines systems to automatically translate legislative clauses to a semantic representation, highlighting key problems and proposing some tasks to address them.
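To make the target of such a translation concrete, here is a deliberately naive Python sketch (hypothetical clause and field names; real systems require full parsing and semantic analysis) that maps a simple 'if ... then ...' legislative sentence to a rule-shaped representation:

import re

# Toy example only: a simplified clause shape and an invented rule structure.
clause = "If a person operates a vehicle, then the person must hold a licence."

match = re.match(r"If (?P<antecedent>.+), then (?P<consequent>.+)\.", clause)
if match:
    rule = {
        "type": "obligation" if "must" in match.group("consequent") else "rule",
        "if": match.group("antecedent"),
        "then": match.group("consequent"),
    }
    print(rule)
# This only illustrates the rule-shaped output discussed in the abstract; it is
# not a description of any existing translation system.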
Semantic Annotations for Legal Text Processing using GATE Teamware
Adam Wyner and Wim Peters
Abstract
Large corpora of legal texts are increasingly available in the public domain. To make them amenable for automated text processing, various sorts of annotations must be added. We consider semantic annotations bearing on the content of the texts – legal rules, case factors, and case decision elements. Adding annotations and developing gold standard corpora (to verify rule-based or machine learning algorithms) is costly in terms of time, expertise, and money. To make the processes efficient, we propose several instances of GATE’s Teamware to support annotation tasks for legal rules, case factors, and case decision elements. We engage annotation volunteers (law school students and legal professionals). The reports on the tasks are to be presented at the workshop.
By Adam Wyner

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Crowdsourced Legal Case Annotation

A study in online, collaborative legal informatics
Adam Wyner, University of Aberdeen
Wim Peters, University of Sheffield
Daniel Katz, Michigan State University
— Introduction —
This is an academic research study on legal informatics (information processing of the law). The study uses an online, collaborative tool to crowdsource the annotation of legal cases. The task is similar to legal professionals’ annotation of cases. The result will be a public corpus of searchable, richly annotated legal cases that can be further processed, analysed, or queried for conceptual annotations.
Adam and Wim are computer scientists who are interested in language, law, and the Internet. Dan is an academic lawyer also interested in law and the Internet.
We are inviting people to participate in this collaborative task. This is a beta version of the exercise, and we welcome comments on how to improve it. Please read through this blog post, look at the video, and get in contact.
— Highlighting, Annotations, and Legal Case Briefs —
In reading, analysing, and preparing a summary of a legal case, law students and legal professionals annotate cases by highlighting and colour coding elements of the case to make for easy identification. Different elements are annotated: the holding, the parties, the facts, and so on. A sample image of annotations is:
Annotations for Case Citations, Legal Roles, Jurisdiction, Hearing Date

— Problem —
To analyse a legal case, legal professionals annotate the case into its constituent parts. The analysis is summarised in a case brief. However, the current approach is very limited:

  • Analysis is time-consuming and knowledge-intensive.
  • Case briefs may miss relevant information.
  • Case analyses and briefs are privately held.
  • Case analyses are in paper form, so not searchable over the Internet.
  • Current search tools work over text strings, not conceptual information. We want to search for concepts, such as the holdings by a particular judge with respect to causes of action against a particular defendant. With annotated legal cases, we can enable conceptual search.
  • There is no capacity to systematically compare, contrast, and evaluate the work by different annotators. Consequently, the annotation task itself is not used as an opportunity to gain greater expertise in case analysis.
— Solution: Crowdsource Annotation —
We use an online legal case annotation tool and share the results to support:

  • Online search in legal cases for case details and concepts.
  • Semantic web applications and information extraction.
  • Crowdsourcing of a legal case corpus.
  • Training and learning for legal case analysis.

The results of the study would be useful to:

  • Law school students learning case analysis.
  • Legal professionals in identifying relevant cases.
  • Researchers of legal informatics.
  • Law faculty in training students to analyse legal cases.

Broadly speaking, a corpus of analysed cases makes case law a public resource that democratises legal knowledge.
— Annotations: types and features —
To crowdsource conceptual annotations of legal cases, we use the General Architecture for Text Engineering (GATE) Teamware tool. Teamware is a web-based application that provides an annotator with a text to annotate and a list of annotations to use. The task is a web-based version of what legal analysts of cases already do.
We use familiar annotations for legal cases, divided (for ease of reference) into types and features. For example, we have a type Legal Roles and various features to select among, e.g. defendant. We are counting on you to have learned and used these annotations in the course of your legal study and practice.
You do not need to memorise the types and features as they will appear in the GATE Teamware tool. It may be handy to keep this webpage open so you can consult it, or you could also print out the page.
The annotations we use are:
Argument For Party – arguments for a particular party, using the most general notion:

  • for Appellee, for Appellant, for Defendant, for Plaintiff.

Facts – legal and procedural facts:

  • Cause of Action – the specific legal theory upon which the plaintiff brings the suit.
  • Defenses raised by Defendant – the defendant's defenses against the cause of action.
  • Legal Facts – the legally relevant facts of the case that are used in arguing the issues.
  • Remedy requested by Plaintiff – what the plaintiff asks the court to grant.

Indexes – various indicative information:

  • Case Citation – the citation of the particular case being annotated.
  • Court Address – the address of the court.
  • Hearing Date – the date of the hearing.
  • Judge Name – the names of the judges, annotated one at a time.
  • Jurisdiction – the legal jurisdiction of the case.

Issues – the issues before the court:

  • Procedural Issue – what the appellant claims that the lower court did wrong.
  • Substantive Issue – the point of law that is in dispute (legal facts have their own annotation).

Legal Roles – the roles of the parties in the case:

  • Appellee, Appellee's Lawyer, Appellant, Appellant's Lawyer, Defendant, Defendant's Lawyer, Plaintiff, Plaintiff's Lawyer.
  • General – buyer/seller, employer/employee, landlord/tenant, etc.
  • Other – relevant information not covered by the other annotations.

Procedural History – the disposition of the case with respect to the lower court(s):

  • Appeal Information – who appealed and why they appealed.
  • Damages – the damages awarded by the lower court.
  • Lower Court Decision – the lower court's decision.

Reasoning Outcomes – various parts of the legal decision:

  • Concurring Opinion.
  • Dicta – commentary about the judgement and holding, but not part of the rationale.
  • Dissenting Opinion.
  • Holding – the rule of law or legal principle that was applied in making the judgement. You can think about this as the new ground that the court is covering in this case. What legal rule(s) is the court developing or clarifying? The case can have more than one holding if there is more than one legal rule being considered. Note that a holding from a cited precedent is to be considered part of the rationale.
  • Judgement – given the holding and the corresponding rationale for the holding, the judgement is the court's final decision about the rights of the parties, the court's response to a party's request for relief, and its bearing on prior decisions (e.g. affirmed, reversed, remanded).
  • Rationale – the court's analysis of the issues and the reasons for the holding.
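For readers who prefer a compact view, the scheme above can be written down as a simple machine-readable structure. This is only an illustrative Python rendering of the type/feature lists, not the actual Teamware configuration:

# Annotation scheme (types -> features), transcribed from the list above.
ANNOTATION_SCHEME = {
    "Argument For Party": ["for Appellee", "for Appellant", "for Defendant", "for Plaintiff"],
    "Facts": ["Cause of Action", "Defenses raised by Defendant", "Legal Facts",
              "Remedy requested by Plaintiff"],
    "Indexes": ["Case Citation", "Court Address", "Hearing Date", "Judge Name", "Jurisdiction"],
    "Issues": ["Procedural Issue", "Substantive Issue"],
    "Legal Roles": ["Appellee", "Appellee's Lawyer", "Appellant", "Appellant's Lawyer",
                    "Defendant", "Defendant's Lawyer", "Plaintiff", "Plaintiff's Lawyer",
                    "General", "Other"],
    "Procedural History": ["Appeal Information", "Damages", "Lower Court Decision"],
    "Reasoning Outcomes": ["Concurring Opinion", "Dicta", "Dissenting Opinion",
                           "Holding", "Judgement", "Rationale"],
}

# For example, list the features an annotator chooses among for a given type:
print(ANNOTATION_SCHEME["Reasoning Outcomes"])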
— Strategic Phases —
From previous experience and following discussions, we believe it is best if the annotations are grouped together and done in three phases. This allows the annotator to do simpler tasks first and to keep in mind a subset of the relevant annotations.

  • Phase I: Indexes and Legal Roles
  • Phase II: Procedural History and Reasoning Outcomes
  • Phase III: Facts and Issues

For the time being, we are not attending to annotations of Argument For Party and Other.
— Collaborate —
Take a look at the instructional video below. If you wish to collaborate on the task, send an email to Adam Wyner – adam@wyner.info
In the email, please include brief information for:

  • Your name
  • Your professional affiliation, e.g. institution, company, firm…
  • Your role where you work
  • Your background as a legal professional

This will help us know who we are collaborating with; from the pool of candidates, we will select participants for this early study.
You will be sent a user name and password so you can log in to Teamware.
We respect your privacy. We are only interested in data in the aggregate and will not reveal any personal data to third parties.
— Next —
We have an instructional video that you can open in a new tab or window and that uses QuickTime. It lasts about 14 minutes. This will give you a good idea of what you will be doing. The presenter is Adam Wyner. You can see this here:

Or follow the link on YouTube — Crowdsourcing Legal Case Annotation Instructional Video. Please view it in large (OK definition) or full-screen (grainy definition) mode, which may need to be reloaded in YouTube.
There are additional points about using the tool in the section below on questions, problems, and observations.
After reading this blog post, viewing the instructional video, and receiving your username and password, you can log in to begin annotating at — GATE Teamware
— Survey —
When you are done with your task, please answer the questions on the survey to give us feedback on your experience using the annotation tool. The survey is available below. You can scroll down and answer the questions. Don't forget to hit the "Done" button to submit your responses, which will be very useful in helping us understand your experience and thoughts about using the tool:


— What Then? —
We analyse the annotations from several annotators, comparing and contrasting them (interannotator agreement). This will show us similarities and differences in the understanding of the annotations and cases. As well, the results will help us develop a Gold Standard Corpus of legal cases, which consists of case annotations that annotators agree on. A Gold Standard is essential for information extraction and the development of advanced processing. We will publicly report the analysis of the exercise and make the annotated cases publicly available for re-use.
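One standard way to quantify inter-annotator agreement on a labelling task is Cohen's kappa. The sketch below (two annotators, hypothetical labels, not our actual analysis pipeline) shows the calculation:

from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labelling the same items.
    Assumes chance agreement is strictly less than 1."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    expected = sum(freq_a[l] * freq_b[l] for l in set(labels_a) | set(labels_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Two annotators labelling five sentences (hypothetical data):
a = ["Holding", "Rationale", "Legal Facts", "Rationale", "Dicta"]
b = ["Holding", "Rationale", "Rationale", "Rationale", "Dicta"]
print(round(cohens_kappa(a, b), 3))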
Once we have a better sense of how this study goes, we plan to roll out a larger version with more cases. And this is only the start…
— Questions, Problems, and Observations —
Thanks to participants for letting us know about their problems and sending their observations.
How easy is it to learn to use the tool? Take a look at the video to get a sense of this. With a little bit of practice, it is rather straightforward.
What if I don't agree with some of your annotations or features? Write a comment or send us an email, and we will consider your comment. Try to be as clear and specific as you can. We are not lawyers, and we are dealing with a global community with local variation, so it is likely there will be some disagreement and variation.
Can I get the results of my annotations? Our approach is to make individual contributions to the whole. So, you will be able to access annotated cases after the exercise. There will be further information on how to work with the material.
How many cases must I do? You can do one or you can do as many as we have (not many in the beta project).
How much time will it take? About as long as it would take you to do a similar highlighting and annotation task with paper and markers.
What if I have a problem with using the tool or if the tool is buggy? Be patient and try to work with the tool. Sometimes things go wrong. Write a comment or send us an email, and we will try to advise. Note – we are only consumers of GATE Teamware, so we are not responsible for the system.
How thoroughly should I annotate the cases? The more cases that are annotated fully and accurately, the better. Apply the same diligence as you would to thoroughly and carefully analyse cases with pen and paper. As you will be the beneficiary of the work of others, so too should you work to benefit them.
Do we track good annotators and bad annotators? We are interested in data in the aggregate, and are only interested in interannotator agreement and disagreement. This information will help us better understand differences in how the cases are understood and annotated. But, we can see how much time each person takes with each annotation task and measure how they perform against other annotators or a gold standard. If we have bad annotators, we will see this in the results; we would contact the annotator and see how best to improve the situation. As we noted above, we are not sharing information with third parties.
I cannot log in with the username and password. Please let me know if you have this problem, and I will look into it.
I can log in, but I cannot get the Java Web Start file to start. This is a tough problem to address over the internet. Some people have no problem, but some people do. Please let me know if you have this problem. Do check that you have followed the instructions (on the blog and in the movie).
I can log in and start the annotation tool, but I cannot get the task. Please let me know, and I will look into it.
The text is too small and single spaced. At the moment, there is nothing we can do about this. We'll try to keep this in mind for the future.
The highlighting tool is not easy to use. When I want to move from one annotated text to some new text, the tool doesn't move to the new text. This is a bit of a problem with the tool, whose functionality is not entirely reliable. Try to play around with it to see what works for you. One strategy that I have found improves performance is to annotate something; the annotation type then appears among the list of annotations in the upper right-hand corner window. When the problem occurs, it is sometimes a good idea to toggle the annotations in that upper right-hand corner window off and on. This seems to clear the system a bit so that one can go on to the next annotation. Give this a try. If you have problems, please let me know.
I found it very challenging. It is important for us to know this, to gauge how much text and what variety of annotations to assign. We might reduce the number of annotations, breaking the whole set up into parts of the overall task.
Decision date is more important than hearing date, or at least should be provided in addition to hearing date. Probably this will be added in future iterations.
A participant, e.g. "Cone", was originally a defendant, but was dismissed before this appeal. I wonder if he should still be coded as "Defendant" or if he should be coded as another role-holder. Good observation. I'll have to consult with some lawyers further about this point.
There are sentences where the court introduced a fact and also appeared to reason using it. Is it right to code the whole sentence both as a legal fact and as a rationale? Yes, this is the way to handle this. Double annotations are always possible.
A similar problem occurred where the court offered a fact but also put a gloss on it as to its legal significance. Double annotations are always possible.
Some of the names of the categories were confusing or unclear. For example, using "Holding" as the name for the legal rule or principle was confusing ("Legal Rule" might be more intuitive). This is another point on which we will need to consult further with other lawyers. There may also be some variation in terminology.
There is sometimes unclarity about role-players. A case involved a plaintiff, who was an appellee but also a cross-appellant, and a defendant, who was thus an appellant and cross-appellee. These can be coded so that one is plaintiff and appellee and the other is defendant and appellant. But they could both have been coded as appellee and appellant, given the existence of the cross appeal. Double (or more) annotating is fine.
Procedural History/Damages might be better framed as Procedural History/Remedies, as courts often provide injunctive relief or, as in this case, an accounting, as a remedy. This is another point on which we will need to consult further with lawyers about terminology.
What if a case does not state any legal rules? Can implicit legal rules be annotated? For example, where novelty and non-obviousness are a sine qua non of a valid patent, one would not have known to mark some of the sentences as rationales. This isn't a problem. If something is not in the case, then it is not annotated. We are not (yet) concerned with implicit information. But if you know the implicit information, then annotate it.
How can I automatically search for and annotate the same string with the same annotation? In the instructional video, we wanted to keep the material short and to the point, so there are aspects of the annotation tool we did not cover. However, it is tedious to manually search for the same string and annotate it with the same annotation. Teamware's Annotation Editor has a tool to support automatic search and annotation. To see how to do this, we have the video here:

How should I annotate holdings which may appear as holdings in cited cases and as part of the procedural history, as holdings in the current case, or as part of the rationale in the current case? This is an interesting and subtle point for us, and we will have to have a full consultation with lawyers to decide. But, for the time being, there can be no harm in multiple annotations, which we can then look at and work with later.
— Paper —
If you are interested in some of the ideas behind this project, please see our paper:
Semantic Annotations for Legal Text Processing using GATE Teamware
The paper will appear in May 2012 in the Proceedings of the LREC 2012 Workshop on Semantic Processing of Legal Texts (SPLeT 2012), Istanbul, Turkey. The exercise here is a version of the exercise proposed in the paper.
A shortlink to this blog page is:
http://wyner.info/LanguageLogicLawSoftware/?p=1315
— Thanks for collaborating! —
— If you have any questions, please submit a comment! —

— Update Note —
Updated July 29, 2013 to reflect Dan Katz's amended definition of Holding. Updated in various ways July 12, 2013. The original blog post of July 28, 2012 has been updated to note the participation of Dan Katz and his students at Michigan State University.
— Honour Roll —
For the very first study, we would like to thank the following individuals, who gave of their time and intelligence to carry out their tasks.

  • First
  • Second

By Adam Wyner

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Modelling Policy-making – a Call for Papers

A Special Issue of the Journal of Artificial Intelligence and Law on
Modelling Policy-making
Special Issue Editors
Adam Wyner, University of Liverpool, adam@wyner.info
Neil Benn, University of Leeds, n.j.l.benn@leeds.ac.uk
Paper Submission Deadline: May 28, 2012
We invite submission of papers on modelling policy-making. Below we outline the intended audience, context, the topics of interest, and submission details.
Context
We live in an age where citizens are beginning to demand greater transparency and accountability of their political leaders. Furthermore, those who govern and decide on policy are beginning to realise the need for new governance models that emphasise deliberative democracy and promote widespread public participation in all phases of the policy-making cycle: 1) agenda setting, 2) policy analysis, 3) lawmaking, 4) implementation, and 5) monitoring. As governments must become more efficient and effective with the resources available, modern information and communications technology (ICT) is being drawn on to address problems of information processing in these phases. One of the key problems is policy content analysis and modelling, particularly the gap between, on the one hand, policy proposals and formulations that are expressed in quantitative and narrative forms and, on the other hand, formal models that can be used to systematically represent and reason with the information contained in the proposals and formulations.
Special Issue Theme
The editors invite submissions of original research about the application of ICT and Computer Science to the first three phases of the policy cycle – agenda setting, policy analysis, and lawmaking. The research should seek to address the gap noted above. The journal volume focusses particularly on using and integrating a range of subcomponents – information extraction, text processing, representation, modelling, simulation, reasoning, and argument – to provide policy-making tools to the public and public administrators. While submissions about tool development and practice are welcome, the editors particularly encourage submission of articles that address formal, conceptual, and/or computational issues. Some specific topics within the theme are:

• information extraction from natural language text
• policy ontologies
• formal logical representations of policies
• transformations from policy language to executable policy rules
• argumentation about policy proposals
• web-based tools that support participatory policy-making
• tools for increasing public understanding of arguments behind policy decisions
• visualising policies and arguments about policies
• computational models of policies and arguments about policies
• integration tools
• multi-agent policy simulations

Submission Details:
Authors are invited to submit an original, previously unpublished research paper of up to 30 pages pertaining to the special issue theme. The paper should follow the journal's instructions for authors and be submitted online. See the dropdown tab under the section FOR AUTHORS AND EDITORS.
Instructions for Authors:
https://www.springer.com/computer/ai/journal/10506
Submit Online:
https://www.springer.com/computer/ai/journal/10506
Each submitted paper will be carefully peer-reviewed based on originality, significance, technical soundness, clarity of exposition, and relevance to the journal.
The shortlink to this webpage is:
http://wyner.info/LanguageLogicLawSoftware/?p=1258
A PDF version of this CFP:
CFP – Modelling Policy-making
Contact the special issue editors with any questions.
By Adam Wyner

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.