Argumentation Schemes for Reasoning about Factors with Dimensions
Katie Atkinson, Trevor Bench-Capon, Henry Prakken, and Adam Wyner
Abstract
In previous work we presented argumentation schemes to capture the CATO and value-based theory construction approaches to reasoning with legal cases with factors. We formalised the schemes with ASPIC+, a formal representation of instantiated argumentation. In ASPIC+ the premises of a scheme may either be a factor provided in a knowledge base or established using a further argumentation scheme. Thus far we have taken the factors associated with cases to be given in the knowledge base. While this is adequate for expressing factor-based reasoning, we can further investigate the justifications for the relationship between factors and facts or evidence. In this paper we examine how dimensions, as used in the HYPO system, can provide grounds on which to argue about which factors should apply to a case. By making this element of the reasoning explicit and subject to argument, we advance our overall account of reasoning with legal cases and make it more robust.
Bibtex
@INPROCEEDINGS{ABCPWJURIX2013,
author = {Katie Atkinson and Trevor Bench-Capon and Henry Prakken and Adam Wyner},
title = {Argumentation Schemes for Reasoning about Factors with Dimensions},
booktitle = {Proceedings of 26th International Conference on Legal Knowledge and Information Systems (JURIX 2013)},
year = {2013},
pages = {??-??},
address = {Amsterdam},
publisher = {IOS Press}
}
A Case Study on Legal Case Annotation
Adam Wyner, Wim Peters, and Daniel Katz
Abstract
The paper reports the outcomes of a study with law school students who annotated a corpus of legal cases for a variety of annotation types, e.g. citation indices, legal facts, rationale, judgement, cause of action, and others. A group of annotators uses an online tool, producing an annotated corpus. Differences amongst the annotations are curated, yielding a gold standard corpus of annotated texts. The annotations can be retrieved with complex semantic queries. Such a corpus has many uses in both legal education and legal research.
Bibtex
@INPROCEEDINGS{WynerPetersKatzJURIX2013,
author = {Adam Wyner and Wim Peters and Daniel Katz},
title = {A Case Study on Legal Case Annotation},
booktitle = {Proceedings of 26th International Conference on Legal Knowledge and Information Systems (JURIX 2013)},
year = {2013},
pages = {??-??},
address = {Amsterdam},
publisher = {IOS Press}
}
FORTHCOMING AUXILIARY MATERIALS FOR THE ANNOTATION STUDY – CASES CITED AND TOOL OUTPUT EXAMPLES.
By Adam Wyner
Legal resources such as legislation, public notices, case law, and other legally relevant documents are increasingly freely available on the internet. They are almost entirely presented in natural language and in text. Legal professionals, researchers, and students need to extract and represent information from such resources to support compliance monitoring, analyse cases for case-based reasoning, and extract information in the discovery phase of a trial (e-discovery), amongst a range of possible uses. To support such tasks, powerful text analytic tools are available. The tutorial presents an in-depth demonstration of one toolkit, the General Architecture for Text Engineering (GATE), with examples, together with several briefer demonstrations of other tools.
Goals
Participants in the tutorial should come away with some theoretical sense of what textual information extraction is about. They will also see some practical examples of how to work with a corpus of materials, develop an information extraction system using GATE and the other tools, and share their results with the research community. Participants will be provided with information on where to find additional materials and learn more.
Intended Audience
The intended audience includes legal researchers, legal professionals, law school students, and political scientists who are new to text processing as well as experienced AI and Law researchers who have used NLP, but wish to get a quick overview of using GATE.
Covered Topics
Motivations to annotate, extract, and represent legal textual information.
Uses and domains of textual information extraction. Sample materials from legislation, case decisions, gazettes, e-discovery sources, among others.
Motivations to use an open source tool for open source development of textual information extraction tools and materials.
The relationship to the semantic web, linked documents, and data visualisation.
Linguistic/textual problems that must be addressed.
Alternative approaches (statistical, knowledge-light, machine learning) and a rationale for a particular bottom-up, knowledge-heavy approach in GATE.
Outline of natural language processing modules and tasks.
Introduction to GATE – loading and running simple applications, inspecting the results, refining the search results (see the sketch following this list).
Development of fragments of a GATE system – lists, rules, and examination of results.
Discussion of more complex constructions and issues such as fact pattern identification, which is essential for case-based reasoning, named entity recognition, and structures of documents.
Introduction to ontologies.
Linking textual information extraction to ontologies.
Introduction to related tools and approaches: C&C/Boxer (parser and semantic interpreter), Attempto Controlled English, scraperwiki, among others.
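To give a concrete flavour of the "loading and running simple applications" step, the following is a minimal sketch in Java of running the stock ANNIE pipeline over a short text with GATE Embedded. It is an illustration under assumptions rather than tutorial material: it presumes a classic GATE installation (roughly the GATE 6/7 era) whose plugins directory ships ANNIE_with_defaults.gapp, the example sentence is invented, and paths and class locations may differ in other GATE versions.

    import java.io.File;

    import gate.Corpus;
    import gate.CorpusController;
    import gate.Document;
    import gate.Factory;
    import gate.Gate;
    import gate.util.persistence.PersistenceManager;

    public class AnnieSketch {
      public static void main(String[] args) throws Exception {
        // Initialise GATE; assumes GATE_HOME (or the gate.home system property) is set.
        Gate.init();

        // Load the ready-made ANNIE application shipped with classic GATE installations.
        File annieGapp = new File(new File(Gate.getPluginsHome(), "ANNIE"),
                                  "ANNIE_with_defaults.gapp");
        CorpusController annie =
            (CorpusController) PersistenceManager.loadObjectFromFile(annieGapp);

        // Build a one-document corpus from an invented sentence.
        Corpus corpus = Factory.newCorpus("tutorial corpus");
        Document doc = Factory.newDocument(
            "Judge Smith of the Court of Appeals affirmed the decision of the lower court.");
        corpus.add(doc);

        // Run the pipeline and inspect some of the resulting annotations.
        annie.setCorpus(corpus);
        annie.execute();
        System.out.println("Persons: " + doc.getAnnotations().get("Person"));
        System.out.println("Sentences: " + doc.getAnnotations().get("Sentence").size());
      }
    }

The same pattern – load an application, add documents to a corpus, execute, inspect the annotation sets – carries over when the gazetteer lists and rules are your own rather than ANNIE's defaults, which is what the tutorial then develops.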
Dr. Adam Wyner
Lecturer, Department of Computing Science, University of Aberdeen
Aberdeen, Scotland
azwyner at abdn dot ac dot uk
The lecturer has a PhD in Linguistics, a PhD in Computer Science, and a research background in computational linguistics. He has previously given tutorials on this topic at JURIX 2009 and ICAIL 2011, along with an invited talk at RuleML 2012, has published several conference papers on text analytics of legal resources using GATE and C&C/Boxer, and continues to work on text analysis of legal resources.
By Adam Wyner
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
I will be participating as a teaching assistant in the Summer School on Law and Logic in Florence, Italy, July 16-20, 2012. The school is jointly hosted by the European University Institute and the Harvard Law School.
From the description:
The Summer School on Law and Logic is the first course ever to provide a comprehensive introduction to the wide variety of uses of logic in the law. Our aim at this Summer School is to provide law students, graduate law students, and legal professionals with knowledge of the methods of formal logic and the ability to apply those methods to the analysis and critical evaluation of legal arguments and sources of law (including statutes, cases, regulations, and constitutional provisions).
The Summer School includes the basics of propositional and predicate deductive logic, as well as the use of logic for representing deontic and Hohfeldian modalities, analogical reasoning, and inference to the best explanation. It also presents some aspects of non-deductive reasoning in law, such as defeasible reasoning, including argumentation schemes and inductive reasoning.
We believe that the kind of background in formal logic we offer in this course can be a very powerful tool for use in legal theory, for developing doctrinal legal research, for working in legal informatics (the application of computer programs to the analysis of law), and, more generally, for the practice of law.
This is an innovative school about core issues and approaches in Artificial Intelligence and Law. For me, it will be an opportunity to connect with familiar colleagues, work with new ones, and find out what lawyers think about formal logic. In addition, some of the legal materials that we will be analysing will be new to me, so that will be instructive.
I hope that this school is the beginning of an integration of AI into law school education.
By Adam Wyner
Two short papers appear in the proceedings of the LREC 2012 Workshop on SPLeT – Semantic Processing of Legal Texts. The papers are available via the links.
Problems and Prospects in the Automatic Semantic Analysis of Legal Texts – A Position Paper
Adam Wyner
Abstract
Legislation and regulations are expressed in natural language. Machine-readable forms of the texts may be represented as linked documents, semantically tagged text, or translations to a logic. The paper considers the latter form, which is key to testing the consistency of laws, drawing inferences, and providing explanations relative to input. To translate laws to a machine-readable logic, sentences must be parsed and semantically translated. Manual translation is time- and labour-intensive, and usually involves narrowly scoping the rules. While automated translation systems have made significant progress, problems remain. The paper outlines systems that automatically translate legislative clauses to a semantic representation, highlighting key problems and proposing some tasks to address them.
Semantic Annotations for Legal Text Processing using GATE Teamware
Adam Wyner and Wim Peters
Abstract
Large corpora of legal texts are increasingly available in the public domain. To make them amenable to automated text processing, various sorts of annotations must be added. We consider semantic annotations bearing on the content of the texts – legal rules, case factors, and case decision elements. Adding annotations and developing gold standard corpora (to verify rule-based or machine learning algorithms) is demanding in terms of time, expertise, and cost. To make the processes efficient, we propose several instances of GATE’s Teamware to support annotation tasks for legal rules, case factors, and case decision elements. We engage annotation volunteers (law school students and legal professionals). Reports on the tasks will be presented at the workshop.
By Adam Wyner
A study in online, collaborative legal informatics
Adam Wyner, University of Aberdeen; Wim Peters, University of Sheffield; Daniel Katz, Michigan State University
— Introduction —
This is an academic research study on legal informatics (information processing of the law). The study uses an online, collaborative tool to crowdsource the annotation of legal cases. The task is similar to legal professionals’ annotation of cases. The result will be a public corpus of searchable, richly annotated legal cases that can be further processed, analysed, or queried for conceptual annotations.
Adam and Wim are computer scientists who are interested in language, law, and the Internet. Dan is an academic lawyer also interested in law and the Internet.
We are inviting people to participate in this collaborative task. This is a beta version of the exercise, and we welcome comments on how to improve it. Please read through this blog post, look at the video, and get in contact.
— Highlighting, Annotations, and Legal Case Briefs —
In reading, analysing, and preparing a summary of a legal case, law students and legal professionals annotate cases by highlighting and colour coding elements of the case to make for easy identification. Different elements are annotated: the holding, the parties, the facts, and so on. A sample image of annotations is:
— Problem —
To analyse a legal case, legal professionals annotate the case into its constituent parts. The analysis is summarised in a case brief. However, the current approach is very limited:
Analysis is time-consuming and knowledge-intensive.
Case briefs may miss relevant information.
Case analyses and briefs are privately held.
Case analyses are in paper form, so not searchable over the Internet.
Current search tools are for text strings, not conceptual information. We want to search for concepts, such as the holdings by a particular judge or the causes of action raised against a particular defendant. With annotated legal cases, we can enable conceptual search (a small sketch of such a search appears after the list of annotation types below).
There is no capacity to systematically compare, contrast, and evaluate the work by different annotators. Consequently, the annotation task itself is not used as an opportunity to gain greater expertise in case analysis.
— Solution: Crowdsource Annotation —
We use an online legal case annotation tool and share the results to support:
Online search in legal cases for case details and concepts.
Semantic web applications and information extraction.
Crowdsourcing of a legal case corpus.
Training and learning for legal case analysis.
The results of the study would be useful to:
Law school students learning case analysis.
Legal professionals in identifying relevant cases.
Researchers of legal informatics.
Law faculty in training students to analyse legal cases.
Broadly speaking, a corpus of analysed cases makes case law a public resource that democratises legal knowledge.
— Annotations: types and features —
To crowdsource conceptual annotations of legal cases, we use the General Architecture for Text Engineering (GATE) Teamware tool. Teamware is a web-based application that provides an annotator with a text to annotate and a list of annotations to use. The task is a web-based version of what legal analysts of cases already do.
We use familiar annotations for legal cases, divided (for ease of reference) into types and features. For example, we have a type Legal Roles and various features to select among, e.g. defendant. We are counting on you to have learned and used these annotations in the course of your legal study and practice.
You do not need to memorise the types and features as they will appear in the GATE Teamware tool. It may be handy to keep this webpage open so you can consult it or you could also print out the page.
The annotations we use are:
Argument For Party – arguments for a particular party, using the most general notion:
for Appellee, for Appellant, for Defendant, for Plaintiff.
Facts – legal and procedural facts:
Cause of Action – the specific legal theory upon which the plaintiff brings the suit.
Defenses raised by Defendant – the defendant’s defenses against the cause of action.
Legal Facts – the legally relevant facts of the case that are used in arguing the issues.
Remedy requested by Plaintiff – what the plaintiff asks the court to grant.
Indexes – various indicative information:
Case Citation – the citation of the particular case being annotated.
Court Address – the address of the court.
Hearing Date – the date of the hearing.
Judge Name – the names of the judges, annotated one at a time.
Jurisdiction – the legal jurisdiction of the case.
Issues – the issues before the court:
Procedural Issue – what the appellant claims that the lower court did wrong.
Substantive Issue – the point of law that is in dispute (legal facts have their own annotation).
Legal Roles – the role of the parties in the case:
General – buyer/seller, employer/employee, landlord/tenant, etc.
Other – relevant information not covered by the other annotations.
Procedural History – the disposition of the case with respect to the lower court(s):
Appeal Information – who appealed and why they appealed.
Damages – the damages awarded by the lower court.
Lower Court Decision – the lower court’s decision.
Reasoning Outcomes – various parts of the legal decision:
Concurring Opinion.
Dicta – commentary about the judgement and holding, but not part of the rationale.
Dissenting Opinion.
Holding – the rule of law or legal principle that was applied in making the judgement. You can think about this as the new ground that the court is covering in this case. What legal rule(s) is the court developing or clarifying? The case can have more than one holding if there is more than one legal rule being considered. Note that a holding from a cited precedent is to be considered part of the rationale.
Judgement – given the holding and the corresponding rationale, the judgement is the court’s final decision about the rights of the parties, the court’s response to a party’s request for relief, and its bearing on prior decisions (e.g. affirmed, reversed, remanded).
Rationale – the court’s analysis of the issues and the reasons for the holding.
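As a small illustration of the conceptual search mentioned above, the sketch below shows, in Java, the kind of query that typed annotations make possible (e.g. "holdings in cases heard by a particular judge"). It is only a sketch: the Annotation record, case identifiers, and example texts are invented for the illustration and are not Teamware’s internal representation or API.

    import java.util.List;
    import java.util.Set;
    import java.util.stream.Collectors;

    public class ConceptualSearchSketch {
      // Hypothetical, simplified data model: one typed, featured annotation per text span.
      record Annotation(String caseId, String type, String feature, String text) {}

      public static void main(String[] args) {
        List<Annotation> corpus = List.of(
            new Annotation("case-1", "Indexes", "Judge Name", "Smith"),
            new Annotation("case-1", "Reasoning Outcomes", "Holding",
                "A landlord owes a duty of care to invited guests."),
            new Annotation("case-2", "Indexes", "Judge Name", "Jones"),
            new Annotation("case-2", "Reasoning Outcomes", "Holding",
                "Silence does not constitute acceptance of an offer."));

        // Which cases were heard by Judge Smith?
        Set<String> smithCases = corpus.stream()
            .filter(a -> a.type().equals("Indexes")
                      && a.feature().equals("Judge Name")
                      && a.text().contains("Smith"))
            .map(Annotation::caseId)
            .collect(Collectors.toSet());

        // Print the holdings from exactly those cases.
        corpus.stream()
            .filter(a -> smithCases.contains(a.caseId()) && a.feature().equals("Holding"))
            .forEach(a -> System.out.println(a.caseId() + ": " + a.text()));
      }
    }

The point is simply that once spans carry types and features such as Judge Name and Holding, queries can combine them, rather than matching raw text strings.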
— Strategic Phases —
From previous experience and following discussions, we believe it is best if the annotations are grouped together and done in three phases. This allows the annotator to do simpler tasks first and to keep in mind a subset of the relevant annotations.
Phase I: Indexes and Legal Roles
Phase II: Procedural History and Reasoning Outcomes
Phase III: Facts and Issues
For the time being, we are not attending to annotations of Arguments for Party and Other.
— Collaborate —
Take a look at the instructional video below. If you wish to collaborate on the task, send an email to Adam Wyner – adam@wyner.info
In the email, please include brief information for:
Your name
Your professional affiliation, e.g. institution, company, firm…
Your role where you work
Your background as a legal professional
This will help us know who we are collaborating with; from the pool of candidates, we will select participants for this early study.
You will be sent a user name and password so you can log in to Teamware.
We respect your privacy. We are only interested in data in the aggregate and will not reveal any personal data to third parties.
— Next —
We have an instructional video that you can open in a new tab or window and that uses QuickTime. It lasts about 14 minutes. This will give you a good idea of what you will be doing. The presenter is Adam Wyner. You can see this here:
Or follow the link on YouTube — Crowdsourcing Legal Case Annotation Instructional Video. Please view it in large (acceptable definition) or full-screen (grainy definition) mode; the video may need to be reloaded in YouTube.
There are additional points about using the tool in the section below on questions, problems, and observations.
After reading this blog, viewing the instructional video, and receiving your username and password, you can log in to begin annotating at — GATE Teamware —
— Survey —
When you are done with your task, please answer the questions on the survey to give us feedback on your experience using the annotation tool. The survey is available below. You can scroll down and answer the questions. Don’t forget to hit the “Done” button to submit your responses, which will be very useful in helping us understand your experience and thoughts about using the tool.
— What Then? —
We analyse the annotations from several annotators, comparing and contrasting them (interannotator agreement). This will show us similarities and differences in the understanding of the annotations and cases. In addition, the results will help us develop a Gold Standard Corpus of legal cases, which consists of annotations of cases that the annotators agree on. A Gold Standard is essential for information extraction and the development of advanced processing. We will publicly report the analysis of the exercise and make the annotated cases publicly available for re-use.
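For readers curious what interannotator agreement amounts to computationally, here is a minimal sketch in Java of one common measure, Cohen’s kappa, for two annotators who each assign a single label to the same, already aligned text spans. The labels and spans are invented for the example; comparing real case annotations also requires aligning overlapping spans, which this sketch does not attempt.

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class KappaSketch {
      // Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement).
      static double cohensKappa(List<String> a, List<String> b) {
        if (a.size() != b.size()) throw new IllegalArgumentException("unequal lengths");
        int n = a.size();
        Map<String, Integer> countA = new HashMap<>();
        Map<String, Integer> countB = new HashMap<>();
        int agree = 0;
        for (int i = 0; i < n; i++) {
          if (a.get(i).equals(b.get(i))) agree++;
          countA.merge(a.get(i), 1, Integer::sum);
          countB.merge(b.get(i), 1, Integer::sum);
        }
        double po = (double) agree / n;  // observed agreement
        double pe = 0.0;                 // agreement expected by chance
        for (Map.Entry<String, Integer> e : countA.entrySet()) {
          pe += (e.getValue() / (double) n)
              * (countB.getOrDefault(e.getKey(), 0) / (double) n);
        }
        return (po - pe) / (1 - pe);
      }

      public static void main(String[] args) {
        // Hypothetical labels assigned by two annotators to five aligned spans.
        List<String> ann1 = Arrays.asList("Holding", "Rationale", "Legal Facts", "Holding", "Dicta");
        List<String> ann2 = Arrays.asList("Holding", "Rationale", "Rationale", "Holding", "Dicta");
        System.out.printf("kappa = %.2f%n", cohensKappa(ann1, ann2));
      }
    }

A kappa near 1 indicates agreement well above chance, while values near 0 suggest the annotators are effectively labelling at cross purposes; that contrast is exactly the signal used when deciding which annotations can go into a gold standard.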
Once we have a better sense of how this study goes, we plan to roll out a larger version with more cases. And this is only the start….
— Questions, Problems, and Observations —
Thanks to participants for letting us know about their problems and sending their observations.
How easy is it to learn to use the tool? Take a look at the video to get a sense of this. With a little bit of practice, it is rather straightforward.
What if I don’t agree with some of your annotations or features? Write a comment or send us an email, and we will consider your comment. Try to be as clear and specific as you can. We are not lawyers, and we are dealing with a global community with local variation, so it is likely there will be some disagreement and variation.
Can I get the results of my annotations? Our approach is to make individual contributions part of the whole. So, you will be able to access annotated cases after the exercise. There will be further information on how to work with the material.
How many cases must I do? You can do one, or you can do as many as we have (not many in the beta project).
How much time will it take? About as long as it would take you to do a similar highlighting and annotation task with paper and markers.
What if I have a problem with using the tool or if the tool is buggy? Be patient and try to work with the tool. Sometimes things go wrong. Write a comment or send us an email, and we will try to advise. Note – we are only consumers of GATE Teamware, so we are not responsible for the system.
How thoroughly should I annotate the cases? The more cases that are annotated fully and accurately, the better. Apply the same diligence as you would to thoroughly and carefully analyse cases with pen and paper. As you will be the beneficiary of the work of others, so too should you work to benefit them.
Do we track good annotators and bad annotators? We are interested in data in the aggregate, and only in interannotator agreement and disagreement. This information will help us better understand differences in how the cases are understood and annotated. But we can see how much time each person takes with each annotation task and measure how they perform against other annotators or a gold standard. If we have bad annotators, we will see this in the results; we would contact the annotator and see how best to improve the situation. As we noted above, we are not sharing information with third parties.
I cannot log in with the username and password. Please let me know if you have this problem, and I will look into it.
I can log in, but I cannot get the Java Web Start file to start. This is a tough problem to address over the internet. Some people have no problem, but some people do. Please let me know if you have this problem. Do check that you have followed the instructions (on the blog and in the movie).
I can log in and start the annotation tool, but I cannot get the task. Please let me know, and I will look into it.
The text is too small and single spaced. At the moment, there is nothing we can do about this. We’ll try to keep this in mind for the future.
The highlighting tool is not easy to use; when I want to move from one annotated text to some new text, the tool doesn’t move to the new text. This is a bit of a problem with the tool, which is not entirely reliable in this functionality. Try to play around with it to see what works for you. One strategy that I have found improves performance is to annotate something. Then the annotation type appears in the upper right-hand corner window among the list of annotations. When the problem occurs, it is sometimes a good idea to click the annotations in that upper right-hand corner window off and on (toggle them). This seems to clear the system a bit so that one can go on to the next annotation. Give this a try. If you have problems, please let me know.
I found it very challenging. It is important for us to know this, to gauge how much text and what variety of annotations are manageable. We might reduce the number of annotations, breaking the whole set into parts of the overall task.
Decision date is more important than hearing date, or at least should be provided in addition to hearing date. Probably this will be added in future iterations.
A participant, e.g. “Cone”, was originally a defendant, but was dismissed before this appeal. I wonder if he should still be coded as “Defendant” or if he should be coded as some other role-holder. Good observation. I’ll have to consult further with some lawyers about this point.
There are sentences where the court introduced a fact and also appeared to reason using it. Is it right to code the whole sentence both as a legal fact and as a rationale? Yes, this is the way to handle it. Double annotations are always possible.
A similar problem occurred where the court offered a fact but also put a gloss on it as to its legal significance. Double annotations are always possible.
Some of the names of the categories were confusing or unclear. For example, using “Holding” for the name of the legal rule or principle was confusing (“Legal Rule” might be more intuitive). This is another point on which we will need to consult further with other lawyers. There may also be some variation in terminology.
There is sometimes unclarity about role-players. A case involved a plaintiff, who was an appellee but also a cross-appellant, and a defendant, who was thus an appellant and cross-appellee. These can be coded so that one is plaintiff and appellee and the other defendant and appellant. But they could both have been coded as appellee and appellant, given the existence of the cross-appeal. Double (or more) annotating is fine.
Procedural History/Damages might be better framed as Procedural History/Remedies, as courts often provide injunctive relief or, as in this case, an accounting, as a remedy. This is another point on which we will need to consult further with lawyers about terminology.
What if a case does not state any legal rules? Can implicit legal rules be annotated? For example, where novelty and non-obviousness are a sine qua non of a valid patent, one would not have known to mark some of the sentences as rationales. This isn’t a problem. If something is not in the case, then it is not annotated. We are not (yet) concerned with implicit information. But if you know the implicit information, then annotate it.
How can I automatically search for and annotate the same string with the same annotation? In the instructional video, we wanted to keep the material short and to the point, so there are aspects of the annotation tool we did not cover. However, it is tedious to manually search for the same string and annotate it with the same annotation. Teamware’s Annotation Editor has a tool to support automatic search and annotation. To see how to do this, we have the video here:
How should I annotate holdings which may appear as holdings in cited cases and as part of the procedural history, as holdings in the current case, or as part of the rationale in the current case? This is an interesting and subtle point for us, and we will have to have a full consultation with lawyers to decide. But for the time being, there can be no harm in multiple annotations, which we can then look at and work with later.
— Paper —
If you are interested in some of the ideas behind this project, please see our paper: Semantic Annotations for Legal Text Processing using GATE Teamware
The paper will appear in May 2012 in the Proceedings of the LREC Conference Workshop on Semantic Processing of Legal Texts, Istanbul, Turkey. The exercise here is a version of the exercise proposed in the paper.
A shortlink to this blog page is: http://wyner.info/LanguageLogicLawSoftware/?p=1315
— Thanks for collaborating! —
— If you have any questions, please submit a comment! —
— Update Note —
Updated July 29, 2013 to reflect Dan Katz’s amended definitions for Holding. Updated in various ways July 12, 2013. The previous blog post of July 28, 2012 has been updated to note the participation of Dan Katz and his students at Michigan State University.
— Honour Roll —
For the very first study, we would like to thank the following individuals who gave of their time and intelligence to carry out their tasks.
In conjunction with Language Resources and Evaluation Conference 2012 (LREC 2012)
27 May, 2012
Istanbul, Turkey
REVISED SUBMISSION DEADLINE FOR WORKSHOP: 19 February 2012
Context
The legal domain represents a primary candidate for web-based information distribution, exchange, and management, as attested by the numerous e-government, e-justice, and e-democracy initiatives worldwide. The last few years have seen a growing body of research and practice in the field of Artificial Intelligence and Law which addresses a range of topics: automated legal reasoning and argumentation, semantic and cross-language legal information retrieval, document classification, legal drafting, legal knowledge discovery and extraction, as well as the construction of legal ontologies and their application to the legal domain. In this context, it is of paramount importance to use Natural Language Processing techniques and tools that automate and facilitate the process of knowledge extraction from legal texts.
Since 2008, the SPLeT workshops have been a venue where researchers from the Computational Linguistics and Artificial Intelligence and Law communities meet, exchange information, compare perspectives, and share experiences and concerns on the topic of legal knowledge extraction and management, with particular emphasis on the semantic processing of legal texts. Within the Artificial Intelligence and Law community, there have also been a number of dedicated workshops and tutorials specifically focussing on different aspects of semantic processing of legal texts at conferences such as JURIX-2008, ICAIL-2009, ICAIL-2011, as well as in the International Summer School “Managing Legal Resources in the Semantic Web” (2007, 2008, 2009, 2010, 2011).
To continue this momentum and to advance research, a 4th Workshop on “Semantic Processing of Legal Texts” is being organized at the LREC-2012 conference to bring to the attention of the broader LR/HLT (Language Resources/Human Language Technology) community the specific technical challenges posed by the semantic processing of legal texts and also to share with the community the motivations and objectives which make it of interest to researchers in legal informatics. The outcomes of these interactions are expected to advance research and applications and foster interdisciplinary collaboration within the legal domain.
New to this edition of the workshop are two sub-events (described below) to provide common and consistent task definitions, datasets, and evaluation for legal-IE systems along with a forum for the presentation of varying but focused efforts on their development.
The main goals of the workshop and associated events are to provide an overview of the state-of-the-art in legal knowledge extraction and management, to explore new research and development directions and emerging trends, and to exchange information regarding legal language resources and human language technologies and their applications.
Sub-events
Dependency Parsing
The first sub-event will be a shared task specifically focussing on dependency parsing of legal texts: although this is not a domain-specific task, it is a task which creates the prerequisites for advanced IE applications operating on legal texts, which can benefit from reliable preprocessing tools. For this year our aim is to create the prerequisites for more advanced domain-specific tasks (e.g. event extraction) to be organized in future SPLeT editions. We strongly believe that this could be a way to attract the attention of the LR/HLT community to the specific challenges posed by the analysis of this type of text and to get a clearer idea of the current state of the art. The languages dealt with will be Italian and English. A specific Call for Participation for the shared task is available on a dedicated page.
Semantic Annotation
The second sub-event will be an online, manual, collaborative, semantic annotation exercise, the results of which will be presented and discussed at the workshop. The goals of the exercise are: (1) to gain insight into and work towards the creation of a gold standard corpus of legal documents in a cohesive domain; and (2) to test the feasibility of the exercise and to get feedback on its annotation structure and workflow. The corpus to be annotated will be a selection of documents drawn from EU and US legislation, regulation, and case law in a particular domain (e.g. consumer or environmental protection). For this exercise, the language will be English. A specific Call for Participation for this annotation exercise is available on a dedicated page.
Areas of Interest
The workshop will focus on the topics of the automatic extraction of information from legal texts and the structural organisation of the extracted knowledge. Particular emphasis will be given to the crucial role of language resources and human language technologies.
Papers are invited on, but not limited to, the following topics:
Construction, extension, merging, customization of legal language resources, e.g. terminologies, thesauri, ontologies, corpora
Information retrieval and extraction from legal texts
Semantic annotation of legal text
Legal text processing
Multilingual aspects of legal text semantic processing
Legal thesauri mapping
Automatic classification of legal documents
Logical analysis of legal language
Automated parsing and translation of natural language legal arguments into a logical formalism
Dialogue protocols for legal information processing
Author Guidelines:
Submissions are solicited from researchers working on all aspects of semantic processing of legal texts. Authors are invited to submit papers describing original completed work, work in progress, interesting problems, case studies or research trends related to one or more of the topics of interest listed above. The final version of the accepted papers will be published in the Workshop Proceedings.
Short or full papers can be submitted. Short papers are expected to present new ideas or new visions that may influence the direction of future research, yet they may be less mature than full papers. While an exhaustive evaluation of the proposed ideas is not necessary, insight and an in-depth understanding of the issues are expected. Full papers should be more fully developed and evaluated. Short papers will be reviewed in the same way as full papers by the Program Committee and will be published in the Workshop Proceedings.
Full paper submissions should not exceed 10 pages, short papers 6 pages. See the style guidelines and files on the LREC site: Authors’ Kit and Templates
Submit papers to:
Submission for the workshop uses the START submission system at: https://www.softconf.com/lrec2012/LegalTexts2012/
Note that when submitting a paper through the START page, authors will be asked to provide essential information about resources (in a broad sense, i.e. also technologies, standards, evaluation kits, etc.) that have been used for the work described in the paper or are a new result of the research. For further information on this new initiative, please refer to: http://www.lrec-conf.org/lrec2012/?LRE-Map-2012
Publication:
After the workshop, a number of selected, revised, peer-reviewed articles will be published in a Special Issue on Semantic Processing of Legal Texts of the AI and Law Journal (Springer).
Contact Information:
Address any queries regarding the workshop to:
lrec_legalWS@ilc.cnr.it
Program Committee Co-Chairs:
Enrico Francesconi (National Research Council, Italy)
Simonetta Montemagni (National Research Council, Italy)
Wim Peters (University of Sheffield, UK)
Adam Wyner (University of Liverpool, UK) Program Committee (Preliminary):
Kevin Ashley (University of Pittsburgh, USA)
Johan Bos (University of Rome, Italy)
Daniele Bourcier (Humboldt Universitat, Germany)
Pompeu Casanovas (Universitat Autonoma de Barcelona, Spain)
Jack Conrad (Thomson Reuters, USA)
Matthias Grabmair (University of Pittsburgh, USA)
Antonio Lazari (Scuola Superiore S.Anna, Italy)
Leonardo Lesmo (Universita di Torino, Italy)
Marie-Francine Moens (Katholieke Universiteit Leuven, Belgium)
Thorne McCarty (Rutgers University, USA)
Raquel Mochales Palau (Catholic University of Leuven, Belgium)
Paulo Quaresma (Universidade de Evora, Portugal)
Tony Russell-Rose (UXLabs, UK)
Erich Schweighofer (Universitat Wien, Austria)
Rolf Schwitter (Macquarie University, Australia)
Manfred Stede (University of Potsdam, Germany)
Daniela Tiscornia (National Research Council, Italy)
Tom van Engers (University of Amsterdam, Netherlands)
Giulia Venturi (Scuola Superiore S.Anna, Italy)
Vern R. Walker (Hofstra University, USA)
Radboud Winkels (University of Amsterdam, Netherlands)
By Adam Wyner
Martyn Lloyd-Kelly and I have a forthcoming paper on arguing about emotions in legal cases where the ‘heat of passion’ plays a role. It appears in the proceedings of the Workshop on User Models for Motivational Systems: the affective and the rational routes to persuasion.
Arguing about Emotions
Martyn Lloyd-Kelly and Adam Wyner
Abstract
Emotions are commonly thought to be beyond rational analysis. In this paper, we develop the position that emotions can be the objects of argumentation and used as terms in emotional argumentation schemes. Thus, we can argue about whether or not, according to normative standards and available evidence, it is plausible that an individual had a particular emotion. This is particularly salient in legal cases, where decisions can depend on explicit arguments about emotional states.
By Adam Wyner