Later this summer, I’ll be participating in the summer school Managing Legal Resources in the Semantic Web, September 7 to 12 in San Domenico di Fiesole (Florence, Italy). This program will focus on several aspects of legal document management:
- Drafting methods, to improve the language and the structure of legislative texts
- Legal XML standards, to improve the accessibility and interoperability of legal resources
- Legal ontologies, to capture legal metadata and legal semantics
- Formal representation of legal contents, to support legal reasoning and argumentation
- Workflow models, to cope with the lifecycle of legal documentation
While I’m already familiar with several of these topics, I’m using this opportunity to fill the gaps in my knowledge of these key areas.
I was invited to participate in an NSF-sponsored workshop, Automated Content Analysis and Law, held August 3 and 4 at NSF headquarters in Arlington, VA, and organised by Georg Vanberg (UNC).
There are two sessions planned. The first session will focus on identifying the theoretical and substantive puzzles in legal and judicial scholarship that might benefit from automated content analysis, as well as what data and measurements are required. The second session will focus on the state of the art in automated content analysis and natural language processing, exploring how far current technology can address the issues raised in the first session and what further developments might be needed.
There is an interesting mix of people, with a strong emphasis on legal scholarship bearing on the US Supreme Court and opinion mining. I had an email exchange with Georg, the workshop organiser, about this, and we agreed that attention ought to turn from the Supreme Court to lower levels of the legal system. I also suggested that participants consider some of the following points, which bear on the motives and objectives of these lines of research: who is being served, and how would the data or conclusions be used?
Questions for Discussion
- What sorts of artifacts and technologies (if any) will emerge from the research?
- How does the research relate to the Semantic Web?
- What public service does the research provide or support?
- How does this research relate to:
  - Textual legal case-based reasoning
  - Legislative XML markup
  - Other research communities, e.g. ICAIL and JURIX
Workshop Participants
- Scott Barclay (NSF)
- Cliff Carrubba (Emory)
- Skyler Cranmer (UNC)
- Barry Friedman (NYU)
- Susan Haire (NSF)
- Lillian Lee (Cornell)
- Jimmy Lin (Maryland)
- Stefanie Lindquist (Texas)
- Will Lowe (Nottingham)
- Andrew Martin (Wash U)
- Wendy Martinek (NSF)
- Kevin McGuire (UNC)
- Wayne McIntosh (Maryland)
- Burt Monroe (Penn State)
- Kevin Quinn (Harvard)
- Jonathan Slapin (Trinity College)
- Jeff Staton (Emory)
- Georg Vanberg (UNC)
- Adam Wyner (University College London)
Next week I’m attending a week-long summer school on General Architecture for Text Engineering (GATE). GATE is an open-source, extensible toolkit for text mining, which has been used in a variety of areas. After having worked with people who had their “hands on” the tools, I decided it would suit me better to be able to work with the material myself. I’ve been looking forward to this summer school for some time and am excited at the prospect of applying GATE tools to a database of legal cases as well as developing an ontology.
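To give a flavour of the kind of rule-based annotation GATE supports through its gazetteer and JAPE components, here is a minimal Python sketch. GATE itself is a Java toolkit, so this is not its API; the gazetteer entries and sample sentence are invented for illustration only.

```python
import re

# Hypothetical gazetteer: surface forms mapped to annotation types,
# loosely mimicking what GATE's gazetteer component does (this is an
# illustration, not GATE's actual API or data format).
GAZETTEER = {
    "plaintiff": "LegalRole",
    "defendant": "LegalRole",
    "Supreme Court": "Court",
    "Court of Appeals": "Court",
}

def annotate(text):
    """Return (start, end, surface form, type) tuples for gazetteer hits."""
    annotations = []
    for term, ann_type in GAZETTEER.items():
        for m in re.finditer(re.escape(term), text, re.IGNORECASE):
            annotations.append((m.start(), m.end(), m.group(), ann_type))
    return sorted(annotations)

sample = "The plaintiff appealed to the Supreme Court."
for start, end, surface, ann_type in annotate(sample):
    print(f"{ann_type}: '{surface}' at {start}-{end}")
```

A real GATE pipeline would add tokenisation, part-of-speech tagging, and pattern rules over annotations, but the core idea of standoff annotations over character spans is the same.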
I just published an article, The Language of the Law on the Web, in Legal Technology. The article outlines XML for legal applications.
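To illustrate the general idea of marking up legislation in XML, here is a sketch using Python's standard library. The element names below are simplified inventions for illustration and do not follow any real legal XML standard such as Akoma Ntoso.

```python
import xml.etree.ElementTree as ET

# A toy legislative fragment; the vocabulary (act, section, heading,
# text) is invented for illustration, not a real legal XML standard.
FRAGMENT = """
<act title="Example Act 2009">
  <section num="1">
    <heading>Definitions</heading>
    <text>In this Act, a document means any recorded information.</text>
  </section>
  <section num="2">
    <heading>Scope</heading>
    <text>This Act applies to electronic records.</text>
  </section>
</act>
"""

root = ET.fromstring(FRAGMENT)
print(root.get("title"))
for section in root.findall("section"):
    print(section.get("num"), section.find("heading").text)
```

Once legislation is structured this way, sections, headings, and cross-references become machine-addressable, which is what makes the accessibility and interoperability goals of legal XML standards feasible.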