Center for Creation, Content and Technology (CCCT) Seminar Friday 24 May 2013

16.00-17.00 (followed by drinks), Science Park 904, room C1.112
Speakers: Karina van Dalen-Oskam and Piek Vossen.

Under the CCCT umbrella, researchers from the humanities, the social and behavioral sciences, and the natural sciences collaborate in a multidisciplinary setting on information-rich research topics. CCCT is organizing a bi-monthly seminar in which one of the three faculties hosts speakers from the other two to report on research activities that are of shared interest.

Speaker: Karina van Dalen-Oskam (Huygens Institute for the History of the Netherlands / UvA)

Title: The Analysis of Literary Texts: Mixed Methods

Until recently, a large part of digital humanities research on literary texts dealt with authorship attribution. Now that more and more literary scholars have started using computational methods, attention is shifting from authorship to the stylistic analysis of literary works. Existing methods for authorship attribution can be used for stylistic analysis, but sometimes they need to be adapted and usually they have to be supplemented with other methods and tools. Moreover, certain kinds of authorship attribution tools are not useful at all for finding out in which ways authors, texts, genres, or texts from certain time periods differ from each other. The talk will deal with these topics, taking as an example the research project The Riddle of Literary Quality (http://literaryquality.huygens.knaw.nl/) from the Computational Humanities Program of the Royal Netherlands Academy of Arts and Sciences (KNAW, eHumanities Group). An important part of this project (reflecting one of its mixed methods) is a large online survey gathering readers' opinions on modern fiction at http://www.hetnationalelezersonderzoek.nl/, which all attendees are kindly invited to visit beforehand.
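To give a concrete impression of the kind of authorship-attribution method the abstract refers to, the sketch below computes Burrows' Delta, a standard stylometric distance based on the frequencies of the most frequent words. The method itself is well established, but the mini-corpus, the word list size, and all function names here are illustrative assumptions and have no connection to the Riddle of Literary Quality project.

```python
# Illustrative sketch of Burrows' Delta, a standard stylometric distance
# used in authorship attribution; all texts and names below are made up.
from collections import Counter
import statistics

def word_frequencies(text, vocabulary):
    """Relative frequencies of the chosen most-frequent words in one text."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = len(tokens) or 1
    return [counts[w] / total for w in vocabulary]

def burrows_delta(test_profile, candidate_profile, means, stdevs):
    """Mean absolute difference of z-scored word frequencies."""
    z_test = [(f - m) / s for f, m, s in zip(test_profile, means, stdevs)]
    z_cand = [(f - m) / s for f, m, s in zip(candidate_profile, means, stdevs)]
    return sum(abs(a - b) for a, b in zip(z_test, z_cand)) / len(z_test)

# Hypothetical mini-corpus: two known authors and one anonymous text.
corpus = {
    "author_A": "the cat sat on the mat and the dog sat too",
    "author_B": "a storm was coming and a silence fell over a town",
}
anonymous = "the rain fell on the roof and the night was long"

# Use the corpus-wide most frequent words as the stylistic feature set.
all_tokens = " ".join(corpus.values()).lower().split()
vocabulary = [w for w, _ in Counter(all_tokens).most_common(5)]

profiles = {name: word_frequencies(t, vocabulary) for name, t in corpus.items()}
means = [statistics.mean(col) for col in zip(*profiles.values())]
stdevs = [statistics.stdev(col) or 1e-9 for col in zip(*profiles.values())]

test_profile = word_frequencies(anonymous, vocabulary)
distances = {name: burrows_delta(test_profile, p, means, stdevs)
             for name, p in profiles.items()}
print(min(distances, key=distances.get))  # closest candidate author by Delta
```

The same distance can just as well be computed between genres or time periods instead of authors, which is one way such attribution tools get repurposed for broader stylistic analysis.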

Speaker: Piek Vossen (Computational Lexicology, Faculty of Arts, VU)

Title: Do big data hide or reveal stories? Processing large streams of news in the NewsReader project.

Abstract: The FP7 project NewsReader develops technology to process vast amounts of daily news streams to determine who did what, when, and where. The results are stored in a Knowledge Store that accumulates sequences of events, adding all new information to whatever has been stored in the past. Over time, this Knowledge Store captures complex histories, such as the financial crisis as it has been told in the news. Since the system also captures the provenance of the information, it is possible to present different versions of such histories according to the different sources.
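As a rough illustration of what accumulating "who did what, when, and where" with provenance could look like, the sketch below merges repeated mentions of the same event and records which source reported it. The data model, class names, and example events are simplifying assumptions for this announcement, not the actual NewsReader Knowledge Store design.

```python
# Minimal sketch of accumulating event records with provenance; the schema
# and merge logic are illustrative assumptions, not the NewsReader system.
from dataclasses import dataclass, field

@dataclass
class Event:
    actor: str         # who
    action: str        # did what
    time: str          # when (ISO date for simplicity)
    place: str         # where
    sources: set = field(default_factory=set)  # provenance: which articles said so

class KnowledgeStore:
    """Accumulates events over time; repeated mentions add provenance."""
    def __init__(self):
        self._events = {}

    def add(self, actor, action, time, place, source):
        key = (actor, action, time, place)
        event = self._events.setdefault(key, Event(actor, action, time, place))
        event.sources.add(source)

    def history(self, actor):
        """All stored events involving an actor, oldest first."""
        return sorted((e for e in self._events.values() if e.actor == actor),
                      key=lambda e: e.time)

# Hypothetical news mentions: two papers report the same event, one adds another.
store = KnowledgeStore()
store.add("BankCorp", "acquired SmallBank", "2008-09-15", "Amsterdam", "paper_A")
store.add("BankCorp", "acquired SmallBank", "2008-09-15", "Amsterdam", "paper_B")
store.add("BankCorp", "received state aid", "2008-10-03", "The Hague", "paper_A")

for event in store.history("BankCorp"):
    print(event.time, event.action, "according to", sorted(event.sources))
```

Because each event keeps the set of sources that reported it, the same stored history can be replayed per source, which is the basis for showing different versions of a story.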

Date and Time: Friday 24 May 2013, 16.00-17.00 (followed by drinks)

Location
Science Park 904, room C1.112
1098 XH Amsterdam

Karina van Dalen-Oskam is research leader of the department of Textual Scholarship & Literary Studies at the Huygens Institute for the History of the Netherlands and professor of Computational Literary Studies at the University of Amsterdam. Her research focuses on the analysis of literary style, making use of digital corpora and computational methods. More information and a list of her publications can be found at http://www.huygens.knaw.nl/en/vandalen/

Piek Vossen has a B.A. and a Ph.D. from the University of Amsterdam in the field of computational lexicology and Natural Language Processing. He started his career in 1986 at the University of Amsterdam. In 1999, he joined Sail-Labs as a senior researcher and manager, where he worked for almost two years on the development of tools for exploiting multilingual wordnets in information retrieval and classification. In 2001, he became the CTO of Irion Technologies (Delft), where he was responsible for the development of advanced cross-lingual information systems based on language technology. Since April 2007, he has been a professor at the Faculty of Arts of the Vrije Universiteit Amsterdam.

During the 1990s, Vossen coordinated the EuroWordNet project, which resulted in a unique cross-lingual wordnet database for eight European languages. He is also the co-founder and co-president of the Global WordNet Association, in which this model has been expanded to many more languages in the world. More recently, he has led projects that develop technology in the areas of knowledge acquisition, text mining, and information systems that exploit the wordnet models. Typically, these projects aim at processing natural language texts in different languages using a shared semantic model, resulting in compatible interpretations of text across these languages. In the most recent project, NewsReader, this approach is used to record long-term histories of events in news articles.
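A key idea behind EuroWordNet is that language-specific wordnets are linked through a shared Inter-Lingual-Index, so synsets in different languages can be treated as expressions of the same concept. The sketch below shows a heavily simplified version of that idea; the index entries, synset identifiers, and lookup function are invented for illustration and do not reflect the real EuroWordNet data.

```python
# Simplified illustration of linking language-specific synsets through a
# shared Inter-Lingual-Index (ILI); all identifiers here are invented.
ili = {
    "ili-00001": {"gloss": "domesticated feline"},
    "ili-00002": {"gloss": "financial institution"},
}

# Per-language wordnets map their own synsets to ILI concepts (hypothetical data).
wordnets = {
    "en": {"cat.n.01": "ili-00001", "bank.n.01": "ili-00002"},
    "nl": {"kat.n.01": "ili-00001", "bank.n.02": "ili-00002"},
    "es": {"gato.n.01": "ili-00001", "banco.n.01": "ili-00002"},
}

def translations(language, synset_id):
    """Find synsets in other languages that share the same ILI concept."""
    concept = wordnets[language][synset_id]
    return {lang: [s for s, c in synsets.items() if c == concept]
            for lang, synsets in wordnets.items() if lang != language}

# An English "cat" and a Dutch "kat" resolve to the same shared concept,
# which is what makes interpretations of text comparable across languages.
print(translations("en", "cat.n.01"))
```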