Research Meetings

Academic Year: 2011-2012

7 July – 10 September
Summer break

28 June
Symposium Computational Humanities projects

13.30-13.45         Welcome and introduction to the CH Symposium
13.45-14.30         Tunes and Tales
14.30-15.15         Riddle of Literary Quality
15.15-15.30         Tea
15.30-16.15         CEDAR
16.15-16.45         Elite Network Shifts
16.45-17.30         Panel discussion


Tunes and Tales

Tunes & Tales: Modeling oral transmission
Oral transmission is a fascinating aspect of the broader phenomenon of cultural transmission. In oral culture, artifacts such as songs and stories are passed on to next generations without written or technical reproduction media, just by voice and ear. Oral transmission implies alteration and variation to a considerable extent. Yet after several generations of oral transmission the artifacts are still ‘the same’ (in oral terms), or at least recognizable variants (from a literate point of view). How can this be? Are there convergent forces? How can we model the process of oral transmission?

We hypothesize that oral transmission of tales and tunes happens through the replication of sequences of motifs. In this view, motifs constitute the primary vehicles of cultural heritage in the oral transmission of both kinds of artifact. A prerequisite for building such a motif-based model of oral transmission is to formalize tunes and tales as sequences of motifs. In this presentation we will discuss the first steps taken towards these formalizations.
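A minimal sketch of the hypothesized mechanism, assuming tales have already been formalized as sequences of motif labels (the labels and probabilities below are invented for illustration):

```python
import random

def transmit(motifs, p_sub=0.1, p_del=0.05, inventory=None):
    """One generation of oral transmission: each motif is usually
    replicated faithfully, but may be dropped or substituted."""
    inventory = inventory or list(motifs)
    out = []
    for m in motifs:
        r = random.random()
        if r < p_del:
            continue                              # motif forgotten
        elif r < p_del + p_sub:
            out.append(random.choice(inventory))  # motif replaced
        else:
            out.append(m)                         # faithful copy
    return out

def overlap(original, variant):
    """Fraction of the original motifs still present: a crude proxy
    for the variants still counting as 'the same' tale."""
    return len(set(original) & set(variant)) / len(set(original))

tale = ["departure", "interdiction", "violation",
        "struggle", "victory", "return"]
variant = tale
for generation in range(10):
    variant = transmit(variant)
print(variant, overlap(tale, variant))
```

Even this toy version makes the central question concrete: how much alteration per generation still leaves the variants recognizable after many generations?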

(Presentation slides)

The Riddle of Literary Quality

Beauty or Power? The Riddle of Literary Quality
Most literary scholars now agree that literary quality is bestowed on works by publishers, critics, and other actors in the literary field (‘Power’). The idea that the literary works themselves (‘Beauty’) may also play a role is frowned upon. Still, the Beauty side of the quarrel will be empirically tested in the project The Riddle of Literary Quality. We assume that formal characteristics of a text may also be of importance in calling a fictional text literary or non-literary, and good or bad – non-literary texts can also be good and literary texts can also be bad. Many formal characteristics may play a part in this, e.g. the use of difficult words, the number of adjectives and adverbs, or a complex syntactic style. The project explores this assumption, integrating the analysis of low-level lexical-statistical features and high-level syntactic and narrative features.

One of the first objectives is a large survey of readers’ opinions on 400 recent works of fiction published in Dutch. The role of the survey will be explained in the presentation. Next, the first results of high-level pattern recognition will be shown. We present a method to analyze the style of an author through syntactic patterns: a system that is able to guess the author of a text by comparing recurring grammatical constructions and expressions occurring in the phrase structures of texts. This use of high-level syntactic patterns contrasts with the usual methods for stylometry, which focus on more superficial features such as word frequencies. The high-level syntactic patterns that we find provide better opportunities for interpretation from a digital humanities perspective and are a first step towards a syntactical comparison of style.
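To illustrate the general shape of profile-based authorship attribution (not the project's actual system, which compares phrase-structure fragments), here is a toy sketch in which word bigrams stand in for syntactic patterns; the author profiles and texts are invented:

```python
from collections import Counter
from math import sqrt

def profile(text, n=2):
    """Frequency profile of token n-grams: a crude stand-in for the
    recurring grammatical constructions mined from parse trees."""
    tokens = text.lower().split()
    return Counter(zip(*(tokens[i:] for i in range(n))))

def cosine(p, q):
    """Cosine similarity between two frequency profiles."""
    dot = sum(p[k] * q[k] for k in p)
    norm = sqrt(sum(v * v for v in p.values())) * \
           sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def guess_author(snippet, profiles):
    """Attribute the snippet to the author with the most similar profile."""
    return max(profiles, key=lambda a: cosine(profile(snippet), profiles[a]))

profiles = {
    "A": profile("the cat sat on the mat while the cat slept"),
    "B": profile("I think therefore I am and therefore I think"),
}
print(guess_author("the cat sat quietly", profiles))  # prints A
```

Substituting parse-tree fragments for the bigrams, while keeping the profile-and-compare skeleton, gives the higher-level and more interpretable features the talk describes.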

(Presentation slides Andreas van Cranenburgh
Presentation slides Karina van Dalen)


From the roots of data to the leaves of social history: semantic technology for Historic Dutch Census (CEDAR)
Learning from (social-economic) history helps to understand the interrelation between macro-economic change and individual lifestyles, policy regimes, labour markets, communities and national wealth. However, sources of historical information about the lives of individuals, communities, and nations are still scattered.

This project takes Dutch census data as its starting point to build a semantic data-web of historical information. With such a web, it will be possible to answer questions such as: What kind of patterns can be identified and interpreted as expressions of regional identity? How can patterns of changes in skills and labour be related to technological progress and patterns of geographical migration? How can changes of local and national policies in the structure of communities and individual lives be traced?

Census data alone are not sufficient to answer these questions. This project applies a specific web-based data model – exploiting Resource Description Framework (RDF) technology – to make census data interlinkable with other hubs of historical socio-economic and demographic data and beyond. Pattern recognition appears on two levels: first to enable the integration of hitherto isolated datasets, and second to apply integrated querying and analysis across this new, enriched information space. Data analysis interfaces, visual inventories of historical data and reports on linked open data strategies for digital collections will be some of the results of this project. The project will also produce generic methods and tools to weave historical and socio-economic datasets into an interlinked semantic data-web.
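A minimal sketch of the idea behind such a data-web, using plain Python tuples in place of a real RDF store; all identifiers and figures are invented for illustration:

```python
# Two 'datasets' expressed as RDF-style (subject, predicate, object)
# triples; the shared place identifier gn:Utrecht is what makes
# them interlinkable.
census = [
    ("ex:count_1899_utrecht", "ex:place", "gn:Utrecht"),
    ("ex:count_1899_utrecht", "ex:censusYear", "1899"),
    ("ex:count_1899_utrecht", "ex:population", "100000"),
]
labour = [
    ("gn:Utrecht", "ex:dominantSector", "ex:Industry"),
]

def query(graph, s=None, p=None, o=None):
    """Match triples against an (s, p, o) pattern; None is a wildcard."""
    return [t for t in graph
            if s in (None, t[0]) and p in (None, t[1]) and o in (None, t[2])]

# Because both datasets use the same identifier, one query space
# now spans them: which records point at Utrecht, and what else
# is known about that place?
records = query(census + labour, p="ex:place", o="gn:Utrecht")
context = query(census + labour, s="gn:Utrecht")
```

The real project relies on RDF proper (IRIs, vocabularies, SPARQL), but the mechanism is the same: shared identifiers turn isolated datasets into one queryable graph.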

Elite Network Shifts

Apa kabar? Extracting sociological data from masses of Indonesian newspaper clippings.
Indonesia is the fourth largest nation in the world. It has a chaotic bureaucracy but a vibrant press. What if we could read its dozens of digital/digitised newspapers automatically and extract vital sociological trends? Elite Network Shifts will develop techniques for doing this. It will focus on the problem of elite rotation during regime change.
(Presentation slides)

14 June
Anja Volk, Utrecht University – CANCELLED
[title & details to follow]

7 June
1. Alesia Zuccala, University of Amsterdam (UvA)

Book reviews as ‘mega-citations’: a fresh look at citation theory.
We review the history of citation theory to show how the book review fits the role of ‘mega-citation’ due to its formal, public presence within the scholarly communication system. The purpose of a book review is to alert readers to a newly published book and to cite different parts of the book in the assessment of its scholarly credibility and quality. A book review rarely receives citations from other documents, but as a content-rich ‘mega-citation’ it has the potential to join the ‘citation proper’ in the development of a composite indicator for evaluating the impact of a book. Thomson Reuters’ Web of Knowledge (WoK) is currently in a strong position to compile the data necessary for this composite indicator. While the selection of books for the new Book Citation Index (BCI) is still underway, work needs to be done to ascertain the ‘language’ of reviewing across different fields. Then, if database links between books, book reviews, journal articles and publishers become more accurate, bibliometricians may consider applying the ‘mega-citation’ to large-scale research evaluations.

Alesia Zuccala is a graduate (PhD) of the University of Toronto, Faculty of Information Studies. She previously held research appointments with the Science System Assessment unit of the Rathenau Institute in Den Haag, and the Center for Science and Technology Studies, Leiden University. She is now a research fellow with the Faculty of Humanities/Institute for Logic, Language and Computation at the University of Amsterdam. Dr. Zuccala’s work focuses primarily on scholarly communication and citation patterns in academic research, and her current interest is to develop new approaches to the evaluation of research outputs in the humanities. She has contributed both quantitative and qualitative research to a variety of international journals including Scientometrics, Information Research, the Journal of the American Society for Information Science and Technology, and the Annual Review of Information Science and Technology.

2. Peter van der Maas, Erasmus University Rotterdam

Croatian and Bosnian-Herzegovina Memories: “Unveiling personal memories on war and detention”

The oral history projects are managed by the Erasmus University Rotterdam and the University of Twente in a consortium with local partners: in Zagreb, Croatia, the Center for Dealing with the Past, and in Sarajevo, Bosnia and Herzegovina, CIN, the Center for Investigative Journalism, and the HRC, the Human Rights Center of the University of Sarajevo.
The project Unveiling Personal Memories on War and Detention is an initiative to create a collection of video-recorded testimonies on a wide range of war experiences in Croatia and BiH. It proposes the use of oral history as a method to collect and open up individual memories of past traumatic events from a wide range of perspectives, including those of minorities, victims, women and war veterans. It is our contention that revealing and disseminating the variety of perspectives on how war affects an individual will contribute to a better understanding of the circumstances that can lead to the collapse of civil society. The project aims to combine the method of collecting personal memories with innovative ICT tools in order to make these sources accessible and searchable through the Internet for a wide range of audiences.
The purpose is to record a set of 500 interviews and narratives of personal experiences of various wars and political violence between 1941 and 1995 in Croatia; in Bosnia we will record 85 interviews. The interviews will be transcribed, elaborated, translated and indexed, and an appropriate subset of the material will be stored on an open internet platform with direct access possibilities, enabling the editing, publication and streaming of interviews with advanced search options in Croatian, Bosnian and English. A separate research platform is being built; there the full, unedited interviews will be available for research purposes.
Our intention is to affirm the personal memories of all interested witnesses of historical events in Croatia and BiH and to preserve them from oblivion. We believe that, through recording and transferring the subjective experiences of people, it is possible to gain deeper insights into seemingly hidden aspects of the political turmoil and war conflicts that took place in these areas. Our aim is to support and strengthen personal and social processes of dealing with the past, which we regard as a necessary precondition for building sustainable peace and stability in society.
The CroMe and BiHMe projects are funded by the Matra fund of the Netherlands Ministry of Foreign Affairs.

Peter van der Maas studied modern and theoretical history and international relations and law at the University of Amsterdam. He followed the training course in Foreign Relations (Leergang Buitenlandse Betrekkingen) at Clingendael (The Hague) and also studied Computer Science and Knowledge Management at the University of Amsterdam. He took ICT courses at the University of Bergen (Norway) and King’s College London, and studied Information, Document and Records Management (IDRM) at the Koenen Baak & ICT Academy in Voorburg. Peter van der Maas was a research assistant and ICT project manager for 25 years at the Dutch Institute for War Documentation (NIOD) in Amsterdam. He has worked as project manager and project consultant in the former Yugoslavia. Since 2010 he has been, as a staff member of the Erasmus Studio of the Erasmus University Rotterdam, general project manager of CroMe, and since 2011 of BiHMe.

(Presentation slides)

31 May
Dr. Rachael Pitt, La Trobe University Melbourne Australia

The Research and Innovation Leaders for Industry project
This presentation will outline a project that examined the impact of the Australian Cooperative Research Centre (CRC) Program on doctoral graduates’ experiences and outcomes. Of particular interest were the claims that the CRC Program equips graduates to be ‘industry ready’ through its collaborative organisation of government, industry and university partners in the production of end-user driven research in areas of national significance. The study surveyed PhD graduates 5-12 years post-graduation who had been involved in a CRC during their doctorate, along with a comparison sample of graduates from similar disciplines at three research-intensive universities. The survey included questions about PhD experiences; post-graduation career experiences; perceptions of the development, use, and importance of graduate attributes; and demographic information. Australian employers of PhD graduates were also surveyed and asked about their organisation; their expectations and perceptions of recent PhD graduates’ possession and demonstration of various graduate attributes and skills; their provision of ongoing training and mentoring for PhD graduates; and their perception of leadership characteristics in PhD graduates. Responses were received from 327 CRC graduates and 741 non-CRC graduates, along with 280 employers across the higher-education, public and private sectors. Key findings from this project will be outlined within a consideration of the importance of the development, utilisation, and translation of graduate attributes and skills for the diversity of post-graduation roles that doctorate holders engage in.

Dr Rachael Pitt is a registered psychologist who researches in higher education, research education, and doctoral studies. Her broad interests include survey methodologies, the operationalisation of graduate attributes at the doctoral level, PhD graduate employment outcomes and pathways, and the extent to which postgraduate research education prepares graduates for their ensuing diverse careers. In particular, she is interested in changing conceptualisations of the doctorate and academic careers, with a particular focus on the types of activities undertaken by academic staff in the Australian system who are categorised as being ‘research-only’.

Dr Pitt’s role within the Faculty of Humanities and Social Sciences includes assisting the Associate Dean (Research) by undertaking research into Faculty research initiatives and processes. She also works collaboratively with interdisciplinary colleagues to conduct research into the above areas of interest and provide supervision to research higher degree candidates.

17 May
No eHg meeting: Ascension Day.

10 May
Suzan van Dijk, Huygens ING

Working on “Women Writers In History”.
For several years an international network has been created and extended, in preparation for large-scale research on women’s presence in the European literary field before the beginning of the 20th century. Thanks to previous research in the periodical press, to inventories provided by book history, and last but not least to research done for some specific countries (England in particular), we had noticed that women – authors, readers and intermediaries – played a considerably larger role than literary historiography suggests. We therefore started accumulating, in a database entitled WomenWriters, empirical evidence for the contemporary reception of their works. We take these data, in principle, from large-scale sources which together provide a basis for research that can either be large-scale (comparing, for instance, currents of “female” influence from one country to others) or concern, for instance, the reception of one individual author in several countries (which can then be seen within its international context). The database now contains 4,000 records for authors, and 20,000 for the reception of their writings.

At present we are in the process of proceeding from one version of the database WomenWriters (the third version actually, since 2001) to the next one: a Virtual Research Environment for which we will seek funding. For this new version we plan a number of important new features:

1.    Interconnectivity with other online projects in women’s literary history (in order to avoid duplication of work);

2.    The possibility of making the texts comparable by brief notations concerning (a) the “scenarios” presented (by the use of specific narrative topoi) by the female authors and (b) the ways in which these female “scenarios” are appreciated, commented upon or rewritten;

3.    A new data model corresponding to a different perspective on literary communication.

I would be happy, during this meeting on 10 May 2012, to discuss this passage from one version of our research tool to the next – in particular to discuss the work that will have to be carried out beforehand by others than the developers. The development itself will be taken care of by Huygens ING, but we are aware that a lot of other work will need to be done, not only for checking and completing data, but also for standardizing certain parts of the information, in order to make them appropriate for faceted search and visualisation.

Most probably we will have the possibility of employing 5 or 6 assistants for these tasks, and the discussion would have as its principal, and very practical, objective to prepare a work program for what they will do during one or two weeks this summer.

I am a specialist in French and comparative literature (18th-19th centuries), especially women’s writing. As a senior researcher at Huygens ING in The Hague, I am currently chairing a European COST Action entitled “Women Writers In History” (2009-2013), which is the present form of the NEWW network (New approaches to European Women’s Writing), created thanks to NWO funding (2007-2010). In COST-WWIH, 120 members from 26 countries are collaborating. See also the NEWW website.

(Presentation slides)

3 May

No eHg meeting: Holiday period

26 April
Alexander Petersen, IMT Lucca Institute for Advanced Studies, Lucca, Italy

Persistence and Uncertainty in the Academic Career
Understanding how institutional changes within academia may affect the overall potential of science requires a better quantitative representation of how careers evolve over time. Since knowledge spillovers, cumulative advantage, competition, and collaboration are distinctive features of the academic profession, both the employment relationship and the procedures for assigning recognition and allocating funding should be designed to account for these factors. In this talk I will present recent research on the annual production $n_{i}(t)$ of a given scientist $i$, based on longitudinal career data for 200 leading scientists and 100 assistant professors from the physics community (Petersen et al., PNAS 2012). We introduce a model of proportional growth with variable appraisal systems to better understand the evolution of careers in competitive systems. This theoretical model shows that short-term contracts can amplify the effects of competition and uncertainty, making careers more vulnerable to early termination – not necessarily due to lack of individual talent and persistence, but because of random negative production shocks. I will also discuss team efficiency and the relationship between fluctuations in scientific production and a scientist’s collaboration radius, as well as the results of large-scale analyses of productivity, impact, and longevity using empirical career data from thousands of academic and professional-athlete careers (Petersen et al., PNAS 2011). This study uncovers a remarkably simple statistical law which describes the frequencies of the extremely short careers of ‘one-hit wonders’ as well as the extremely long careers of the ‘iron horses’. Our model highlights the importance of early career development, showing that many careers are stunted by the relative disadvantage associated with inexperience.
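The flavour of a proportional-growth model with shocks can be conveyed with a toy simulation. This is a loose sketch with invented parameters, not the published model: annual output builds on accumulated output (cumulative advantage), a Gaussian shock perturbs it, and an appraisal floor ends the career, so an early negative shock can terminate a career regardless of talent.

```python
import random

def simulate_career(max_years=40, alpha=0.1, sigma=0.4, floor=0.3):
    """Toy proportional-growth career: next year's output builds on the
    running average of past output (cumulative advantage) plus a Gaussian
    shock; dropping below an appraisal floor ends the career, mimicking
    non-renewal of a short-term contract."""
    output, total, history = 1.0, 0.0, []
    for year in range(1, max_years + 1):
        total += output
        history.append(output)
        shock = random.gauss(0.0, sigma)
        output = max(0.0, output + alpha * total / year + shock)
        if output < floor:      # one bad shock can end a young career
            break
    return history

random.seed(42)
lengths = [len(simulate_career()) for _ in range(2000)]
early_exits = sum(1 for n in lengths if n <= 5) / len(lengths)
print(early_exits)
```

Raising sigma or the floor in this sketch increases the share of very short careers, which is the qualitative effect the talk attributes to short-term contracts.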

I am currently an assistant professor at IMT Lucca, and a member of the Economics and Institutional Change division and the Laboratory for the Analysis of Complex Economic Systems (AXES) research unit. Before joining IMT, I spent my doctoral years at Boston University, where I received my Ph.D. in Physics in May 2011 (advisor: H. Eugene Stanley). My training and research have focused on the analysis of stochastic phenomena in the social and economic sciences using concepts and methods from statistical physics. I am currently involved in analyzing “big data” comprising (i) high-frequency Trades and Quotes (TAQ) financial data, (ii) Google n-gram data, and (iii) measures of longevity, success, productivity and innovation in science and professional sports. In the broadest sense, I search for statistical regularities in empirical data which can be used to better understand patterns of growth in diverse complex systems.

(Presentation slides)

19 April
Katy Boerner, Indiana University

Bounce-Back follow-up workshop: Sci2 Tool: Temporal, Geospatial, Topical, and Network Analysis and Visualization – addressing humanities scholars.

The format will be a lecture by Katy Boerner, “hands-on” training, and a feedback option for your own explorations.

Please bring your laptop and pre-install the tool (free download; see Tool Links below).

The Science of Science Tool (Sci2) was designed for researchers and practitioners interested in studying and understanding the structure and dynamics of science. Today it is used by major federal agencies in the US, but also by researchers from more than 40 countries and from many different areas of research, including arts and humanities scholars.
Sci2 is a standalone desktop application that installs and runs on Windows, Linux x86 and Mac OSX and supports:

  • Reading and writing of 20 major file formats (e.g., ISI, Scopus, bibtex, nsf, EndNote, CSV, Pajek .net, XGMML, GraphML),
  • Easy access to more than 180 algorithms for the temporal, geospatial, topical, and network analysis and visualization of scholarly datasets at the micro (individual), meso (local), and macro (global) levels, and
  • Professional visualization of analysis results by means of large-format charts and maps.

Börner, Katy. (2010). Atlas of Science: Visualizing What We Know. The MIT Press.

Tool Links:

Katy Börner is the Victor H. Yngve Professor of Information Science at the School of Library and Information Science, and Founding Director of the Cyberinfrastructure for Network Science Center at Indiana University. She is a curator of the Places & Spaces: Mapping Science exhibit. Her research focuses on the development of data analysis and visualization techniques for information access, understanding, and management. She is particularly interested in the study of the structure and evolution of scientific disciplines; the analysis and visualization of online activity; and the development of cyberinfrastructures for large-scale scientific collaboration and computation. She is the co-editor of the book ‘Visual Interfaces to Digital Libraries’ and of a special issue of PNAS on ‘Mapping Knowledge Domains’ (2004). She holds an MS in Electrical Engineering from the University of Technology in Leipzig (1991) and a Ph.D. in Computer Science from the University of Kaiserslautern (1997). Web site:

12 April
Hein van den Berg (VU University Amsterdam), Arianna Betti (VU University Amsterdam), Bettina Speckmann (Technical University of Eindhoven) & Kevin Verbeek (Technical University of Eindhoven)

Mapping Philosophy
Mapping Philosophy is an interdisciplinary pilot in philosophy and computer science, involving programmers and librarians, which aims at visualizing five fields of bibliographic metadata (author, title, place of publication, publisher, year) from two centuries of logic books (1700-1940) on a two-dimensional, computer-generated, interactive, web-based geographical map of Europe.

The working hypothesis behind Mapping Philosophy is twofold. We think that an effective visualization of the metadata of logic books published in Europe within a certain period will enable any user to locate at a glance the centres of excellence in this discipline throughout that period. We also think that our visualization will enable scholars in the field to confirm or question existing research hypotheses as well as formulate new ones. We see this as a first step towards creating a new data-driven empirical methodology for the history of philosophy.
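The aggregation underlying such a map can be sketched in a few lines; the sample records below are merely illustrative:

```python
from collections import Counter

# (author, title, place of publication, publisher, year) records:
# the five metadata fields the project visualizes.
books = [
    ("Wolff", "Philosophia rationalis sive logica", "Frankfurt", "Renger", 1728),
    ("Kant", "Logik", "Königsberg", "Nicolovius", 1800),
    ("Boole", "An Investigation of the Laws of Thought", "London", "Walton", 1854),
    ("Hilbert & Ackermann", "Grundzüge der theoretischen Logik", "Berlin", "Springer", 1928),
]

def activity(records, start, end):
    """Logic-book output per place of publication within a period:
    the quantity a map symbol at each city would encode."""
    return Counter(place for _, _, place, _, year in records
                   if start <= year <= end)

print(activity(books, 1700, 1850))
```

Geocoding each place and sizing a symbol by these counts, per selected period, yields the interactive map the project describes.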

In this talk we present the results of the project so far, the problems we have encountered, some limitations, and the challenges we expect to face in expanding it in the future.

The project, led by Arianna Betti (VU Amsterdam) and Bettina Speckmann (TU Eindhoven), is funded by the Young Academy of the Royal Academy of Arts and Sciences of the Netherlands.

Hein van den Berg is postdoctoral researcher within the ERC Starting Grant Tarski’s revolution at VU University Amsterdam and lecturer in history of modern philosophy at the University of Groningen. As of September 2012, he will conduct research at the TU Dortmund on the project Axiomatics and the Emergence of Biology as a Science: philosophy of biology in the 18th century, for which he received the Hendrik Casimir – Karl Ziegler research stipend of The Royal Academy of Arts and Sciences of the Netherlands (KNAW).

Arianna Betti is lecturer in Logic, Metaphysics and their History and principal investigator of the ERC Starting Grant Tarski’s Revolution (2008-2013) at VU University Amsterdam. She is a member of De Jonge Akademie of The Royal Academy of Arts and Sciences of the Netherlands (KNAW), of the Global Young Academy, of the European Commission’s newly formed group “Voice of the Researchers” and of AcademiaNet, The Expert Database of Outstanding Female Scientists and Scholars in Europe.

Bettina Speckmann is associate professor at the Department of Mathematics and Computer Science of the Technical University of Eindhoven and a member of the Algorithms group. Her research interests are algorithms and data structures; discrete and computational geometry; applications of computational geometry to geographic information systems; and graph drawing. She is a member of The Young Academy (De Jonge Akademie) of the Royal Netherlands Academy of Arts and Sciences (KNAW) and also a member of the Global Young Academy. She won the first Netherlands Prize for ICT Research.

Kevin Verbeek is a PhD student at the Department of Mathematics and Computer Science of the Technical University of Eindhoven and a member of the Algorithms group. His research interests are computational geometry, graph drawing and automated cartography. He is also involved in programming contests.

Peter Wittenburg, Max Planck Institute for Psycholinguistics

Potentials & limitations of interoperability
Data will be the currency of science in the coming years, due to a fundamental cultural change in science and to changed expectations. In the natural sciences people speak about the opportunities of Data-Intensive Science and of finding unexpected patterns in Big Data. As in a gold rush, many researchers have started digging in large integrated data sets using statistical methods. In climate-modeling research, for example, the community agreed on data standards allowing them to easily integrate large data sets created in a distributed manner and to do large-scale pattern detection. In the humanities we face more severe challenges in “exploiting” integrated data sets, since we need to bridge between a wide variety of formats and, in particular, semantic domains, especially when bridging between data from different cultures and languages. How can the humanities participate in Data-Intensive Science by efficiently integrating heterogeneous data sets?

With a background in Electrical Engineering and Computer Science, I participated in establishing a center for process control applications at the Technical University Berlin. In 1976 I started building up and leading the Technical Department at the newly founded Max Planck Institute for Psycholinguistics. In this function I first focused on developing methodologies and technologies in the areas of building experimental labs, digital signal and speech processing, and stochastic models for language acquisition and processing. In the last decade the focus shifted to life-cycle management of increasingly large amounts of linguistic data and to building tools and infrastructures to support access, integration and analysis of such data. This work culminated in founding The Language Archive (TLA) at the MPI and in co-leading the infrastructure work in initiatives such as CLARIN (Common Language Resources and Tools Infrastructure) and EUDAT (European Data Infrastructure). To improve integration and interoperability I participated in developing standards in the realm of ISO TC37/SC4. In 2011 I received the Heinz Billing Award of the Max Planck Society for my work in advancing technology for research purposes.

29 March
Stef Scagliola, Erasmus University Rotterdam
Franciska de Jong, University of Twente and Erasmus Studio for e-research – Erasmus University Rotterdam
Roeland Ordelman, University of Twente

Audio-visual Collections and the User Needs of Scholars in the Humanities;
a Case for Co-Development
One of the key factors for success of e-Humanities projects is the model of collaboration between humanities scholars and ICT developers. Experience in a series of multidisciplinary projects aiming at the advancement of access to and use of audiovisual archives has shown some recurring patterns that impede a smooth integration of perspectives. A first attempt to describe and analyse these patterns can be found in the attached paper (published in the Proceedings of Supporting Digital Humanities 2011, Copenhagen). One of the observations is that ICT researchers who design tools for humanities researchers are less inclined to take into account that each stage of the scholarly research process requires ICT support in different ways. Likewise, scholars in the humanities often have prejudices concerning ICT tools, based on lack of knowledge and fears of technology-driven agendas. The paper argues that the gap between the mindset of ICT researchers and that of archivists and scholars in the humanities can be bridged by offering a better insight into the variety of uses of digital collections and a user-inspired classification of ICT tools. Such an overview is presented in the form of a typology for the audio-visual realm, with examples of the roles digital audio-visual archives can play at various stages of the research process. Readers are encouraged to give feedback and references to relevant literature so that this paper can develop into an article for a journal in the field of e-humanities.

link to the full paper:

Prof. dr. Franciska de Jong is professor of language technology at the University of Twente and director of the Erasmus Studio for e-research at the Erasmus University in Rotterdam. She has a background in theoretical linguistics but switched to the area of computational linguistics and human language technology in 1985. Her current research focuses on multimedia indexing, text mining, cross-language information retrieval, and the disclosure of spoken-word content and cultural heritage collections for scholarly use. She was principal investigator of the NWO-CATCH project CHoral (2006-2011) and has been involved in more than 10 EC-funded projects throughout FP4-FP7. She is currently coordinating the FP7 project PuppyIR.

For more details:

Roeland Ordelman is a senior researcher in Multimedia Retrieval at the University of Twente (PhD 2003), manager R&D at the Netherlands Institute for Sound and Vision, and founder of a start-up company for audio search technology, Cross Media Interaction (X-MI). He is co-organizer of the Rich Speech Retrieval task in the MediaEval benchmark evaluation series. The main focus of his work is deploying state-of-the-art access technology in real-life scenarios, aiming to enhance the exploitability of audiovisual content for various types of user groups such as professional archivists, broadcast professionals, researchers, and home users. Recent projects include AXES (Access to Audiovisual Archives), LiWA (internet archiving), Verteld Verleden (distributed access to Oral History) and COMMIT (rich speech retrieval).

Stef Scagliola holds a PhD in Contemporary History from the Erasmus University Rotterdam. From 2006 to 2011 she coordinated an oral history project at the Netherlands Institute for Veterans, resulting in a collection of 1000 life-history interviews with a representative sample of Dutch war and military mission veterans. Through this ‘digital born’ initiative she became involved in various ICT projects that aim at developing adequate tools and data standards for researchers who work with qualitative data. Currently she is a researcher at the Erasmus School of History, Culture and Communication, where she is involved in the design and evaluation of usability aspects for scholarly users in AXES and CroMe, projects aiming at the application of access technology for multimedia content. For the Erasmus Studio she will guide the development of courseware and methodological training for humanities researchers with an interest in multimedia content collections.

22 March
1. Peter Hook, Indiana University

The Structure of Law: Domain Maps from 40,000 Course-Coupling Events and a History of an Academic Discipline
The structure of the academic discipline of law in the United States has never been empirically determined and mapped spatially. While it has been described in essays and other writings on the history of law school education, it has never been revealed through the exploration of large datasets and determined through replicable, objective means. This work seeks to answer whether course-coupling analysis produces topic maps that are consistent with expert opinion and other indicia of the topical similarity of law school course subjects. Course-coupling is defined as the same professor teaching multiple, different courses over one academic year, as reflected in the annual directories of the American Association of Law Schools (AALS). Multidimensional scaling (MDS) is used to distribute course subjects in a two-dimensional mapping so that they may be quickly perceived by the viewer using the distance-similarity metaphor. Five academic years, spaced roughly a decade apart from 1931-32 to 1972-73, are mapped, and these maps are compared to data produced by 18 experts who completed a card-sort exercise of contemporary legal courses. The resulting visualizations lend support to the assertion that faculty members teach courses that are topically related and that course-coupling analysis is a valid technique for making maps of a domain.
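The core of the method, placing items in two dimensions so that more similar items sit closer together, can be illustrated with a small sketch. The course names and dissimilarity values below are invented for illustration, and the simple stress-minimizing gradient descent stands in for whatever MDS implementation the study actually used.

```python
import math
import random

def mds_2d(dist, steps=2000, lr=0.01, seed=0):
    """Place n items in 2D by gradient descent on the raw stress:
    the sum over all pairs of (embedded distance - target distance)^2."""
    rng = random.Random(seed)
    n = len(dist)
    pos = [[rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)] for _ in range(n)]
    for _ in range(steps):
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                dx = pos[i][0] - pos[j][0]
                dy = pos[i][1] - pos[j][1]
                d = math.hypot(dx, dy) or 1e-9
                g = (d - dist[i][j]) / d   # pull together or push apart
                pos[i][0] -= lr * g * dx
                pos[i][1] -= lr * g * dy
    return pos

# Invented dissimilarities among four hypothetical course subjects
# (smaller = the pair is more often taught by the same professor):
courses = ["Contracts", "Torts", "Tax", "Evidence"]
dist = [[0.0, 1.0, 3.0, 2.5],
        [1.0, 0.0, 3.2, 2.2],
        [3.0, 3.2, 0.0, 2.8],
        [2.5, 2.2, 2.8, 0.0]]
coords = mds_2d(dist)
for name, (x, y) in zip(courses, coords):
    print(f"{name:10s} ({x:+.2f}, {y:+.2f})")
```

Reading the result with the distance-similarity metaphor, Contracts and Torts should end up close together while Tax sits further away, mirroring the input dissimilarities.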

Peter A. Hook is a doctoral student at Indiana University, Bloomington, where he is a member of Dr. Katy Börner’s Cyberinfrastructure for Network Science Center and Information Visualization Laboratory. He has a Juris Doctor (J.D.) from the University of Kansas (1997) and an M.S.LIS from The University of Illinois (2000). He is also an academic law librarian at the Indiana University Maurer School of Law. His primary research focus is information visualization and domain mapping. In general, he is interested in utilizing knowledge infrastructures to obtain big picture, global perspectives. These allow a novice to more quickly become familiar with a domain and experts to contextualize their research in a broader perspective. Domain maps also reveal avenues of inquiry previously unknown to a researcher as well as opportunities for collaboration. Hook’s particular interests include the visualization of knowledge organization systems, concept mapping, and the spatial navigation of bibliographic data in which the underlying structural organization of the domain is conveyed to the user. Additional interests include social network theory, knowledge organization systems, scientometrics, legal informatics, and legal bibliography.

(Presentation slides)

2. Susan Legêne, VU University Amsterdam

e-History: tools, tales or a philosophy of history
On 24 June 2010, the new KNAW institute Huygens ING, the Huygens Institute for the History of the Netherlands, was launched. In his introductory speech, the historian Wijnand Mijnhardt, chair of the ING Advisory Board, expressed his high expectations of e-History. Finally, he said, history would become an exact science, with exact repeats and controlled outcomes. At another meeting, in the context of the CATCH programme, a brief discussion took place on the statement that any historian of e-History needs to know statistics. This presentation will discuss these views on e-History and reflect on experiences and expectations in the context of two research programmes: Agora/the semantics of history (an NWO-CATCH-based collaboration between computer science, computational linguistics, cultural studies and history) and the eScience Center project BiographyNed. What is the relationship between computational humanities and digital hermeneutics? How and when will computational interaction with historical sources result in the writing of history? See also the Agora publication on digital hermeneutics at:

Susan Legêne is professor of political history at VU University (Faculty of Arts, History department). Until 2008, she was head of the Curatorial department of the Tropenmuseum in Amsterdam and closely involved in collection digitization policies. This has provided valuable input for her critical approach to existing collection documentation and its transformation into digitized metadata. She has been a member of the VKS advisory board. See also:
Agora/semantics of history website:

15 March
Wolfgang Kaltenbrunner, University of Leiden

The new materiality of labor in digital collaboration in the humanities
The use of digital research technology in the humanities not only encourages reflexivity with respect to disciplinary knowledge, but also self-awareness on the part of practitioners that digitally mediated research, and the various work steps it involves, constitutes an economic value. Very recently, a new discourse has emerged in parts of the community that explicitly focuses on the labor conditions of digital humanists. Digital humanities work is frequently not recognized by traditional academic institutions, leading practitioners to examine the relative benefits of flexible employment. While some remain skeptical towards the new labor flexibility of ‘knowledge work’, others embrace it for the greater intellectual freedom they associate with it. In this presentation I relate the discussion about flexible employment in digital humanities research to a theoretical debate about the concepts of informational labor (Manuel Castells) and immaterial labor (Antonio Negri & Michael Hardt), both of which suggest that flexible labor is the characteristic modality of employment in digitally mediated networks. I critically examine these concepts, arguing that they are not only overly inclusive and indebted to technological determinism, but also fail to properly address the aspect of self-management that I suggest is of great significance for understanding the inner workings of ‘knowledge-intensive’ labor. I then discuss findings from an empirical case study that exemplify the way self-management works in a digitally mediated collaborative project in the humanities. I use these results to criticize the notion of informational/immaterial labor as empowering researchers. Rather, it is related to the need for researchers to prospectively gear their work to the requirements of funding bodies, or to find other ways of covering the expense of labor.
This sort of anticipatory bookkeeping in digital collaboration draws attention to the definitional power of research grants in shaping the digital humanities. Whether digital humanities will consist of a thoroughly integrated ‘cyberinfrastructure’ or of dispersed ’boutique projects’ has a lot to do with how they are funded.

Wolfgang Kaltenbrunner is a PhD candidate at the Centre for Science & Technology Studies at the University of Leiden

8 March
Christophe Guéret, VU Amsterdam , e-Humanities Group

How to find and consume Open Data on the Web
This tutorial, organised as a mix of lectures and hands-on sessions, is about tapping into the massive amount of Open Data that can be found on the Web. Open Data is data that is “(…) open if anyone is free to use, reuse, and redistribute it — subject only, at most, to the requirement to attribute and share-alike.” ( Over the last couple of years, many governmental institutions, research communities and individuals have figured out that data is better off being shared rather than staying behind closed doors. This change of mindset triggered a movement to publish Open Data on the Web, in different formats, that does not seem close to losing any of its momentum. In this tutorial we will first discuss the motivations behind publishing Open Data, describe the tools that make it easier to find Open Data sets, and briefly review ways to publish Open Data. The second part of the tutorial will be dedicated to the publication and consumption of Open Data as Open Linked Data.

Open Linked Data is a set of good practices and recommendations, supported by the W3C (the organisation behind the Web itself), that is being increasingly used by data publishers.
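To make the data model concrete: Linked Data is typically exchanged as subject-predicate-object triples. The sketch below parses a toy N-Triples snippet using only the standard library; the URIs and values are invented, and real applications would normally use a dedicated RDF library such as rdflib.

```python
import re

# Minimal N-Triples parser: each line is "<subject> <predicate> <object> ."
# Objects may be IRIs (<...>) or plain literals ("..."); datatypes and
# language tags are ignored in this sketch.
TRIPLE = re.compile(r'<([^>]*)>\s+<([^>]*)>\s+(<[^>]*>|"[^"]*")\s*\.')

def parse_ntriples(text):
    triples = []
    for line in text.splitlines():
        m = TRIPLE.match(line.strip())
        if m:
            s, p, o = m.groups()
            o = o[1:-1]  # strip the surrounding <> or "" from the object
            triples.append((s, p, o))
    return triples

# Hypothetical snippet of open government data:
data = '''
<http://example.org/city/Amsterdam> <http://example.org/prop/population> "780000" .
<http://example.org/city/Amsterdam> <http://example.org/prop/country> <http://example.org/country/NL> .
'''
for s, p, o in parse_ntriples(data):
    print(s, p, o)
```

Once data is in this triple form, merging datasets from different publishers amounts to concatenating their triples, which is what makes the Linked Data approach attractive for consumption.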

Christophe Guéret is working on the CEDAR project. His research interests are the publication and consumption of Linked Data, the complex-system analysis of data networks, and the interplay between Semantic Web challenges and Computational Intelligence techniques. Before joining the eHumanities Group, he worked between 2007 and 2011 at the Vrije Universiteit in Amsterdam with Prof. Frank van Harmelen on two projects: “Self-Organizing Knowledge Systems” (SOKS) and “LOD Around the Clock” (LATC). During these two projects he developed interest and skills around the publication of Linked Data and the analysis of its content. Besides CEDAR, he is actively putting these skills to use, doing consultancy for, and leading, the SemanticXO contributor project for OLPC.

See also his LinkedIn account, personal web page and blog.

(Presentation Slides)

1 March
No eHg meeting: Holiday period

23 February
1. Jacob de Vlieg
CEO & Scientific Director, Netherlands eScience Center (NLeSC); Head, Computational Drug Discovery Group, CMBI, Radboud University Nijmegen Medical Centre

Science for Research & Innovation
Science, and the way we undertake research, is changing. The scale of information generation is now so great that science has to adapt or drown in a data deluge. This holds for all areas of science, fundamental and applied, covering alpha, beta and gamma disciplines. eScience, or enhanced Science, is an inherently multidisciplinary pursuit concerned with the need to bridge the gap between high-powered computing and networking on the one hand and data-intensive science on the other. The challenge of eScience is to ensure that the most value can be gained from all new scientific endeavors and “Big Data” by using innovative ICT to improve experimental design, data analysis and communication. In particular, the concept of converging technologies, inspired by new scientific collaborations, will have a significant impact on promoting new ways to undertake science and facilitating new discoveries and insight. Including examples of how eScience is applied in diverse scientific domains, I will discuss the strategy and goals of the newly established Netherlands eScience Center (NLeSC). NLeSC is a joint initiative of SURF and NWO and is charged with enabling data-intensive research and promoting knowledge-based interdisciplinary collaboration across all branches of science (alpha, beta, gamma). NLeSC’s computational sciences, informatics and ICT areas include the development of new techniques and concepts for manipulating and exploring massive datasets, including cross-type data integration, decision support systems, visualization technologies and data-driven simulations, as well as any other areas that offer the potential to increase coherency across eScience innovation and the rationalization of e-infrastructure development.

Prof. Dr. Jacob de Vlieg studied biophysics at the State University of Groningen and graduated cum laude. During his PhD research, he developed computational methods for 3D biostructure determination. Shortly thereafter, he joined the EMBL in Heidelberg to develop structural bioinformatics techniques. From 1990 until 2001, de Vlieg held a range of research and management positions at Unilever Research, in the fields of modeling, biophysics and ICT. Appointed in 2000, he is currently part-time professor of Computational Chemistry at the CMBI, Radboud University Nijmegen Medical Centre. De Vlieg joined Organon in 2001 as head of the Department of Molecular Design and Informatics, responsible for structure-based drug design, computational medicinal chemistry, bioinformatics and research information systems. In 2006, he was appointed VP R&D IT (CIO R&D) to integrate IT, in all its manifestations, into the drug discovery process. In 2008, he was appointed Global Head Molecular Design & Informatics at Schering-Plough (now MSD), supporting the Discovery Research and Translational Medicine functions worldwide. In July 2011 he began serving as CEO and scientific director of the Netherlands eScience Center (NLeSC), a joint initiative of NWO and SURF. The ultimate goal of NLeSC is to support and reinforce multidisciplinary and data-intensive research through creative and innovative use of ICT in all its manifestations. Prof. de Vlieg serves on a number of advisory boards and committees; he is a board member of the Netherlands National Computing Facilities Foundation and of the program board for Computational Sciences of the Lorentz Center. He was also chair of the NWO Bioinformatics program committee (2001-08) and of the Scientific Advisory Board of the Netherlands Bioinformatics Center (2003-06).

(Presentation slides)

2.  Frank van der Most, e-Humanities Group and Data Archiving and Networked Services (DANS)

The ‘impact’ of evaluations on academic careers: A conceptual frame, a research approach and pilot-interview data
‘Academic Careers Understood through MEasurements and Norms’ is the full title of the FP7-funded project in which I am participating. It aims to deliver improved guidelines for the evaluation of the work of individual researchers, and a portfolio with which researchers can present themselves in different contexts of evaluation. In this presentation, I will briefly introduce the ACUMEN project and its sub-project that investigates the impact of evaluations on academic careers. A career in academia is not as straightforward as it perhaps was 50 years ago, and since the 1980s researchers have had to cope with increasing numbers of evaluations and an increasing diversity of evaluation types. Moreover, the notion of ‘impact’ in this context is problematic and deserves careful consideration. Finally, I will present some preliminary insights from the pilot-interview data. More information about ACUMEN can be found at

Frank van der Most started work on the ACUMEN project in the summer of 2011 at the e-Humanities Group. His research interests are research practices, the funding and organization of research, research policies and the interactions between these three. He studied Computer Science at the University of Twente, and Sciences and Arts at Maastricht University. From 1997 until 2005 he was involved in research projects in the history of technology, the policy and scientific developments surrounding mad cow disease, and an evaluation of the Norwegian Research Council. During these projects he developed an interest in digital tools for qualitative and historical research, for which he developed a database application. In 2009 he defended his doctoral thesis titled ‘Research councils facing new science and technology: The case of nanotechnology in Finland, the Netherlands, Norway and Switzerland’ at the University of Twente. From 2009 until 2011, he did a post-doctoral project on the use and effects of research evaluations at the CIRCLE institute for innovation studies at Lund University. Frank still has a keen interest in digital tools for research and keeps a blog on research policy and practices at

16 February
Katy Börner, Indiana University
Sci2 Tool: Temporal, Geospatial, Topical, and Network Analysis and Visualization – addressing humanities scholars

!!Please note: This is a workshop and will take place at the e-Humanities Group of the Royal Netherlands Academy of Arts and Sciences, located at the Meertens Institute, Joan Muysenweg 25, Amsterdam, on 16 February 2012, 12.30-16.30. Space is limited and registration is required (see below); participants should bring their own laptops and are asked to install the tool in advance (see In addition, participants will be asked to complete pre- and post-questionnaires (distributed on-site).

Tutorial description
The Science of Science Tool (Sci2) ( was designed for researchers and practitioners interested in studying and understanding the structure and dynamics of science. Today it is used by major federal agencies in the US and by researchers in more than 40 countries, representing many different areas of research, including arts and humanities scholars.

Sci2 is a standalone desktop application that installs and runs on Windows, Linux x86 and Mac OSX and supports:
• Reading and writing of 20 major file formats (e.g., ISI, Scopus, bibtex, nsf, EndNote, CSV, Pajek .net, XGMML, GraphML),
• Easy access to algorithms for temporal, geospatial, topical, and network analysis and visualization of scholarly datasets,
• Professional visualization of analysis by means of large-format charts and maps.

The first hour of the tutorial provides a basic introduction; the remaining time will be spent discussing sample workflows featured in the Sci2 Tutorial at ( and new functionality such as the Yahoo! geocoder, network clustering and backbone identification algorithms, and the analysis and visualization of evolving networks.

Börner, Katy. (2010). Atlas of Science: Visualizing What We Know. The MIT Press. (

Please use to register. Your email address will be used to confirm registration and share slides and software links.

Katy Börner is the Victor H. Yngve Professor of Information Science at the School of Library and Information Science, and Founding Director of the Cyberinfrastructure for Network Science Center at Indiana University. She is a curator of the Places & Spaces: Mapping Science exhibit. Her research focuses on the development of data analysis and visualization techniques for information access, understanding, and management. She is particularly interested in the study of the structure and evolution of scientific disciplines; the analysis and visualization of online activity; and the development of cyberinfrastructures for large-scale scientific collaboration and computation. She is co-editor of the book ‘Visual Interfaces to Digital Libraries’ and of a special issue of PNAS on ‘Mapping Knowledge Domains’ (2004). She holds an MS in Electrical Engineering from the University of Technology in Leipzig (1991) and a PhD in Computer Science from the University of Kaiserslautern (1997). Web site:

(Presentation slides)

9 February
No eHg meeting
SURF Onderzoeksdag; Bible workshop, Lorentz Centre

2 February
Matthijs Kouw, University of Maastricht

Simulation and the Vulnerability of Technological Cultures
Simulations and models play a key role in technological cultures in terms of understanding, predicting, and countering risks. In this presentation, I will present my PhD dissertation in which I evaluate the possible consequences of social reliance on simulations and models. In my dissertation, I ask how simulations and models enable knowledge of risks, and to what extent their use makes technological cultures susceptible to risks, e.g. through assumptions, uncertainties, and blind spots.

To address these questions, I studied epistemological, historical, institutional, and socio-political aspects of simulation practice. The empirical focus of the dissertation is the use of simulations and models in water management in the Netherlands. I carried out an ethnographic study of simulation practice at various institutions working in the field of water management in the Netherlands. This resulted in case studies pertaining to the use of simulations and models in hydrology (hydraulic engineering and flood monitoring), geotechnical engineering (soil mechanics and dike failure mechanisms), and ecology (participatory water quality modeling and governance in the context of the Water Framework Directive).

Simulations and models play a Janus-faced role in technological cultures: although they are technological prostheses that (potentially) induce vulnerabilities by increasing dependence on technological practices, they may also have empowering effects due to their ability to provide knowledge of risks.

I am close to submitting my PhD thesis, which I wrote at the Maastricht Virtual Knowledge Studio within the Department of Technology and Society Studies, Faculty of Arts and Social Sciences, Maastricht University. More generally, my research interests are: simulations and models; risk; uncertainty and ignorance; technologies of participation and governance; engineering studies; data visualization; software studies; craft and craftsmanship; materialist philosophies (e.g. Deleuze-Guattari, DeLanda); and ‘continental’ philosophy (e.g. Spinoza, Sloterdijk, Stengers, Badiou). I did my MA in Philosophy in Amsterdam and Berlin, which I concluded with a thesis on the concept of technology in the work of Simondon, Deleuze, and Latour. Subsequently, I obtained my MSc in Science and Technology Studies in Amsterdam (cum laude), where I wrote a thesis on the Internet of Things, RFID, and data visualization. During my studies and before embarking on the PhD journey, I worked in software development.

26 January
Dirk Roorda, DANS
Erik-Jan Bos, University of Utrecht

Letters from Descartes in Digital Format; Circulation of Knowledge Collaboratory at Huygens Institute
The Circulation of Knowledge project at the Huygens Institute is creating a super-digital edition of many letters of scholars who were active in the Dutch Republic of the 17th century. The letters are brought together in one environment with a unified metadata format.

There they are collectively subjected to analytical tools, and the result is a set of texts that can be searched by concept, time, place and person. Erik-Jan Bos will give an introduction to the Descartes corpus of letters. Which editions exist, what are their merits, and on what data is the CKCC representation based? Furthermore, he will sketch the expected benefits of the CKCC project for historians. Dirk Roorda will show what was needed to obtain a workable digital representation of this corpus. He will demonstrate the conversion of a basic Japanese digitisation into XML (Text Encoding Initiative) and of Descartes’ mathematical formulas into TeX. He will reflect on what was easy and what was hard, and on why it is useful to codify the conversion in a repeatable script.
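As an illustration of what such a conversion target can look like, the sketch below assembles a minimal TEI-style XML record for one letter using the standard library. The element names follow common TEI conventions but are simplified assumptions, not the actual CKCC schema, and the letter metadata is invented.

```python
import xml.etree.ElementTree as ET

def encode_letter(sender, recipient, date, body_text):
    """Build a minimal TEI-style record for a single letter.
    The structure (teiHeader with metadata, text with paragraphs)
    is a simplified sketch of TEI conventions, not the CKCC schema."""
    tei = ET.Element("TEI")
    header = ET.SubElement(tei, "teiHeader")
    ET.SubElement(header, "author").text = sender
    ET.SubElement(header, "addressee").text = recipient
    ET.SubElement(header, "date").text = date
    text = ET.SubElement(tei, "text")
    ET.SubElement(text, "p").text = body_text
    return ET.tostring(tei, encoding="unicode")

# Invented example letter:
xml = encode_letter("Descartes", "Mersenne", "1637-05-27",
                    "Mon Reverend Pere, ...")
print(xml)
```

Once letters are in a uniform XML structure like this, searching the corpus by person, date or place reduces to querying the metadata elements, which is what a repeatable conversion script makes possible at scale.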

Biographical information

Erik-Jan Bos is a research fellow at the Department of Philosophy at Utrecht University (Descartes Centre/Zeno). He is currently working on a grant from the National Endowment for the Humanities. He obtained his PhD in 2002 with a critical edition of Descartes’ correspondence with Regius. Since 2002 he has been working on a new complete edition of Descartes’ letters, to be published by OUP. He is a member of the steering committee of CKCC, collaborating with developers to design CKCC research tools.
(Presentation slides)

Dirk Roorda is a researcher at Data Archiving and Networked Services (DANS) in The Hague. His work is geared towards the processes of ingesting, storing and disseminating digital resources in ways that make them optimally available for re-use by researchers. He is involved in the CKCC project as a member of its steering committee and as the designer of an archiving solution for its materials. He studied mathematics and computing science in Groningen in the eighties and obtained a PhD in mathematical logic at the University of Amsterdam in 1991. He has studied field linguistics and classical Hebrew at the Summer Institute of Linguistics and worked for 10 years as a software engineer for Kluwer and related companies.

(Presentation slides)

19 January
Stephanie Steinmetz, University of Amsterdam
Clement Levallois, Erasmus University Rotterdam

Methods Workshop Series:
The goal of this workshop is to provide a general introduction to the topic (part 1: lecture) and to explore one tool from the domain (part 2: computer lab).
13.00-14.15: Part 1: Lecture
Two lecturers (Andrea Scharnhorst and Clement Levallois) will present key concepts, actors, tools and examples of applications to give a wide view of visual applications and highlight the purposes they can serve in the humanities and social sciences (HSS). We choose not to focus on one particular discipline, but to show how practices born in different fields (academic or not) can be translated for use in HSS. A booklet will be handed out with the slides and references cited in the lecture.
14.15-14.45: Coffee break
14.45-16.30: Part 2: In the computer lab
In the computer lab, we will use VOSviewer, software for the visualization of networks and textual data created by Nees Jan van Eck and Ludo Waltman at the CWTS (University of Leiden). Ludo Waltman will be present. Participants will be guided through the visualization of corpora and will be invited to reflect critically on the output: what does the visualization show, and how is it achieved? No programming skills are required for this exercise.
After the workshop…
This workshop is the first in a series. The following workshops will adopt a similar format: the introduction of a broad topic presented by a lecturer, followed by a hands-on PC session. Stay tuned to the mailing lists of the eHumanities Group and the Erasmus Studio for the announcements!

Contact: Stephanie Steinmetz ( and Clement Levallois (

!!Note: this workshop runs from 13.00 until 17.00 hrs. Contrary to what was announced earlier, this workshop will also take place in the Symposium room at the Meertens Institute.

12 January
No eHg meeting:
ACUMEN project meeting at Tallinn

22 December 2011 – 5 January 2012
No eHg Research Meeting; holiday period


15 December
Ralph Schroeder
Oxford Internet Institute at the University of Oxford

Digital Research and Styles of Knowing across the Disciplines
The idea of ‘styles of knowing’ has been proposed by Ian Hacking as a way to understand how research works across the sciences. His identification of six such styles fits current digital transformations of research in many ways, but raises a number of questions: which styles lend themselves to different ways of using digital tools and data? And if different styles are unevenly distributed across disciplines in digital research, what determines this unevenness: the content of disciplines and their objects of research? Or are digital tools and data, or the way they are socially organized, driving the research agenda? Or perhaps funding opportunities, or movements to computerize research in the manner of social movements? Intriguingly, it seems that some of the styles exemplified in e-Science can also be found in e-Humanities and in areas of e-Social Science that are not normally thought of as scientific. Hence we can also ask: do styles pertain to science, or to ways of knowing beyond the sciences? How is digital research configured such that it promotes different patterns of advance within and between disciplines? These questions cannot be answered with certainty, as digital research is science-in-the-making. A number of illustrations and signposts can be used, however, to improve our understanding of this research front.

Ralph Schroeder is Professor at the Oxford Internet Institute at the University of Oxford. He is director of research at the Institute and director of its Master’s degree in ‘Social Science of the Internet’. His books include ‘Rethinking Science, Technology and Social Change’ (Stanford University Press 2007) and ‘Being there Together: Social Interaction in Virtual Environments’ (Oxford University Press 2010). Before coming to Oxford, he was Professor at Chalmers University in Gothenburg, Sweden. His current research is focused on the digital transformations of research.

(Presentation slides)

Sally Wyatt
Manifesto for e-Humanities?
(Presentation slides)

8 December
No eHg Research Meeting; WTMC Annual Meeting, Amsterdam

1 December
Peter van Kranenburg, Meertens Institute
Dániel Biró, University of Victoria

Peter van Kranenburg

On Computational Modeling of Melodic Variation among Folk Song Melodies – From WITCHCRAFT to Tunes & Tales
Within the WITCHCRAFT project, we performed a computational investigation of melodic similarity in a large collection of Dutch folk song melodies hosted at the Meertens Institute (Amsterdam). We aim to relate our computational solutions to existing knowledge from ethnomusicological studies of Western folk song melodies. The most important concept from the musical domain is that of the tune family. It appears that the existing algorithm for sequence alignment (Needleman-Wunsch), with appropriate musicological knowledge incorporated, is adequate for retrieving members of the same tune family from the full collection of melodies. Based on the automatic similarity assessments, the musicological collection specialists of the institute reconsidered the classification of several melodies. Furthermore, they could classify several melodies they had not been able to recognize before. Although sequence alignment serves as a valuable algorithm for computing melodic similarity, it does not explain the intricacies of the melodic variation among the folk song melodies.
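For readers unfamiliar with the technique, the sketch below shows plain Needleman-Wunsch global alignment applied to two toy pitch sequences. The scoring scheme here is a generic placeholder, and the melodies are invented; the project's actual measure incorporates musicological knowledge into the scores.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score between two sequences via dynamic
    programming: score[i][j] is the best score aligning a[:i] with b[:j]."""
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap        # a-prefix aligned against gaps
    for j in range(1, m + 1):
        score[0][j] = j * gap        # b-prefix aligned against gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + sub,  # (mis)match
                              score[i - 1][j] + gap,      # gap in b
                              score[i][j - 1] + gap)      # gap in a
    return score[n][m]

# Two hypothetical melody variants as MIDI pitch sequences:
tune_a = [60, 62, 64, 65, 67]
tune_b = [60, 62, 63, 65, 67]   # one note altered
print(needleman_wunsch(tune_a, tune_b))  # → 3 (four matches, one mismatch)
```

Members of the same tune family would score high against each other under such a measure, which is the basis for retrieving them from the full collection.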

(presentation slides)

Dániel Péter Biró & Peter van Kranenburg

Computational Analysis of Jewish and Islamic Chant
The cantillation signs of Jewish Torah trope have been of particular interest to chant scholars interested in the gradual transformation of oral music performance into notation. Each sign, placed above or below the text, acts as a “melodic idea” which either connects or divides words in order to clarify the syntax, punctuation and, in some cases, meaning of the text. Unlike standard music notation, the interpretations of each sign are flexible, influenced by regional traditions, the practices of given Jewish communities, larger musical influences beyond Jewish communities, and improvisatory elements incorporated by a given reader. In this talk we present our collaborative work in developing and using computational tools to assess the stability of the melodic formulas of cantillation signs across different performance traditions, including particular Dutch examples. We also show that a musically motivated alignment algorithm obtains better results than the more commonly used dynamic time warping method for calculating similarity between pitch contours. Using a participatory design process, we developed an interactive web-based interface that enables researchers to explore chant recordings aurally and visually and to examine the relations between signs, gestures and musical representations.
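Dynamic time warping, the baseline method mentioned above, can be sketched in a few lines. The contours below are invented toy data; the point of DTW is that it absorbs differences in timing when comparing pitch contours.

```python
import math

def dtw(x, y):
    """Dynamic time warping distance between two pitch contours, using
    absolute pitch difference as the local cost. This is the standard
    textbook formulation, not the musically motivated alignment the
    talk compares it against."""
    n, m = len(x), len(y)
    d = [[math.inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # step in x only
                                 d[i][j - 1],      # step in y only
                                 d[i - 1][j - 1])  # step in both
    return d[n][m]

# A contour and a time-stretched variant of it (MIDI pitches):
c1 = [60, 62, 64, 62, 60]
c2 = [60, 60, 62, 64, 64, 62, 60]
print(dtw(c1, c2))  # → 0.0: the warping absorbs the tempo difference
```

Because DTW only compares pitch values point by point, it ignores musical structure such as metrical position, which is one reason a musically motivated alignment can outperform it.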

(presentation slides)

Peter van Kranenburg studied Musicology at Utrecht University and Electrical Engineering at Delft University of Technology. From 2006 till 2010, he was a PhD student in the CATCH WITCHCRAFT project (Utrecht University and Meertens Institute). In this project, he designed computational similarity measures for Dutch folk song melodies. Currently, he is a postdoctoral researcher in the Tunes & Tales project at the Meertens Institute.

Dániel Péter Biró is Associate Professor of Composition and Music Theory at the University of Victoria. He completed his PhD in composition and Judaic studies at Princeton University in 2004; his dissertation was a comparative study of early notational practices. He has conducted research on Hungarian folk music at the Academy of Sciences in Budapest, on Jewish music in Israel and, most recently, on Jewish and Islamic chant as practiced in the Netherlands. Awarded the Hungarian Government’s Kodály Award for Hungarian composers, he has had his compositions performed around the world. He is currently Visiting Professor at Utrecht University and co-editor of Search – Journal for New Music and Culture.

24 November
Patricia Alkhoven & Hennie Brugman, Meertens Instituut,  CATCHPlus

CATCHPlus: tools for better and permanent access to Cultural Heritage
CATCHPlus builds on the running program “Continuous Access To Cultural Heritage” (CATCH), run by NWO (The Netherlands Organization for Scientific Research). In CATCH, computer scientists, humanities scholars and the cultural heritage sector have set up a unique collaboration with collection-managing institutions. Innovation, interoperability and cooperation are central issues in this project.

CATCH started in 2004 and a number of its subprojects have since been finished. In CATCHPlus, the prototypes and demos from eight of the finished projects will be converted into reliable tools: solid software that can be used in multiple institutions. From prototype to reliable application!

The main purpose of CATCHPlus is to valorize scientific research results into usable tools and services for the entire Dutch heritage sector. This software leads to better disclosure and greater accessibility of the collections of heritage institutions. The unique cooperation between large heritage institutions, universities and companies in CATCHPlus creates a new crossroads of IT and cultural heritage. The products of CATCHPlus promote cooperation and coordination in the information infrastructure of the heritage sector.

CATCHPlus will run until mid-2012. By that time, the products will need to have been implemented in heritage institutions. New business plans and forms of cooperation are being developed to guarantee their sustainability.

Bios: Patricia Alkhoven & Hennie Brugman
Dr Patricia Alkhoven is Project Manager of the CATCHPlus project. She will set out the objectives and results of the project. Before Patricia joined the Meertens Institute she worked for several organizations in the field of ICT and Cultural Heritage. She has a PhD in Architectural History (The Changing Image of the City, 1993) and worked for twelve years at the Research & Development Department of the Koninklijke Bibliotheek, the National Library of the Netherlands. She was Director of the Universal Decimal Classification Consortium 2006-2007 and Head of Collections of the Netherlands Architecture Institute 2007-2010.

Drs Hennie Brugman is Technical Coordinator of the CATCHPlus project. During the presentation, he will provide details about the software tools in development and the services for the cultural heritage sector. Brugman received an MSc degree in theoretical physics from the University of Nijmegen in 1987. From 1991 until 2010 he worked at the Max Planck Institute for Psycholinguistics in Nijmegen as a scientific programmer and project coordinator. He has been involved in several national and European research projects, generally focusing on the construction and exploitation of tools and infrastructure for multimedia linguistic corpora. In 2002 he received the Heinz Billing Award for the Advancement of Scientific Computation. From 1993 until 2005 he designed and developed annotation tools and models for time-based media (video, audio), such as the widely used ELAN annotation tool. From 2005 until 2009 he worked as a scientific programmer in one of the CATCH research projects; during that period he coordinated the software development efforts of ten CATCH projects. Since 2009 Brugman has been working as technical coordinator of the CATCHPlus valorization project.

17 November
Martin Doerr
Information Systems Laboratory
Centre for Cultural Informatics of the Institute of Computer Science
Foundation for Research and Technology – Hellas, Greece

The Dream of a Global Network of Knowledge
Digital libraries are developing more and more from collections of literature, similar to classical libraries, into aggregation services for diverse forms of information assets in support of research and the interested public. Access to primary cultural information, as for instance provided by museums and archives, but also access to individual facts and statements contained in literature, has completely different requirements from traditional library access. Primary information consists of bits and pieces that, brought into appropriate relationships, allow for reconstructing a possible past and for giving interpretations of the intellectual, social and psychological backgrounds. As such it is prior to having a typical subject, and if it has one, that subject is often unrelated to the particular information users are interested in. Rather, users seek contexts in which there is a probability of finding evidence for the topic under investigation. In order to be attractive, a digital information service on primary cultural resources must be integrated and comprehensive and provide adequate answers to research questions, i.e., it should present relevant relationships and contextual information, not just “the document you requested”. We regard as some of the grand challenges for digital research libraries: a) the provision of global, integrated and extensible ontologies of contextual relationships under which relevant data and metadata can be searched and explored; b) scalable, distributed co-reference resolution mechanisms that integrate disparate information assets into coherent networks of knowledge (Linked Open Data is still a naive step in this direction); and c) adequate query and exploration methods for the emerging huge and complex integrated information spaces. The presentation will describe solutions and new approaches to these goals, and argue for their general feasibility.

(presentation slides)

Bio: Dr. Martin Doerr is a Principal Researcher in the Information Systems Laboratory and the head of the Centre for Cultural Informatics of the Institute of Computer Science, Foundation for Research and Technology – Hellas, Greece. He has published extensively in information science and computer science, also at the interfaces with digital humanities and cultural heritage, on topics such as “A Distributed Object Repository for Cultural Heritage” (2010) and “Ontology-Based Metadata Integration in the Cultural Heritage Domain” (2007), to name only two examples.

10 November
Nicholas Jankowski
eHumanities Group Visiting Fellow

Doing Digital Scholarship: Principles & Practices, Tools & Resources
The tools and resources for conducting scholarship across disciplines in the humanities and social sciences (HSS) have expanded almost exponentially during the past decade. Instructional materials for ‘doing scholarship’ in a digital and networked environment have not kept pace with these developments. In this presentation I will outline a textbook being prepared for Polity Press about these developments. Sample materials from the textbook Doing Digital Scholarship (e.g., introductory chapter, tool descriptions, student exercises, overview of resources, outline of concluding epilogue) will be made available on a closed-access website prior to the presentation (the URL for this material will be distributed a week prior to this Research Meeting). I will also consider how this relatively conventional printed textbook will be complemented by a website reflecting some features of an enhanced publication.

(presentation slides)

3 November
Hildelies Balk (KB – Project Director)
Clemens Neudecker (KB – Technical Project Manager)
Staff member Instituut voor Nederlandse Lexicologie (INL)

IMPACT Centre of Competence in Text Digitisation
Over the last few years mass-digitisation has become one of the most prominent issues in the museum, library and archive world. A number of leading institutions in Europe are undertaking large-scale digitisation projects, scanning millions of pages each year. High costs, inefficient sharing of best practice, and tools not optimised for processing historical text often result in digitised material becoming available too slowly, in too small quantities and from too few sources. The IMPACT project (2008-2011) has worked on removing these obstacles and pushed innovative ideas in OCR and language technology for historical document processing and retrieval.

IMPACT will continue as a Centre of Competence, with the aim of making the digitisation of historical printed text in Europe better, faster and cheaper, and of providing the tools, services and mechanisms for further advancement in this field. The IMPACT Centre of Competence (launched on 25 October 2011) brings together leading lights involved in all aspects of digitisation and gives invaluable access to a network of experts, research and content institutions throughout the industry.

More information can be found on the project website or on the newly launched Centre of Competence website.

Presentation slides IMPACT lexica

Presentation slides Clemens Neudecker

Presentation slides Hildelies Balk

Presentation slides IBM CONCERT pilots

27 October
Clifford Tatum
Centre for Science and Technology Studies
Leiden University

Beyond Open Access: A framework for openness in scholarly communication
In spite of broad support across disciplines, only a small percentage of scientific and scholarly publications are available through open access. At the same time, increased openness among informal modes of scholarly communication is challenging normative conceptions of open science. The juxtaposition of widespread adherence to traditional publishing models, which are typically not open access, and increased openness among informal modes of scholarly communication raises some interesting questions about emerging new configurations of open science.

On one hand, academic publishing, also known as formal scholarly communication, is slow to change in the face of vast potentials for using digital media to increase openness. Publication of research output is a fundamental component of scientific progress. New knowledge builds on existing knowledge, and publication of new knowledge creates possibilities for future knowledge. Publishing also plays a crucial role in the careers of individual researchers. Open Access has been shown to increase the dissemination of new knowledge; however, full adoption seems to be stalled. On the other hand, there is presently a wide variety of openness initiatives within the realm of informal scholarly communication. Such projects include, for example, enhanced publications, draft-manuscript repositories, linked-data repositories, open lab notebooks, academic blogs, and implementations of structured content ontologies. These projects are demonstrating new possibilities of openness related to increased transparency and improved content interoperability. This is possible in part because the realm of informal scholarly communication is typically not included in the formal metrics of scientific impact and individual career advancement. Informality can facilitate innovation and experimentation, but it also complicates systematic analysis of whether, and in what ways, these new configurations of openness contribute to open science.

Openness has evolved differently in formal and informal communication contexts. In the existing research on scholarly communication, the formal component is typically privileged over the informal. This means we know considerably less about informal scholarly communication, which has become an interesting context for emerging forms of open science. To address this, I propose an analytical framework that foregrounds the relationship between human agency and social structures with regard to technological systems. As such, analytical focus is aimed at individual acts of openness framed as the result of interaction between human agency, social structure in the form of situated practices, and material structure in the form of digital media.

20 October
Ana Raus
Maastricht University

Interactivity and its discontents
Part of the newspeak of the digital wave, the concept of interactivity bundles together enthusiasm and uneasiness. The former comes from the high hopes and expectations around the interactive possibilities brought forward by the digital medium; the latter stems from the difficulty of defining the concept and applying it productively in practice. When looking for a definition, the complexity of the term and the many disciplines it feeds upon become apparent, as well as the myriad levels, types, and forms of interactivity in computer-mediated communication. Although strongly connected to digital technologies, interactivity is also rooted in ‘traditional’ interaction; therefore older theories are revisited together with new ideas to offer a better understanding of the concept. Besides investigating what interactivity means, in this research I also looked at how the concept could be used as a research tool. In particular, when applied to the publishing world, it is interesting to use interactivity as a framework to analyze enhanced publications. The results of a study on the SURFfoundation Enhanced Publication projects will be presented, as well as a discussion of future trends in scholarly communication and the publishing world at large with regard to interactivity.

(Presentation slides)

Sally Wyatt
eHumanities Group

Enhancing Virtual Knowledge
MIT Press recently agreed to publish the book Virtual Knowledge and also expressed strong interest in collaborating with the editors and authors in preparing a Web-based complement to the book. This complement could extend the preliminary work undertaken to develop an ‘enhanced publication’ for this and three other books as part of the recently completed eHumanities Group Enhanced Publications Project. Part of the eHg Research Meeting on 20 October is intended to explore ways in which this extension might be undertaken. What forms of enhancement would the editors and authors like to include in the Website to complement the print publication? Which site functionalities present on the four individual book Websites and the overall project Website do not seem suitable to continue? Everyone is invited to examine the tentative Website for Virtual Knowledge and to bring to the meeting suggestions for improving and expanding this preliminary endeavor.

13 October
Stephanie Steinmetz (UvA):

Improving web survey methodology for the social and cultural sciences: some reflections

This presentation will provide an overview of the achievements and further plans of the project “Improving web survey methodology for the social and cultural sciences”, which started in September 2008 at the Erasmus Studio. The first part of the presentation will focus on findings with regard to methodological challenges. The increasing popularity of web surveys has triggered a heated debate about the quality of web surveys for scientific use. The most obvious disadvantage of web surveys is that they may not be representative, because the sub-population with Internet access is quite specific. Therefore, different weighting techniques, such as post-stratification and propensity score adjustment (PSA), have been proposed, particularly with regard to non-probability-based web surveys. The main focus within the project was to examine the potential and constraints of different weighting procedures for a continuous volunteer web survey.
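Of the weighting techniques mentioned above, post-stratification is the simplest to illustrate: each respondent is weighted by the ratio of their stratum's share in the population to its share in the sample, so over-represented groups are down-weighted. A minimal sketch (the strata and population shares below are made-up illustrative numbers, not project data):

```python
from collections import Counter

def poststratification_weights(sample_strata, population_shares):
    """Compute one post-stratification weight per respondent:
    weight = population share of their stratum / sample share of their stratum."""
    counts = Counter(sample_strata)
    n = len(sample_strata)
    sample_shares = {stratum: c / n for stratum, c in counts.items()}
    return [population_shares[s] / sample_shares[s] for s in sample_strata]

# Hypothetical example: a volunteer web survey over-represents the 'young' stratum
sample = ["young", "young", "young", "old"]   # 75% young, 25% old in the sample
population = {"young": 0.5, "old": 0.5}       # 50/50 in the target population
weights = poststratification_weights(sample, population)
# 'young' respondents get weight 0.5/0.75, 'old' respondents 0.5/0.25;
# the weights sum to the sample size, preserving the effective n
print(weights)
```

The same limitation discussed in the project applies to this toy version: post-stratification can only correct for imbalances on variables that are observed in both the sample and the population, which is why propensity score adjustment is considered as a complementary technique.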

The project also aims to explore the effect of web technologies on the community of sociologists and web survey researchers. Hence, the second part of the presentation will summarize the findings of a literature analysis undertaken to shed light on the discussion of whether the technological changes in collecting and analyzing data have led to a crisis in empirical sociology (part of the chapter ‘Sloppy data floods or precise social science methodologies?’ prepared for the book Virtual Knowledge). Finally, the presentation will briefly describe the plans of the newly created COST Action WEBDATANET.

(Presentation slides)

06 October
Discussion of e-Humanities group policy document.

29 September: no Research Meeting due to the Alfalab Final Conference: eHumanities Tools and Resources Symposium – The Future of Humanities: eHumanities in Practice.

22 September: no Research Meeting as many people are at the Oxford Internet Institute Symposium: ‘A Decade in Internet Time’.

15 September
At this Research Meeting, eHg scholars attending the Oxford Internet Institute (OII) Symposium ‘A Decade in Internet Time’ will walk through their presentations. Presenters from three symposium panels will speak:

· Panel: Scholarly Communication and the Internet

o Nick Jankowski: introduction to panel
o Clifford Tatum: Openness & the Formalization of Informal Scholarly Communication

· Panel: Virtual Knowledge

o Sally Wyatt: changing notions of expertise in knowledge production
o Matthijs Kouw: uncertainty in knowledge representations
o Clifford Tatum: Beyond Access: A Framework for Openness in Scholarly Communication

· Panel: Conceptualising Trust in Digital Environments

o Anna Harris & Sally Wyatt: Internet & trust

The full programme for the symposium is available here; information on the panel Scholarly Communication is available here.

In addition to the above presentations, ample time will be available for attendees at the meeting to share information on projects, publications and plans. This will allow us to ‘catch up’ with each other’s work.

8 September
Introduction to Computational Humanities projects:

• Louis Grijp, Meertens Institute: Tunes & Tales – Modelling Oral Transmission

• Rens Bod, UvA: The Riddle of Literary Quality

• Andrea Scharnhorst, DANS/e-Humanities Group: From Fragment to Fabric – Dutch Census Data in a Web of Global Cultural and Historic Information

Between 3 and 4 pm, participants were invited to work together to present their vision of what the e-Humanities Group will achieve over the next five years.

Elizabeth Losh from the University of California San Diego
Database Cinema and Scholarly Authorship: Rich Media Publishing and Media Visualization with Born Digital Online Video

Abstract: Far too often “digital humanities” really refers to “the digitized humanities”, which shows how little progress has been made from the discipline’s text-encoding roots and how disproportionate the role of print culture continues to be in the field. More attention must be paid to born-digital materials and to the central role of media production and curation by non-scholarly users, made possible by free or inexpensive tools for recording, editing, compositing, disseminating, and aggregating online video. Elizabeth Losh will highlight how her work with born-digital video materials, which range from political pop music remixes commemorating the Arab Spring to U.S., British, and Israeli government video archives for state-sanctioned “Gov 2.0” public relations campaigns, reflects a “humanities of the digital” in which the conflicts between regulation and content-creation invite new forms of media theorization. She will also discuss how Southern California functions as a hub of innovation in the digital humanities and describe new projects that she has been involved with at both USC’s Institute for Multimedia Literacy and UC San Diego’s Software Studies Initiative.

(Presentation slides)