ELAG 2019 takes place over four days (day 0 to day 3). Day 0 (May 7) is reserved for pre-conference Bootcamps. The main conference, with talks and workshops, is scheduled for May 8 to May 10.
The match-making concept of "speed dating" is a structured networking activity that provides a fun way of meeting people. Within one hour, you will meet 10 people and get a chance to learn about them, their professional background, interests, and expertise – and they can meet you! We invite everyone to join in, whether this is your first time at ELAG or whether you have attended before: come and meet the community.
The session will be moderated by Dr. Christina Riesenweber, Freie Universität Berlin, and Beate Rusch, Kooperativer Bibliotheksverbund Berlin-Brandenburg.
The ELAG open planning meeting is open to all interested participants.
We discuss the future of ELAG, welcome new Program Committee members and talk about ELAG 2020.
You are very welcome to join and help us keep ELAG alive.
The National Library of Sweden has developed a new library cataloguing system. In June 2018 this system went into production, thus transitioning Libris, the Swedish Union Catalogue, from a closed MARC21 system into an open source system based on linked data and BIBFRAME 2.0. By doing so, we are now prepared to leverage RDF vocabularies and entity linking for improved data integration and reuse.
During development, our ambitions have shifted from promising visions to compromise and transitional pragmatism, weighing the risk of being stuck in old behaviours and structures against the disruptive change that a "linked entity"-based approach requires.
The dependency on MARC21 is so pervasive that everything from systems integration to cataloguing productivity has been contingent on its abundance of varying and overlapping details. Normalizing all this data and adjusting dependent behaviour by using the features of linked data is an ongoing challenge, which we've only begun to address.
This presentation will share our experience so far.
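As a toy illustration of the normalization challenge described above (replacing MARC21 string headings with links to shared entities), here is a minimal sketch in Python; the field tag, agent URI and JSON shape are illustrative assumptions, not Libris internals.

```python
# A hypothetical sketch of "linked entity" normalization: a MARC-style
# string heading is replaced by a link to a shared entity record.
# The agent URI and JSON shape are illustrative, not Libris internals.

AGENTS = {
    "Lagerlöf, Selma, 1858-1940": "https://example.org/agent/xv8bd2t1",
}

def normalize_contribution(marc_100_heading: str) -> dict:
    """Turn a MARC 100 heading into a BIBFRAME-style contribution."""
    uri = AGENTS.get(marc_100_heading)
    if uri:
        # Link to the shared entity instead of repeating the string.
        return {"@type": "Contribution", "agent": {"@id": uri}}
    # Fall back to a local (unlinked) agent description.
    return {"@type": "Contribution",
            "agent": {"@type": "Person", "label": marc_100_heading}}

linked = normalize_contribution("Lagerlöf, Selma, 1858-1940")
unlinked = normalize_contribution("Unknown, Author")
```

The point of the sketch is the fallback path: as long as not every heading resolves to a shared entity, both linked and string-based descriptions must coexist, which is exactly the transitional pragmatism described above.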
RERO, the Library Network of Western Switzerland, has a long-standing tradition of cooperation between libraries at a regional level. Since 2014, some key events have induced a major reorientation of RERO and brought it to propose an innovative project to reshape its business and organisation. The in-house development of an ILS was decided, motivated by the need for a modern software architecture and for a flexible solution independent of commercial providers.
Work began in 2018 and the first versions have been published on a freely accessible demo website. The software is based on the Invenio framework and is available as open source on GitHub. A dedicated team was set up at the network's central office and organised according to Scrum principles. After one year of development, an initial assessment can be made. Thus far, key achievements include an innovative check-in/check-out interface and the MEF service (Multilingual Entity File), which manages authorities in different languages coming from three reference libraries. The main challenges are multilingual authority data, the definition of a metadata model at the intersection of several standards, consortial governance and how to assess and satisfy diverse users' needs, the choice of programming tools among ever-changing technologies and, of course, the deadlines.
This presentation describes the context and motivation of the project and how the development process works within a small team in Switzerland. It shows the main aspects of Invenio from a functional point of view, focusing on the challenges faced and detailing the solutions found to address them.
FOLIO is a collaboration of libraries, developers and vendors building an open source library services platform. It supports traditional resource management functionality and can be extended into other institutional areas. FOLIO's development started in 2016, and it has had quarterly major releases since 2018.
FOLIO continuously evaluates and improves its software architecture and quality assurance processes to ensure that the open source library services platform offers a solution that scales, is secure and can adapt to future needs. Key features are multi-tenancy, micro-service principles, and software-as-a-service capabilities. The presentation explains how the FOLIO community organizes the software development process and gives insight into fundamental architectural aspects.
Julian Ladisch: Julian Ladisch works as a senior developer at the GBV headquarters in Göttingen and has been active in the FOLIO project since its beginning in 2016. He is a member of the FOLIO platform core team and the ERM team.
Maike Osters: Maike Osters is the FOLIO project lead at the hbz headquarters in Cologne. She is a member of the Product Council and involved in the ERM Sub Group.
Five years ago we started a project (https://coli-conc.gbv.de) to facilitate the creation and management of concordances between knowledge organization systems such as controlled vocabularies, classification schemes, and thesauri. Since then we have collected thousands of mappings, made them available together with their diverse vocabularies, and developed the web application Cocoda to easily create and evaluate mappings (https://github.com/gbv/cocoda). The talk will focus less on the outcomes of this project: the data is made freely available via web APIs, the data formats are documented, and the software is open source both by license and by development process. Instead we will describe the paths that have been taken (we had to throw away two working prototypes), explain technical decisions that have been made (we developed a new data format to express more than SKOS in JSON), and show remaining challenges (it can be hard to motivate librarians to try out new tools).
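As a rough illustration of why plain SKOS mapping properties are not enough, the sketch below models a mapping record in JSON with member sets on both sides, so that a single mapping can involve more than one concept; the field names are an assumption for illustration, not the project's actual format.

```python
# A hypothetical JSON shape for a concordance mapping: each side of
# the mapping is a set of concepts ("memberSet"), which a single SKOS
# mapping property between two concepts cannot express.
import json

mapping = {
    "from": {"memberSet": [{"uri": "http://example.org/ddc/612.112"}]},
    "to": {"memberSet": [{"uri": "http://example.org/rvk/WW_8800"}]},
    "type": ["http://www.w3.org/2004/02/skos/core#closeMatch"],
}

def concepts(side: dict) -> list:
    """Collect the concept URIs of one side of a mapping."""
    return [c["uri"] for c in side.get("memberSet", [])]

serialized = json.dumps(mapping)  # what a mapping web API could return
```

Because each side is a set, one-to-many and many-to-many mappings use the same structure as simple one-to-one mappings, which keeps the API and the editing tool uniform.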
Having started as a traditional cataloguing cooperative for French universities, Abes has gradually expanded to support the needs of its users – the librarians and their institutions. Nowadays, given the scientific and political importance of research visibility, librarians have a new role to play. As experts in author identification and the accurate attribution of publications, they are becoming proactive, offering services such as institutional repositories, researchers' pages and bibliometric applications. Inevitably, this shift means new working processes, most of them still to be invented.
Accepting the fact that manual unitary cataloguing is no longer enough, we need to invent new environments, both multiscale (unitary record, set of records, databases) and mixed (humans plus algorithms) for which cooperation and quality are still the pillars.
For cataloguers of this new era we offer new software dedicated to data curation: a full web application that can be adapted to various use cases over different entity types (people, organizations, works). Furthermore, this modular software is not bound to any particular data sources or targets and can be connected to intelligent APIs that facilitate human work. We hope our experience will resonate with other institutions with similar needs.
I want to present the first version of this tool, dedicated to the quality of links between person entities in our authority database IdRef and the Sudoc Union Catalog. How do we connect this application to existing workflows? How do we improve its usability? How do we follow and encourage the community growing around it? Eventually, we want cataloguers to make this tool their new favorite game, be the best players and get the crowd's cheers!
6 Lightning Talks, 5 minutes each
This talk presents the linked digital collection "Rainis and Aspazija" developed at the National Library of Latvia (NLL). The collection contains a wide variety of digital objects related to two famous Latvian poets and politicians, Rainis and Aspazija. It is the result of a pilot project exploring the use of Linked Data in cultural heritage collections.
The collection is different from "flat" digital object catalogues in that (1) its objects are interlinked and (2) information about objects and links between them is available as Linked Open Data.
The digital collection contains various types of content such as photos, archive documents and digitised first editions of poets' literary works. An important part of the collection is the correspondence between the two poets. It has been scanned, transcribed and enriched with experts' comments.
The interlinked nature of the collection comes from text annotation (performed manually by domain experts): textual content of a subset of objects is annotated with mentions of important entities (such as people, organisations and literary works). Information about these entities is added to the collection along with links between the digital objects and entities. These links provide users with richer options for exploring the collection.
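The annotation-and-linking step described above could be sketched roughly as follows; the identifiers and record shapes are assumptions for illustration, not the NLL production model.

```python
# A minimal sketch of how text annotations interlink a collection:
# each annotation ties a span of an object's text to an entity record,
# so the object becomes navigable via the entities it mentions.

ENTITIES = {
    "E1": {"label": "Rainis", "type": "Person"},
    "E2": {"label": "Aspazija", "type": "Person"},
}

def annotate(object_id: str, text: str, mentions: dict) -> list:
    """Create annotation links for each known mention found in the text."""
    links = []
    for surface, entity_id in mentions.items():
        pos = text.find(surface)
        if pos >= 0 and entity_id in ENTITIES:
            links.append({"object": object_id, "entity": entity_id,
                          "start": pos, "end": pos + len(surface)})
    return links

letter = "Dear Aspazija, I will arrive on Friday. - Rainis"
links = annotate("letter-042", letter, {"Aspazija": "E2", "Rainis": "E1"})
```

In the real collection the mention spans come from manual expert annotation rather than string matching; the sketch only shows the resulting link structure that makes richer exploration possible.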
In this talk I will share our experience in developing the collection and will cover the different parts of it: text annotation, the collection system itself and its Linked Data component. I will also cover the limitations of our initial text annotation approach and describe the new, custom semantic annotation system being integrated into the next revision of the digital collection.
The Bauhaus-Archive / Museum für Gestaltung carries out research on and presents the history and influence of the Bauhaus, the twentieth century’s most important college of architecture, design and art. Over the course of the decades the Bauhaus-Archive has built up the world’s largest collection of materials on the subject. To be able to present the collection more comprehensively, the Bauhaus-Archiv Berlin will receive a new building on the occasion of the centenary of the Bauhaus's founding.
Due to this, the museum itself will be closed in its 100th anniversary year, 2019. Although various events and exhibitions are planned in cooperation with colleagues in Weimar and Dessau, among others, public access to the collections and holdings is restricted. In this context, the expansion of the online presence of the collections and holdings of the Bauhaus-Archive is a great opportunity.
The particular strength of the institution lies in its double function as archive and museum; of course, it also has a library. This leads to a heterogeneous software landscape: MuseumPlus is used for collection objects, Kalliope for the document archive and AllegroC for the library. In recent years, the ERDF-funded Open Archive Walter Gropius project has provided a basis for driving forward the museum's digital strategy over the next few years. Within the scope of this project, a very specific online view of the collections was implemented, limited to documents from the archive.
The long term challenge of the Bauhaus-Archive’s digital strategy is to integrate data from the various sources into a coherent online presence. Currently, work is being carried out on the digitalisation of museum exhibits, the introduction of controlled vocabularies and the provisioning of LIDO metadata, amongst others. In order to overcome shortcomings of eMuseumPlus, a prototypical alternative based on Elasticsearch is being explored.
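As a hedged sketch of the integration problem, the following shows how records from the three source systems might be normalized into one common JSON shape before indexing in a search engine such as Elasticsearch; all field names and mappings are assumptions for illustration, not the actual prototype.

```python
# A hypothetical sketch of merging records from heterogeneous systems
# (museum, archive, library) into one shared schema suitable for a
# search index. Source field names are invented for illustration.

FIELD_MAP = {
    "museumplus": {"ObjTitle": "title", "ObjDate": "date"},
    "kalliope":   {"unittitle": "title", "unitdate": "date"},
    "allegro":    {"tit": "title", "jahr": "date"},
}

def normalize(source: str, record: dict) -> dict:
    """Map source-specific field names onto the shared schema."""
    doc = {"source": source}
    for src_field, target in FIELD_MAP[source].items():
        if src_field in record:
            doc[target] = record[src_field]
    return doc

doc = normalize("kalliope", {"unittitle": "Letter to Gropius",
                             "unitdate": "1923"})
```

Keeping the `source` field in every document lets a single index answer both cross-collection queries and per-system views, which is one common motivation for such a normalization layer.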
This presentation will give an overview of the current state of affairs, of lessons learned and obstacles encountered, as well as an outlook on what is to come.
Kallías, the OPAC of the German Literature Archive in Marbach, is used by scholars worldwide as an information system and for access to literary sources. It provides five entry points to the collections: manuscripts, library objects, images and objects, holdings, and names, thus representing the high-quality cataloguing in the institution's different divisions. Since 2017 a new discovery layer has been developed to integrate all sources into a cross-media, tailor-made online catalog. Although it uses a classic Solr-based (non-linked-data) approach, the new catalog makes productive use of authority data and of relationships between works and special collections.
The new catalog is still in closed beta and is going to be released at the end of 2019. The presentation will focus on the custom data processing pipeline, which is based on the open source tools Pandas (a Python library) and OpenRefine. Four million records are extracted from the local ILS, transformed into a tabular format, manipulated with custom rulesets, enriched with external data sources and loaded into a Solr index every day. The pipeline is orchestrated with simple Bash shell scripts, which makes it easy to extend the workflow with other command-line tools. By making legacy ILS data available in OpenRefine, library staff are enabled to use their data in other contexts (e.g. for digitization projects) and to publish it in different formats (e.g. EAD-XML for the Kalliope union catalog).
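The transform step of such a pipeline can be sketched as follows. This is a condensed, standard-library-only illustration (the pipeline described above uses Pandas and OpenRefine), and the ruleset and field names are assumptions for illustration.

```python
# A toy version of the daily extract-transform-load run: tabular ILS
# rows are cleaned with a small ruleset and turned into JSON documents
# ready for posting to a Solr index. Rules and fields are invented.
import csv
import io
import json

# A custom ruleset: map internal division codes to display labels.
RULES = {"entry_point": {"hs": "Manuscripts", "bib": "Library objects"}}

def transform(tsv_text: str) -> list:
    """Apply the ruleset to each row and emit Solr-style documents."""
    docs = []
    for row in csv.DictReader(io.StringIO(tsv_text), delimiter="\t"):
        code = row["entry_point"]
        row["entry_point"] = RULES["entry_point"].get(code, code)
        docs.append(row)
    return docs

tsv = "id\ttitle\tentry_point\n1\tKafka letter\ths\n"
solr_docs = transform(tsv)
payload = json.dumps(solr_docs)  # body for an HTTP POST to Solr
```

Because each stage reads and writes plain files, the same workflow can be chained from Bash with other command-line tools, which matches the orchestration approach described above.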
Specialized information services – but also many libraries, for their own data collections – face the problem that they lack the data sources needed to link journals to related articles. As a result, scientists and students depend on databases or mega-indices (such as Primo Central) to find relevant (technical) articles. Especially with the integration of mega-indices into a (library) portal, it is more than complicated to offer appropriate subject-based filtering of articles.
To solve this problem, the Cooperative Network of Berlin and Brandenburg Libraries (KOBV) offers a new service: the holdings of specialized information services or other library portals can now be enriched with their own data. To realize this data enrichment, a unique identifier is necessary. In the simplest case there is a DOI through which information can be merged. Another possibility is direct retrieval from previously selected record sets (via stored search queries on external data sources). Here, KOBV prototypically accesses freely available article data sets (e.g. from Crossref). Using ISSN matching, suitable articles from these data sets can be assigned to the corresponding journals. Licensing and technical information from the Electronic Journals Library (EZB) can be added, allowing both a subject-based view of the institution's holdings and an excerpt of all titles licensed by the library itself. In addition, there is the possibility of using the EZB linking service by enriching records with the EZB ID: users are then offered links to journal content that take locally available access rights into account.

A cost-neutral offering is planned that combines journals and related articles. With this service, related articles are quickly findable and the user is provided with licensing and subject information at the same time. The reusability of the data is ensured by the use of the MARC-XML format and, if necessary, by a web-based export function.
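The ISSN-matching step can be sketched as follows; the record shapes are illustrative assumptions, not the KOBV implementation.

```python
# A minimal sketch of ISSN matching: article records from an open
# source such as Crossref are assigned to the journals a library
# holds, so holdings can be enriched with article-level data.

journals = {"0028-0836": {"title": "Nature", "ezb_id": "EZB-1"}}

articles = [
    {"doi": "10.1000/demo1", "issn": "0028-0836", "title": "An article"},
    {"doi": "10.1000/demo2", "issn": "1234-5678", "title": "No match"},
]

def enrich(journals: dict, articles: list) -> dict:
    """Group matching article DOIs under their journal's ISSN."""
    matched = {issn: [] for issn in journals}
    for article in articles:
        if article["issn"] in matched:
            matched[article["issn"]].append(article["doi"])
    return matched

matches = enrich(journals, articles)
```

Articles whose ISSN does not occur in the holdings simply fall away, so the resulting view contains only article-level data for journals the institution actually offers.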
The presentation will walk through the different implementation approaches and illustrate the particular benefits of this service for specialized information services.
6 Lightning Talks, 5 minutes each
Bye Bye Berlin; Hello X, Evaluation