Session recording

In brief, this presentation will include:
- An intro to IIIF for AV manifests
- An intro and tool walkthrough of our application, AudiAnnotate
- A focus on documentation and workflows for users

Presentation Slides

Further details:
How can linked data be accessible and useful for researching sound? Using IIIF and Web Annotations, both of which are based on linked data, AudiAnnotate, a collaboration between HiPSTAS (High Performance Sound Technologies for Access and Scholarship) and Brumfield Labs, gives researchers a way to add annotations to audio files and to preserve those annotations for research and sharing. This is an especially valuable resource for those researching audiovisual materials in archival collections at libraries, archives, and other cultural institutions. Such materials often contain intimate, valuable, and distinctive audio, yet there are often restrictions on exporting and using audiovisual materials outside of reading rooms.

AudiAnnotate provides a workflow for users to annotate archival audio files in Audacity, producing label files that can then be uploaded to the AudiAnnotate web application. Within the application, an IIIF manifest of annotations is played in the Universal Viewer. This IIIF environment allows annotation data to target sound data and be rendered onto a canvas where multiple sets of annotations can be compiled. The application also supports organizing and tabulating layers of annotations from Audacity; it runs on top of a GitHub repository that stores the annotation data for future use.

This presentation will introduce the AudiAnnotate project and share the annotation process and workflow for creating a manifest and storing annotations. It will be useful to those who work at cultural institutions and are interested in expanding resources for working with audiovisual materials, those who work with audiovisual materials in their research, and anyone curious about linked data and audio materials.
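To make the workflow described above more concrete, the sketch below shows, in rough terms, how Audacity labels can become Web Annotations on a IIIF Presentation 3 audio canvas. It is an illustrative example only, not AudiAnnotate's actual code or output format: the example.org URLs, the `labels.txt` filename, and the choice of the `commenting` motivation are assumptions made for the sketch. Audacity label exports are tab-separated lines of start time, end time, and label text, and IIIF 3 represents audio as a `Canvas` with a `duration`, painted by a `Sound` body, with annotations targeting time ranges via media fragments.

```python
import json

def parse_audacity_labels(path):
    """Read an Audacity label export: tab-separated start, end, label text (times in seconds)."""
    labels = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            start, end, text = line.split("\t", 2)
            labels.append((float(start), float(end), text))
    return labels

def build_manifest(audio_url, duration, labels, base="https://example.org/iiif"):
    """Assemble a minimal IIIF Presentation 3 manifest with one audio canvas
    and one AnnotationPage of commenting annotations (illustrative structure)."""
    canvas_id = f"{base}/canvas/1"
    annotations = [
        {
            "id": f"{base}/annotation/{i}",
            "type": "Annotation",
            "motivation": "commenting",
            "body": {"type": "TextualBody", "value": text, "format": "text/plain"},
            # A media fragment pins each annotation to a time range on the canvas.
            "target": f"{canvas_id}#t={start},{end}",
        }
        for i, (start, end, text) in enumerate(labels, 1)
    ]
    return {
        "@context": "http://iiif.io/api/presentation/3/context.json",
        "id": f"{base}/manifest.json",
        "type": "Manifest",
        "label": {"en": ["Annotated audio (example)"]},
        "items": [
            {
                "id": canvas_id,
                "type": "Canvas",
                "duration": duration,
                "items": [
                    {
                        "id": f"{canvas_id}/page/1",
                        "type": "AnnotationPage",
                        "items": [
                            {
                                # The painting annotation attaches the audio file to the canvas.
                                "id": f"{canvas_id}/painting/1",
                                "type": "Annotation",
                                "motivation": "painting",
                                "body": {
                                    "id": audio_url,
                                    "type": "Sound",
                                    "format": "audio/mpeg",
                                    "duration": duration,
                                },
                                "target": canvas_id,
                            }
                        ],
                    }
                ],
                "annotations": [
                    {
                        "id": f"{canvas_id}/annotations/1",
                        "type": "AnnotationPage",
                        "items": annotations,
                    }
                ],
            }
        ],
    }

if __name__ == "__main__":
    labels = parse_audacity_labels("labels.txt")
    manifest = build_manifest("https://example.org/audio/recording.mp3", 1800.0, labels)
    print(json.dumps(manifest, indent=2))
```

A manifest like this can be loaded in a IIIF AV viewer such as the Universal Viewer, which reads the canvas duration, plays the targeted audio, and surfaces the time-ranged annotations alongside playback.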