{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:59:31.062900Z"
},
"title": "Crossing the SSH Bridge with Interview Data",
"authors": [
{
"first": "Henk",
"middle": [],
"last": "Van Den Heuvel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Radboud University",
"location": {
"addrLine": "Erasmusplein 1",
"settlement": "Nijmegen",
"country": "the Netherlands"
}
},
"email": "h.vandenheuvel@let.ru.nl"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Spoken audio data, such as interview data, are a scientific instrument used by researchers in various disciplines crossing the boundaries of the social sciences and humanities. In this paper, we take a closer look at a portal designed to perform speech-to-text conversion on audio recordings through Automatic Speech Recognition (ASR) in the CLARIN infrastructure. Within the cross-domain EU cluster project SSHOC, the potential value of such a linguistic toolkit for processing spoken language recordings has found uptake in a webinar on the topic and in a task addressing the audio analysis of panel survey data. The objective of this contribution is to show that the processing of interviews as a research instrument has opened up a fascinating and fruitful area of collaboration between the Social Sciences and Humanities (SSH).",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Spoken audio data, such as interview data, are a scientific instrument used by researchers in various disciplines crossing the boundaries of the social sciences and humanities. In this paper, we take a closer look at a portal designed to perform speech-to-text conversion on audio recordings through Automatic Speech Recognition (ASR) in the CLARIN infrastructure. Within the cross-domain EU cluster project SSHOC, the potential value of such a linguistic toolkit for processing spoken language recordings has found uptake in a webinar on the topic and in a task addressing the audio analysis of panel survey data. The objective of this contribution is to show that the processing of interviews as a research instrument has opened up a fascinating and fruitful area of collaboration between the Social Sciences and Humanities (SSH).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Spoken audio data, such as interview data, are a scientific instrument used by researchers in various disciplines. These disciplines span the social sciences and the humanities. An oral historian will typically approach a recorded interview as an intersubjective account of a past experience, whereas another historian might consider the same source of interest only because of the factual information it conveys. A social scientist is likely to try to discover common themes, similarities, and differences across a whole set of interviews, whereas a computational linguist will rely on counting frequencies and detecting collocations and co-occurrences for similar purposes. On the other hand, sociologists who conduct interviews often seek to understand their interviewees in the same way as (oral) historians do (Scagliola et al., 2020).",
"cite_spans": [
{
"start": 805,
"end": 829,
"text": "(Scagliola et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction: Cross Disciplinary Use of Interview Data",
"sec_num": "1."
},
{
"text": "The question then arises as to how the various disciplines can benefit from the large number of freely available transcription, annotation, linguistic, and emotion recognition tools. We should take into account that most scholars are not familiar with each other's approaches and hesitate to take up technology. When software is used, it is often proprietary and binds scholars to a particular set of practices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction: Cross Disciplinary Use of Interview Data",
"sec_num": "1."
},
{
"text": "To clarify the situation, a multidisciplinary international community of experts organised a series of hands-on workshops with scholars who work with interview data, and tested the reception of a number of digital tools that are used at various stages of the research process. We engaged with tools for transcription, annotation, analysis, and emotion recognition. The workshops were held in Oxford, Utrecht, Arezzo, Munich, Utrecht, and Sofia between 2016 and 2019, and were mostly sponsored by CLARIN. Participants were recruited among communities of historians, social science scholars, linguists, speech technologists, phonologists, archivists, and information scientists. The website https://oralhistory.eu/ was set up to communicate across disciplinary borders. For a full account of experiences, we refer to Scagliola et al. (2020).",
"cite_spans": [
{
"start": 821,
"end": 845,
"text": "Scagliola et al., (2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction: Cross Disciplinary Use of Interview Data",
"sec_num": "1."
},
{
"text": "Through these workshops it became ever clearer that, despite the different scientific methods of analysis used by these researchers, the core processing methods for this kind of data are cross-disciplinary. Creating transcriptions with an appropriate level of detail is one of the initial and most important steps in spoken audio data analysis, but this step can also be very time-consuming. This is why researchers can greatly benefit from at least partial automation of the transcription process. However, choosing high-quality tools and learning how to use them is not always a straightforward process, and researchers can quickly lose their enthusiasm for automation for fear that the automation process might be too complex or non-transparent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction: Cross Disciplinary Use of Interview Data",
"sec_num": "1."
},
{
"text": "In this paper, we take a closer look at a portal designed to perform speech-to-text conversion on audio recordings of interviews through Automatic Speech Recognition (ASR), with an option to manually correct the text output (section 2). Then we point to a number of options for applying NLP analysis tools to the resulting text (section 3), and finally we address activities organized in the SSHOC project to set up a bridge spanning the SSH communities using ASR for recorded audio materials (section 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction: Cross Disciplinary Use of Interview Data",
"sec_num": "1."
},
{
"text": "Automatic speech recognition (ASR) has reached a performance level where, under favorable acoustic conditions, a quality of transcription can be achieved that is a sufficient starting point for many researchers for subsequent (domain-specific) text analysis (labelling, encoding, and so on). An additional advantage of using ASR for transcription purposes is that the output comes with time stamps of the words, locating them in the original audio stream and permitting seamless subtitling of audio and video recordings. Draxler et al. (2020) describe a web portal developed for the CLARIN ERIC where researchers can upload audio recordings, use ASR engines for a variety of languages to obtain text transcriptions of the recordings, manually correct the transcriptions, and realign the corrected transcripts with the audio files. The portal is accessible via a login at https://clarin.phonetik.uni-muenchen.de/apps/oh-portal/. Upon entering the portal, the user sees the screen depicted in Figure 1, showing the three phases in the transcription process mentioned above. Draxler et al. (2020) give a detailed account of the various processing steps, the user agreements for the available speech recognisers (also with respect to privacy issues), the technical limitations of the portal, the performance one may expect, and guidelines for making audio recordings that are suitable for ASR processing.",
"cite_spans": [
{
"start": 1077,
"end": 1098,
"text": "Draxler et al. (2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 995,
"end": 1003,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Interview Data and ASR",
"sec_num": "2."
},
{
"text": "As pointed out in Draxler et al. (2020), we are well aware of the relevance of tools for follow-up analyses after the speech-to-text conversion in the portal. The current workflow implemented by the OH portal is derived from the requirements of speech technology development. However, the requirements of oral historians, but also of other humanities scholars and social scientists, are different. Studying the interaction between two people who construct meaning via a dialogue requires retrieving high-level information from the recordings; it is not only about 'what is said' but also about 'how it is said'. Scholars want to know: what is the major topic of the recording, what emotions can be observed, what are the named entities, what can be said about the regional background of the speaker, what relationships exist between historical data and audio recordings, etc. Trained human transcribers may extract this information, but this is a time-consuming manual process. Topic modelling, sentiment analysis, named entity recognition, dialect modelling, and information extraction or summarization are all active research areas in computational linguistics and speech processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "From ASR output to NLP",
"sec_num": "3."
},
{
"text": "In Scagliola et al. (2020) we presented an overview of the NLP analysis packages used in the workshops. These include lemmatizers, syntactic parsers, named entity recognizers, auto-summarizers, and tools for detecting concordances/n-grams and semantic correlations. Participants were given a live demo of the software tools, followed by some step-by-step guided exercises with data. The linguistic tools introduced were",
"cite_spans": [
{
"start": 3,
"end": 26,
"text": "Scagliola et al. (2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "From ASR output to NLP",
"sec_num": "3."
},
{
"text": "\u2022 Voyant (https://voyant-tools.org/), a lightweight text analysis tool that yields output on the fly \u2022 the Stanford CoreNLP (https://stanfordnlp.github.io/CoreNLP/), a linguistic tool that can automatically tag words in a number of different ways, such as recognizing part of speech, type of proper noun, numeric quantities, and more \u2022 Autosummarizer (http://autosummarizer.com/), a website which uses AI to automatically produce summaries of texts. \u2022 TXM, a more complex tool for 'textometry', a methodology allowing quantitative and qualitative analysis of textual corpora, by combining developments in lexometric and statistical research with corpus technologies (http://textometrie.ens-lyon.fr/?lang=en). It allows for a more granular analysis of language features, requiring the integration of a specific language model, the splitting of speakers, the conversion of data into computer readable XML language, and the lemmatization of the data. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "From ASR output to NLP",
"sec_num": "3."
},
{
"text": "Within the EU SSHOC project (which is focused on cooperation of the SSH communities in sharing data and tools), the potential value of CLARIN's linguistic toolkit for processing spoken language recordings has found uptake in general in the organization of webinars about the topic, and more specifically in Task 4.4, which addresses voice-recorded interviews and audio analysis. In this task we aim to introduce into LISS Panel surveys specific questions to which participants can respond with audio recordings. These questions typically relate to more general opinions on, for instance, finances, ethics, and politics. A use case will be started for Dutch in which the audio recordings will be transcribed using Dutch ASR and processed using further NLP tools, such as those for summarization, topic detection and, possibly, automatic translation. Special care will be given to GDPR-compliant data collection and processing (Emery et al., 2019).",
"cite_spans": [
{
"start": 905,
"end": 925,
"text": "(Emery et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Interviews, ASR and SSHOC",
"sec_num": "4."
},
{
"text": "In order to raise awareness of the potential benefits of ASR for the transcription of audio recordings, a webinar was organized by the dissemination team of SSHOC in which the background of the portal was addressed, followed by a tutorial on how to use it. A blog post about the webinar was published together with a YouTube podcast. There were 172 viewers of the webinar. The majority of participants came from EU countries, but the webinar was also followed by some participants from countries outside Europe (e.g. the USA, several African countries, and China). The great majority (approx. 70%) belonged to the categories \"Researchers, Research Networks and Communities\" and \"Universities and research performing institutions\". These two categories were followed by \"Research libraries and archives\", \"Research and e-infrastructures\" and \"Private sector and industry players\", which together accounted for approximately 20% of the entire audience. The remaining categories, \"Policy making organizations\", \"Research funding organizations\" and \"Civil society and citizen scientists\", were represented by only a few participants (approx. 10%). These numbers show the enormous interest in, and potential of, spoken language processing in a wide variety of scientific disciplines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interviews, ASR and SSHOC",
"sec_num": "4."
},
{
"text": "As a follow-up to the webinar, we started organising four weekly Q&A sessions during which users of the OH portal can contact us in an interactive session based on pre-submitted issues that they come across, e.g. in using the portal for their research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interviews, ASR and SSHOC",
"sec_num": "4."
},
{
"text": "3 https://sshopencloud.eu/ 4 https://www.sshopencloud.eu/news/sshoc-webinarclarin-hands-tutorial-transcribing-interview-data 5 https://www.youtube.com/watch?v=X6bFGJpMjVQ&t=6s",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interviews, ASR and SSHOC",
"sec_num": "4."
},
{
"text": "Our experiences with the various workshops and the webinar have convinced the Oral History working group (see section 1) that the processing of interviews as a research instrument has opened up a fascinating area of collaboration between humanities scholars and social scientists. Research tools such as the OH portal appear to appeal to a great variety of researchers across academic disciplines. Building the appropriate tools requires a lot of \"overbridging\" talk by ICT developers in the Digital Humanities, but the fruits we see growing from that tree are certainly worth the effort.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "https://sshopencloud.eu/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A CLARIN Transcription Portal for Interview Data",
"authors": [
{
"first": "C",
"middle": [],
"last": "Draxler",
"suffix": ""
},
{
"first": "",
"middle": [
"H"
],
"last": "Van Den Heuvel",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Van Hessen",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Calamai",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Corti",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Scagliola",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th International Conference on Language Resources and Evaluation (LREC2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Draxler, C., Van den Heuvel. H., Van Hessen, A., Calamai, S., Corti, L., Scagliola, S. (2020). A CLARIN Transcription Portal for Interview Data. Proceedings of the 12th International Conference on Language Resources and Evaluation (LREC2020).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Guidelines for the integration of Audio Capture data in Survey Interviews. D4.12 of the SSHOC project",
"authors": [
{
"first": "T",
"middle": [],
"last": "Emery",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Luijckx",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Van den Heuvel",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emery, T., Luijckx, R., Van den Heuvel, H. (2019). Guidelines for the integration of Audio Capture data in Survey Interviews. D4.12 of the SSHOC project. https://zenodo.org/record/3631169#.Xo2N3_0za70",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Cross disciplinary overtures with interview data: Integrating digital practices and tools in the scholarly workflow",
"authors": [
{
"first": "S",
"middle": [],
"last": "Scagliola",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Corti",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Calamai",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Karrouche",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Beeken",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Van Hessen",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Draxler",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Van Den Heuvel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Broekhuizen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings CLARIN Annual Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scagliola, S., Corti, L., Calamai, S., Karrouche, N., Beeken, J., Van Hessen, A., Draxler, C., Van den Heuvel, H., and Broekhuizen M., (2020) Cross disciplinary overtures with interview data: Integrating digital practices and tools in the scholarly workflow. Proceedings CLARIN Annual Conference, Leipzig, October 2019.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A Transcription Portal for Oral History Research and Beyond. Digital Humanities",
"authors": [
{
"first": "H",
"middle": [],
"last": "Van Den Heuvel",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Draxler",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Van Hessen",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Corti",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Scagliola",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Calamai",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Karouche",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Van den Heuvel, H., Draxler, C., Van Hessen, A., Corti, L., Scagliola, S., Calamai, S., Karouche, N. (2019). A Transcription Portal for Oral History Research and Beyond. Digital Humanities 2019, Utrecht, 9-12 July 2019. https://dev.clariah.nl/files/dh2019/boa/0854.html",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Screenshot of OH Portal with three audio files. The files were uploaded and processed by ASR, and are now awaiting manual correction of the transcript.",
"uris": null,
"type_str": "figure",
"num": null
}
}
}
}