{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:22:11.141787Z"
},
"title": "Signing as Input for a Dictionary Query: Matching Signs Based on Joint Positions of the Dominant Hand",
"authors": [
{
"first": "Manolis",
"middle": [],
"last": "Fragkiadakis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Leiden University",
"location": {
"addrLine": "Nonnensteeg 1-3 2311VJ, P.N. van Eyckhof 3 2311BV, Niels Bohrweg 1 2333CA"
}
},
"email": "m.fragkiadakis@hum.leidenuniv.nl"
},
{
"first": "Victoria",
"middle": [],
"last": "Nyst",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Leiden University",
"location": {
"addrLine": "Nonnensteeg 1-3 2311VJ, P.N. van Eyckhof 3 2311BV, Niels Bohrweg 1 2333CA"
}
},
"email": ""
},
{
"first": "Peter",
"middle": [],
"last": "Van Der Putten",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Leiden University",
"location": {
"addrLine": "Nonnensteeg 1-3 2311VJ, P.N. van Eyckhof 3 2311BV, Niels Bohrweg 1 2333CA"
}
},
"email": "p.w.h.van.der.putten@liacs.leidenuniv.nl"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This study presents a new method to search sign language lexica, using a full sign as input for a query. Thus, a dictionary user can look up information about a sign by signing the sign to a webcam. The recorded sign is then compared to potential matching signs in the lexicon. As such, it provides a new way of searching sign language dictionaries to complement existing methods based on (spoken language) glosses or phonological features, like handshape or location. The \"find the sign\" method analyzes the recorded sign using OpenPose to extract the body and finger joint positions. To compare the recorded sign with the signs in the database, the variation in trajectories of the dominant hand and of the fingers is quantified and compared, using Dynamic Time Warping (DTW). The method was tested with ten people with various degrees of sign language proficiency. Each subject viewed a set of 20 out of 100 total signs from the newly compiled Ghanaian Sign Language lexicon and was asked to replicate the signs. The results show that our method can predict the matching sign with 87% and 74% accuracy at the Top-10 and Top-5 ranking level respectively by using only the trajectory of the dominant hand. Additionally, more proficient signers obtain 90% accuracy at the Top-10 ranking. The methodology has the potential to be used also as a variation measurement tool to quantify the difference in signing between different signers or sign languages in general.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This study presents a new method to search sign language lexica, using a full sign as input for a query. Thus, a dictionary user can look up information about a sign by signing the sign to a webcam. The recorded sign is then compared to potential matching signs in the lexicon. As such, it provides a new way of searching sign language dictionaries to complement existing methods based on (spoken language) glosses or phonological features, like handshape or location. The \"find the sign\" method analyzes the recorded sign using OpenPose to extract the body and finger joint positions. To compare the recorded sign with the signs in the database, the variation in trajectories of the dominant hand and of the fingers is quantified and compared, using Dynamic Time Warping (DTW). The method was tested with ten people with various degrees of sign language proficiency. Each subject viewed a set of 20 out of 100 total signs from the newly compiled Ghanaian Sign Language lexicon and was asked to replicate the signs. The results show that our method can predict the matching sign with 87% and 74% accuracy at the Top-10 and Top-5 ranking level respectively by using only the trajectory of the dominant hand. Additionally, more proficient signers obtain 90% accuracy at the Top-10 ranking. The methodology has the potential to be used also as a variation measurement tool to quantify the difference in signing between different signers or sign languages in general.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In most sign language dictionaries, users can search a sign through a written gloss, a unique identifier that by definition refers to a sign. In some cases, the lexica offer the possibility to specify formal parameters of the target sign, for instance, its handshape and location ( Figure 1 ). The Flemish Sign Language (VGT) dictionary (Van Herreweeghe et al., 2004) , the Swedish Sign Language (Institutionen for Lingvistik, 2009) and the Danish Sign Language (Center for Tegnsprog, 2008) are some examples of such dictionaries. After the input, the user is offered a set of signs that match the selected properties which can be then viewed individually. Although sign search functionality on the basis of a sign parameter value is a useful attribute of sign language lexica, dictionary compilers still have to link these values to the videos. Also, as Zwitserlood discusses, the users of such dictionaries must \"abstract away from the sign as a whole\" if they want to use the parameter search functionality (Zwitserlood, 2010). Even then, only signs that match the query 100% are returned, and there is no concept of an ordered set of results that match to some degree. A thorough overview of sign language lexica and their features can be found in Zwitserlood's review (2010) . In this paper we describe our \"find the sign\" methodology that allows inputting a full video-recorded sign to search for entries in a dictionary. This method requires no training of any kind of model such as the ones used for sign language recognition tasks. In its core, it is a comparison method to quantify the difference in the movement between signs. As a result, it can be used for any sign language. By utilizing a pre-trained pose estimation framework we extract the body and hand joint positions from users using their webcam. Subsequently, by employing Dynamic Time Warping we find the closest matching signs from a compiled lexicon.",
"cite_spans": [
{
"start": 337,
"end": 367,
"text": "(Van Herreweeghe et al., 2004)",
"ref_id": "BIBREF20"
},
{
"start": 415,
"end": 432,
"text": "Lingvistik, 2009)",
"ref_id": "BIBREF8"
},
{
"start": 474,
"end": 490,
"text": "Tegnsprog, 2008)",
"ref_id": null
},
{
"start": 1252,
"end": 1279,
"text": "Zwitserlood's review (2010)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 282,
"end": 290,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "To date, this methodology has only been applied to sign language classification tasks (Jangyodsuk et al., 2014; Schneider et al., 2019; Ten Holt et al., 2007) and not as a mode to complement sign search possibly solving the problem of ordering retrieval previously discussed. Additionally, we have developed a visualization tool to allow researchers to view the rendered paths of the dominant hand to further explore the overall difference in signing movements.",
"cite_spans": [
{
"start": 86,
"end": 111,
"text": "(Jangyodsuk et al., 2014;",
"ref_id": "BIBREF11"
},
{
"start": 112,
"end": 135,
"text": "Schneider et al., 2019;",
"ref_id": "BIBREF16"
},
{
"start": 136,
"end": 158,
"text": "Ten Holt et al., 2007)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The paper is structured as follows: in Section 2 we give an overview of methods that utilize Dynamic Time Warping in the gestural and sign language domain. In Section 3 we describe our methodology regarding the extraction of the body joint coordinates as well as the experimental setup, analysis, and visualization tool. In Section 4 we present the results of our experiments. We discuss them in Section 5 and conclude and motivate future research in Section 6. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Dynamic Time Warping (DTW) is a dynamic programming based time series comparison algorithm to produce a distance metric between two inputs. It has been widely used in the speech recognition domain since the early 1970's (Abdulla et al., 2003; Axelrod and Maison, 2004; Myers et al., 1980) . While the original algorithm can be computationaly expensive, different variations have been developed over the years to reduce the overall complexity, with most notably the works of Itakura-Parallelogram (Itakura, 1975 ), Ratanamahatana-Koegh-Band (Ratanamahatana and Keogh, 2004) and Sakoe-Chiba-Band (Sakoe and Chiba, 2013) . As a technique, it has been long-established in the gesture and sign language recognition domain as well (Ahmed et al., 2016; Jambhale and Khaparde, 2014; Jangyodsuk et al., 2014) . Due to the fact that it is a distance metric it requires no training and it is a perfect choice for applications where limited training samples are available. Ten Holt and her colleagues presented an algorithm for Dynamic Time Warping (DTW) on multi-dimensional time series (MDDTW) to perform classification on 121 gestures recorded with two cameras in stereo position (Ten Holt et al., 2007) . In Jangyodsuk et al. (2014) the authors investigated the use of DTW and Histogram of Oriented Gradient (HOG) to compare a query sign with those in a database of ASL signs using Kinect data. Their results showed an accuracy of 82% in a Top-10 ranking level. Recent developments in the field of machine and deep learning have lead to advances in sign language and gesture recognition. However, these approaches pose restrictions to their overall applicability as they require large amount of data and computational power in order to be trained. Furthermore, proposed methods for sign language classfication have been based on special sensor hardware, such as Microsoft's Kinect presenting additional challenges in their duplicability as well difficulty in their technical set-up. 
Our proposed method does not require the use of depth data to extract the pose key-points as this is being held by the pretrained pose estimation framework OpenPose. This makes our approach suitable for any kind of sign language lexicon. Most recently, Schneider et al. (2019) used Dynamic Time Warping in conjuction with One-Nearest-Neighbor algorithm and OpenPose to perform classification on six gestures. Their results suggested an accuracy of 77.4%. A major advantage of their methodology is the necessity for very little training data. However, a considerable drawback of their study is that they have only tested a small amount of gestures. As a result, such as pipeline shows a major deterioration of the overall accuracy when an additional gesture is added into the classification task. Our study repurposes the work of Schneider et al. by:",
"cite_spans": [
{
"start": 220,
"end": 242,
"text": "(Abdulla et al., 2003;",
"ref_id": "BIBREF0"
},
{
"start": 243,
"end": 268,
"text": "Axelrod and Maison, 2004;",
"ref_id": "BIBREF2"
},
{
"start": 269,
"end": 288,
"text": "Myers et al., 1980)",
"ref_id": "BIBREF13"
},
{
"start": 474,
"end": 510,
"text": "Itakura-Parallelogram (Itakura, 1975",
"ref_id": null
},
{
"start": 540,
"end": 572,
"text": "(Ratanamahatana and Keogh, 2004)",
"ref_id": "BIBREF14"
},
{
"start": 594,
"end": 617,
"text": "(Sakoe and Chiba, 2013)",
"ref_id": "BIBREF15"
},
{
"start": 725,
"end": 745,
"text": "(Ahmed et al., 2016;",
"ref_id": "BIBREF1"
},
{
"start": 746,
"end": 774,
"text": "Jambhale and Khaparde, 2014;",
"ref_id": "BIBREF10"
},
{
"start": 775,
"end": 799,
"text": "Jangyodsuk et al., 2014)",
"ref_id": "BIBREF11"
},
{
"start": 1171,
"end": 1194,
"text": "(Ten Holt et al., 2007)",
"ref_id": null
},
{
"start": 1200,
"end": 1224,
"text": "Jangyodsuk et al. (2014)",
"ref_id": "BIBREF11"
},
{
"start": 2228,
"end": 2251,
"text": "Schneider et al. (2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "\u2022 considering signs instead of gestures as inputs in DTW",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "\u2022 extending significantly the number of signs used in the experiment",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "\u2022 adding the finger joints extracted by OpenPose as additional data",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "\u2022 testing whether signing proficiency infuences the accuracy of the method",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "In this section we describe the pose estimation framework (i.e. OpenPose) as well as the apparatus and materials used in this study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3."
},
{
"text": "OpenPose is a real-time, open source for academic purposes library for multi-person 2D pose estimation (Cao et al., 2017) . It can detect body, foot, hand and facial keypoints. It is a bottom-up approach meaning that it does not recognize first where a person is in an image and then extract the body joints but from the detection of the various key-points predicts the overall pose. In general, it exceeds in performance similar 2D body pose estimation libraries like Mask R-CNN (He et al., 2017) and Alpha-Pose (Li et al., 2018) . Its major advantage lies in its high accuracy regardless of the number of people in an image or video.",
"cite_spans": [
{
"start": 103,
"end": 121,
"text": "(Cao et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 480,
"end": 497,
"text": "(He et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 513,
"end": 530,
"text": "(Li et al., 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pose Estimation",
"sec_num": "3.1."
},
{
"text": "OpenPose is able to run on different operating systems and hardware architectures while providing all the necessary tools for acquisition, visualization and output file generation. Its output consists of multiple json formatted files containing the pixel x, y coordinates of the body, hand and face joints. In this study only the body and hand predictions were used as the face joints were irrelevant for our purposes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pose Estimation",
"sec_num": "3.1."
},
{
"text": "The output of OpenPose consists of x,y pixel coordinates. As the people in each frame can potentially be in different locations, it is important to normalize their keypoints. Rotational invariance is omitted in this study as most people are expected to be in an upright position in front of the web camera. The normalization is done in two steps. Firstly, all the key points are translated in such way so that the neck key point shifts to the origo at (0,0). To accomplish the shift, the neck key points coordinates are substracted from all other key points. Secondly, the key points are scaled in such way so that the distance between the left and the right shoulder key point becomes 1. This is achieved by dividing all key points' coordinates by the distance between the left and right shoulder key point.The scale normalization method is based on previous studies by Celebi et al. One additonal step added to the pipeline is the horizontal flip of the videos when a participant was left-handed. This step is achieved by measuring the average velocity of each hand. In cases where the left hand's velocity is greater than the respective of the right hand, a horizontal flip is applied. Such a process allows an independent handedness feature of the overall methodology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.2."
},
{
"text": "Ten people were asked to participate in the research. Four of them have no experience with sign language whatsoever while the rest are experienced signers. Additionally, they were all informed about the general purpose of the research and gave their consent to participate. This study was approved by the Faculty ethics committee. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participants",
"sec_num": "3.3."
},
{
"text": "Each participant viewed only once a selection of 20 signs from the newly compiled Ghanaian Sign Language lexicon (HANDS!Lab, 2020). While the overall lexicon has more than 1300 signs we selected randomly 100 of them to be used in our experiments due to time limitations. The order was randomized for each participant to avoid potential biases. A full list can be seen in Table 1 . Each video had a 1000 by 580 pixel resolution at 30 frames per second and lasted approximately 5 (\u00b12) seconds. Recordings were made with a Macbook Pro's webcam at 1280 by 720 pixel resolution and 30 frames per second. We employ the soft DTW method by Cuturi and Blondel (2017) deployed by the tslearn python package (Tavenard et al., 2017) to perform DTW on the normalized trajectories of the dominant hand. Their work takes advantage of a smoothed formulation of DTW that computes the softminimum of all alignment costs. In a pilot test we observed that soft DTW performed better compared to other DTW variants, and was thus used in the rest of the experiment. Furthermore, a DTW variant created by Sakoe and Chiba (2013) used by the same python module was utilized to measure the distance of the trajectories of all finger coordinates. Most signs in our lexicon are one-handed where the left hand is inert either by being \"absent\" or passively fixed at a location. In the two-handed signs, the left hand mostly copies the movement of the right hand. As a result, we empolyed DTW only on the dominant hand features as the left hand would either be less informative or equally informative. Finally, the limited resolution of the output from OpenPose had an undesired effect producing sudden spikes in the signal. This attribute has been previously acknowledged by Figure 2 : Overview of the overall pipeline of our methodology. Schneider et al. (2019) and was present here too. 
The videos in the lexicon were blurry when the hand was moving fast making OpenPose to mispredict the proper joint locations between consecutive frames. As such, failed to create a smooth path. To compensate for this behavior we included two additional steps. Firstly, all the dominant hand's wrist x,y coordinates that had a confidence level lower than 0.3 were deleted. Additionally, we used a median filter with radius r = 3 for smoothing the remaining signal. Moreover, we noticed that due to the good lighting conditions in the GSL lexicon there was a mismatch on the body joint's coordinates predicted by OpenPose. The lighting conditons of the videos captured with the participants were of poor quality making it hard for the DTW algorithm to operate properly. To solve that problem we decided to include in the lexicon the data from a random participant every time we tested the methodology. This step seems to add the necessary noise in the database that is nevertheless similar to the noise in the participants' data. As a result, the data of each participant's sign was compared with 120 signs in our database (100 from the GSL lexicon and 20 from another random participant). The overall pipeline can be seen in Figure 2 .",
"cite_spans": [
{
"start": 632,
"end": 657,
"text": "Cuturi and Blondel (2017)",
"ref_id": "BIBREF5"
},
{
"start": 697,
"end": 720,
"text": "(Tavenard et al., 2017)",
"ref_id": "BIBREF17"
},
{
"start": 1081,
"end": 1103,
"text": "Sakoe and Chiba (2013)",
"ref_id": "BIBREF15"
},
{
"start": 1809,
"end": 1832,
"text": "Schneider et al. (2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 371,
"end": 378,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1745,
"end": 1753,
"text": "Figure 2",
"ref_id": null
},
{
"start": 3083,
"end": 3091,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3.4."
},
{
"text": "To futher explore the outputs of OpenPose and how they are rendered in our methodology, we have created an interactive visualization tool. Developed with the python module \"bokeh\" (Bokeh Development Team, 2014), the user is able to view the extracted dominant hand trajectories from the participants as a whole or individually. As all participants started and ended each sign in the same position, we have color coded as red the preparation and retraction phase and as green the stroke of each sign. While the motivation behind the creation of this tool was to solely verify the output of openpose and the normalization part in our method, its potential reaches beyond the scope of this study. Such a tool, in combination with the DTW output, can potentially be used as a metric to quantify the variation in the movement and location of signers and sign languages in general. An example of the trajectories of two participants for the same sign can be seen in Figure 3a . It is evident that one participant produced the sign in a larger space with more distinctive movements. Moreover, it can be deduced that the location parameter is different as Participant 2 made the sign at a higher plane (almost in front of the face) while Participant 1 in front of the torso. Table 2 presents the overall accuracy of our methodology. Top-k refers to the number of signs a user must look up before finding a correct match. Accuracy indicates whether the target sign is present in the Top-k retrieved signs and is avereged accross all participants and signs. It is evident that the highest accuracy is apparent at a Top-10 rank level at 87%. Furthermore, Top-5 rank shows an adequate accuracy at 74%. Contrary to expectations, using DTW in the joints of fingers extracted by OpenPose did not yield significant results with a highest accuracy at the Top-10 rank at approximatelly 52%. 
Merged DTW distances from the dominant hand trajectories and the finger joints also did not generate compelling results.",
"cite_spans": [],
"ref_spans": [
{
"start": 960,
"end": 969,
"text": "Figure 3a",
"ref_id": "FIGREF2"
},
{
"start": 1267,
"end": 1274,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Visualization",
"sec_num": "3.5."
},
{
"text": "If only the experienced signers' data is considered then the accuracy at the Top-10 rank raises at 90% and the Top-5 at 78% (Table 2 row 4) . On the other hand, the accuracy on the non-experienced signers drops at 82% and 0.67% at the Top-10 and 5 rank respectively (Table 2 row 5). Moreover, DTW on the finger's trajectories shows a significant drop at the Top-10 rank between the experienced and nonexperienced signers of approximately 22% (Table 2 column 7) . The most striking observation to emerge from the analysis was that four out of 20 signs were consistently recognized with almost 100% accuracy at the Top-1 level rank. These signs were: CAPTAIN, DREAM, ELEPHANT and OPEN. Such behavior is justified as these signs have large, distinctive movements and locations that are hard to misinterpreted by the DTW.",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 139,
"text": "(Table 2 row 4)",
"ref_id": "TABREF2"
},
{
"start": 442,
"end": 461,
"text": "(Table 2 column 7)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4."
},
{
"text": "In this study we have investigated the use of OpenPose and Dynamic Time Warping as a ranking pipeline to retrieve matching signs from a sign language dictionary. Our results demonstrated that such a task can be achieved with an adequate accuracy rate. This is in good agreement with the results obtained by Jangyodsuk et al. (2014) . Although the accuracy rate does not match the one from Schneider et al. 2019we have tested a larger vocabulary and lexicon. Additionally, we are not aiming at classifying each sign but rather create a suggestion ranking system. As such, our results suggest that approximately 9 out of 10 times the matched sign will be present in the first 10 retrieved signs. Moreover, the results have further strengthened our hypothesis that signing proficiency is an influencing factor for classification efforts. Although our sample size was limited there was a significant drop in the accuracy rates between the experienced and non-experienced signers. The former, produced well structured signs matching more approprietely the ones from the lexicon, which made DTW perform in a more excellent matter. Our research failed to account for the low values of accuracy on the finger joints. This was probably as a result of the low performance of OpenPose in accurately predicting the finger joints due to low lighting conditions in the videos. It was often the case that joint predictions would dissapear between frames or mis-predicted in wrong locations. Thus, caution must be excercised when OpenPose is being used for such trivial tasks.",
"cite_spans": [
{
"start": 307,
"end": 331,
"text": "Jangyodsuk et al. (2014)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5."
},
{
"text": "To sum up, we have obtained satisfactory results demonstrating the use of OpenPose and Dynamic Time Warping for a new, sign-based search functionality in reduced sign language dictionaries. We showed that our \"find the sign\" methodology can be used as a suggestion tool for sign retrieval in a small lexicon by using only the trajectory of the dominant hand. Additionally, our research has highlighted the importance of considering the level of signing proficiency when it comes to classification tasks. The significance of this study lies on the fact that the methodology in question can be easily used in any kind of sign language lexicon, irrespective of its quality and language. Additionally, no prior training of any kind of model is required. As such, this approach, in combination with the developed visualization module, has the potential to be used also as a metric tool to quantify the variation between signers and overall languages. Furhtermore, a number of things is left for future work; first and foremost, to investigate how extracted finger joints can be utilized more efficiently in the overall pipeline. Moreover, different variants of the original DTW algorithms need to be tested. Finally, we intend to evaluate the use of other pose estimation frameworks, such as PoseNet, to further enhance the web and mobile user-friendliness of the method used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "We would like to thank all the people who participated in the study, without whose help this work would have never been possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "7."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Cross-words reference template for dtw-based speech recognition systems",
"authors": [
{
"first": "W",
"middle": [],
"last": "Abdulla",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Chow",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Sin",
"suffix": ""
}
],
"year": 2003,
"venue": "TENCON 2003. Conference on Convergent Technologies for Asia-Pacific Region",
"volume": "4",
"issue": "",
"pages": "1576--1579",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abdulla, W., Chow, D., and Sin, G. (2003). Cross-words reference template for dtw-based speech recognition sys- tems. In TENCON 2003. Conference on Convergent Technologies for Asia-Pacific Region, volume 4, pages 1576-1579, Bangalore, India. Allied Publishers Pvt. Ltd.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Vision based hand gesture recognition using dynamic time warping for indian sign language",
"authors": [
{
"first": "W",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chanda",
"suffix": ""
},
{
"first": "Mitra",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 International Conference on Information Science (ICIS)",
"volume": "",
"issue": "",
"pages": "120--125",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahmed, W., Chanda, K., and Mitra, S. (2016). Vi- sion based hand gesture recognition using dynamic time warping for indian sign language. In 2016 International Conference on Information Science (ICIS), pages 120- 125, Kochi, India. IEEE.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Combination of hidden Markov models with dynamic time warping for speech recognition",
"authors": [
{
"first": "S",
"middle": [],
"last": "Axelrod",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Maison",
"suffix": ""
}
],
"year": 2004,
"venue": "2004 IEEE International Conference on Acoustics, Speech, and Signal Processing",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Axelrod, S. and Maison, B. (2004). Combination of hidden Markov models with dynamic time warping for speech recognition. In 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 1, pages I-173-6, Montreal, Que., Canada. IEEE. Bokeh Development Team. (2014). Bokeh: Python li- brary for interactive visualization. http://bokeh. pydata.org.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Realtime multi-person 2d pose estimation using part affinity fields",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Simon",
"suffix": ""
},
{
"first": "S.-E",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Sheikh",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "1302--1310",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cao, Z., Simon, T., Wei, S.-E., and Sheikh, Y. (2017). Re- altime multi-person 2d pose estimation using part affinity fields. In IEEE Conference on Computer Vision and Pat- tern Recognition (CVPR), pages 1302-1310, Honolulu, HI. IEEE.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Gesture recognition using skeleton data with weighted dynamic time warping",
"authors": [
{
"first": "S",
"middle": [],
"last": "Celebi",
"suffix": ""
},
{
"first": "A",
"middle": [
"S"
],
"last": "Aydin",
"suffix": ""
},
{
"first": "T",
"middle": [
"T"
],
"last": "Talha",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Tarik",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the International Conference on Computer Vision Theory and Applications",
"volume": "",
"issue": "",
"pages": "620--625",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Celebi, S., Aydin, A. S., Talha, T. T., and Tarik, A. (2013). Gesture recognition using skeleton data with weighted dynamic time warping. In Proceedings of the Interna- tional Conference on Computer Vision Theory and Ap- plications, pages 620-625, Barcelona, Spain. SciTePress -Science and and Technology Publications. Center for Tegnsprog. (2008). Ordbog over Dansk Tegn- sprog. http://www.tegnsprog.dk/.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Soft-dtw: a differentiable loss function for time-series",
"authors": [
{
"first": "M",
"middle": [],
"last": "Cuturi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Blondel",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "894--903",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cuturi, M. and Blondel, M. (2017). Soft-dtw: a differ- entiable loss function for time-series. In Proceedings of the 34th International Conference on Machine Learning- Volume 70, pages 894-903. JMLR. org.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Ghanaian Sign Language",
"authors": [
{
"first": "",
"middle": [],
"last": "Hands!lab",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "HANDS!Lab. (2020). Ghanaian Sign Language. https: //play.google.com/store/apps/details? id=com.ljsharp.gsldictionary&hl=es_US.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Mask r-cnn",
"authors": [
{
"first": "K",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Gkioxari",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Girshick",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE international conference on computer vision",
"volume": "",
"issue": "",
"pages": "2961--2969",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "He, K., Gkioxari, G., Doll\u00e1r, P., and Girshick, R. (2017). Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 2961-2969.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Svenskt teckensprakslexikon",
"authors": [
{
"first": "Lingvistik",
"middle": [],
"last": "Institutionen For",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Institutionen for Lingvistik. (2009). Svenskt teckensprak- slexikon. https://teckensprakslexikon.su. se.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Minimum prediction residual principle applied to speech recognition",
"authors": [
{
"first": "F",
"middle": [],
"last": "Itakura",
"suffix": ""
}
],
"year": 1975,
"venue": "IEEE Transactions on acoustics, speech, and signal processing",
"volume": "23",
"issue": "1",
"pages": "67--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Itakura, F. (1975). Minimum prediction residual princi- ple applied to speech recognition. IEEE Transactions on acoustics, speech, and signal processing, 23(1):67-72.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Gesture recognition using DTW & piecewise DTW",
"authors": [
{
"first": "S",
"middle": [
"S"
],
"last": "Jambhale",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Khaparde",
"suffix": ""
}
],
"year": 2014,
"venue": "2014 International Conference on Electronics and Communication Systems (ICECS)",
"volume": "",
"issue": "",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jambhale, S. S. and Khaparde, A. (2014). Gesture recogni- tion using DTW & piecewise DTW. In 2014 Inter- national Conference on Electronics and Communication Systems (ICECS), pages 1-5, Coimbatore. IEEE.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Sign language recognition using dynamic time warping and hand shape distance based on histogram of oriented gradient features",
"authors": [
{
"first": "P",
"middle": [],
"last": "Jangyodsuk",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Conly",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Athitsos",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 7th International Conference on PErvasive Technologies Related to Assistive Environments -PETRA '14",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jangyodsuk, P., Conly, C., and Athitsos, V. (2014). Sign language recognition using dynamic time warping and hand shape distance based on histogram of oriented gra- dient features. In Proceedings of the 7th International Conference on PErvasive Technologies Related to As- sistive Environments -PETRA '14, pages 1-6, Rhodes, Greece. ACM Press.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Crowdpose: Efficient crowded scenes pose estimation and a new benchmark",
"authors": [
{
"first": "J",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "H.-S",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1812.00324"
]
},
"num": null,
"urls": [],
"raw_text": "Li, J., Wang, C., Zhu, H., Mao, Y., Fang, H.-S., and Lu, C. (2018). Crowdpose: Efficient crowded scenes pose estimation and a new benchmark. arXiv preprint arXiv:1812.00324.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Performance tradeoffs in dynamic time warping algorithms for isolated word recognition",
"authors": [
{
"first": "C",
"middle": [],
"last": "Myers",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Rabiner",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rosenberg",
"suffix": ""
}
],
"year": 1980,
"venue": "Speech, and Signal Processing",
"volume": "28",
"issue": "",
"pages": "623--635",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myers, C., Rabiner, L., and Rosenberg, A. (1980). Perfor- mance tradeoffs in dynamic time warping algorithms for isolated word recognition. IEEE Transactions on Acous- tics, Speech, and Signal Processing, 28(6):623-635.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Making time-series classification more accurate using learned constraints",
"authors": [
{
"first": "C",
"middle": [
"A"
],
"last": "Ratanamahatana",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Keogh",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 SIAM International Conference on Data Mining",
"volume": "",
"issue": "",
"pages": "11--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ratanamahatana, C. A. and Keogh, E. (2004). Making time-series classification more accurate using learned constraints. In Proceedings of the 2004 SIAM Interna- tional Conference on Data Mining, pages 11-22. Society for Industrial and Applied Mathematics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Gesture recognition using skeleton data with weighted dynamic time warping",
"authors": [
{
"first": "H",
"middle": [],
"last": "Sakoe",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Chiba",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the International Conference on Computer Vision Theory and Applications",
"volume": "",
"issue": "",
"pages": "620--625",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sakoe, H. and Chiba, S. (2013). Gesture recognition us- ing skeleton data with weighted dynamic time warping. In Proceedings of the International Conference on Com- puter Vision Theory and Applications, pages 620-625, Barcelona, Spain. SciTePress -Science and and Tech- nology Publications.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Gesture recognition in rgb videos using human body keypoints and dynamic time warping",
"authors": [
{
"first": "P",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Memmesheimer",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Kramer",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Paulus",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.12171[cs].arXiv:1906.12171"
]
},
"num": null,
"urls": [],
"raw_text": "Schneider, P., Memmesheimer, R., Kramer, I., and Paulus, D. (2019). Gesture recognition in rgb videos us- ing human body keypoints and dynamic time warping. arXiv:1906.12171 [cs]. arXiv: 1906.12171.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "tslearn: A machine learning toolkit dedicated to time-series data",
"authors": [
{
"first": "R",
"middle": [],
"last": "Tavenard",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Faouzi",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Vandewiele",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Divo",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Androz",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Holtz",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Payne",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Yurchak",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Russwurm",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Kolar",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Woods",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tavenard, R., Faouzi, J., Vandewiele, G., Divo, F., Androz, G., Holtz, C., Payne, M., Yurchak, R., Russwurm, M., Kolar, K., and Woods, E. (2017). tslearn: A machine learning toolkit dedicated to time-series data. https: //github.com/rtavenar/tslearn.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Multi-dimensional dynamic time warping for gesture recognition",
"authors": [],
"year": null,
"venue": "Thirteenth annual conference of the Advanced School for Computing and Imaging",
"volume": "300",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Multi-dimensional dynamic time warping for gesture recognition. In Thirteenth annual conference of the Ad- vanced School for Computing and Imaging, volume 300, page 1.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Digitaal Vlaamse Gebarentaal-Nederlands/Nederlands-Vlaamse Gebarentaal woordenboek",
"authors": [
{
"first": "M",
"middle": [],
"last": "Van Herreweeghe",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Slembrouck",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Vermeerbergen",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Van Herreweeghe, M., Slembrouck, S., and Vermeer- bergen, M. (2004). Digitaal Vlaamse Gebarentaal- Nederlands/Nederlands-Vlaamse Gebarentaal woordenboek. https://woordenboek. vlaamsegebarentaal.be.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Sign language lexicography in the early 21st century and a recently published dictionary of sign language of the netherlands",
"authors": [
{
"first": "I",
"middle": [],
"last": "Zwitserlood",
"suffix": ""
}
],
"year": 2010,
"venue": "International Journal of Lexicography",
"volume": "23",
"issue": "4",
"pages": "443--476",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zwitserlood, I. (2010). Sign language lexicography in the early 21st century and a recently published dictionary of sign language of the netherlands. International Journal of Lexicography, 23(4):443-476.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Visual iconicity across sign languages: Large-scale automated video analysis of iconic articulators and locations. Frontiers in Psychology",
"authors": [
{
"first": "R",
"middle": [],
"last": "Ostling",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "B\u00f6rstell",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Courtaux",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "9",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ostling, R., B\u00f6rstell, C., and Courtaux, S. (2018). Visual iconicity across sign languages: Large-scale automated video analysis of iconic articulators and locations. Fron- tiers in Psychology, 9:725.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Traditional search functionality as seen in the online Danish Sign Language dictionary (Center forTegnsprog, 2008)."
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": ", Schneider et al. (2019) and\u00d6stling et al. (2018)."
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Visualization of the dominant hand trajectories between two participants (a) for the sign ABOUT (b)."
},
"FIGREF3": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Dominant hand trajectory Fingers' trajectories Merged trajectories Top-k Top 1 Top 5 Top 10 Top 1 Top 5 Top 10 Top 1 Top 5 Top 10 Accuracy of all participants 0"
},
"TABREF1": {
"html": null,
"type_str": "table",
"num": null,
"text": "",
"content": "<table/>"
},
"TABREF2": {
"html": null,
"type_str": "table",
"num": null,
"text": "",
"content": "<table/>"
}
}
}
}