{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:33:47.751778Z"
},
"title": "Phoneme Boundary Analysis using Multiway Geometric Properties of Waveform Trajectories",
"authors": [
{
"first": "Parabattina",
"middle": [],
"last": "Bhagath",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Guwahati",
"location": {
"addrLine": "Assam state",
"postCode": "2014",
"country": "India bhagath"
}
},
"email": ""
},
{
"first": "Pradip",
"middle": [
"K"
],
"last": "Das",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Guwahati",
"location": {
"addrLine": "Assam state",
"postCode": "2014",
"country": "India bhagath"
}
},
"email": "pkdas@iitg.ac.in"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automatic phoneme segmentation is an important problem in speech processing. It helps in improving the recognition quality by providing a proper segmentation information of phonemes or phonetic units. Inappropriate segmentation may lead to recognition accuracy falloff. The problem is essential not only for recognition but also for annotation purpose. In general, segmentation algorithms rely on large datasets for training where data is observed to find the patterns among them. But this process is not straight forward for languages that are under resourced because of less availability of datasets. In this paper, we propose a method that uses geometrical properties of waveform trajectory where intra signal variations are studied and used for segmentation. The method does not rely on large datasets for training. The geometric properties are extracted as linear structural changes in a raw waveform. The methods and findings of the study are presented.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Automatic phoneme segmentation is an important problem in speech processing. It helps in improving the recognition quality by providing a proper segmentation information of phonemes or phonetic units. Inappropriate segmentation may lead to recognition accuracy falloff. The problem is essential not only for recognition but also for annotation purpose. In general, segmentation algorithms rely on large datasets for training where data is observed to find the patterns among them. But this process is not straight forward for languages that are under resourced because of less availability of datasets. In this paper, we propose a method that uses geometrical properties of waveform trajectory where intra signal variations are studied and used for segmentation. The method does not rely on large datasets for training. The geometric properties are extracted as linear structural changes in a raw waveform. The methods and findings of the study are presented.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Speech recognition is a well-known area that deals with the understanding of spoken units (words, sentences) that has been spoken. It is fair to say that a speech recognition system should be equipped with a good segmentation procedure. A segmentation algorithm essentially identifies the boundaries between two consecutive phonemes in a word or sentence. For an input signal S[n], a segmentation algorithm provides a set of points b 0 , b 1 , ..., b n such that the regions separated by these points belong to different phonemes. Phoneme segmentation has to be looked carefully to improve recognition accuracy. This problem has been studied by researchers in different ways. A conventional segmentation procedure relies on features that can help to understand the changes in speech signals. This information is further processed by any modeling technique of choice to identify the required boundaries. So it is a common practice that a boundary detection involves some feature extraction methods. In literature, a variety of these techniques have been used for this purpose. They are generally categorized as temporal and spectral. Temporal features (Ali et al., 1999) like energy, ZCR (Zero Crossing Rate), Pitch period, LPCCs (Linear Predictive Cepstral Coefficients) are useful in understanding temporal changes in a speech signal. Spectral features like MFCCs (Mel Frequency Cepstral Coefficients), formants, etc. are used to analyze frequency components in a signal. In addition to these, phonetic studies are proven to be helpful in the segmentation task. Research has shown that HMM based systems alone are not sufficient to understand the temporal changes effectively (Yan et al., 2006) . It is understood from the studies that structural processing methods are superior to conventional methods in capturing temporal patterns of the signals (Deng and Strik, 2007) . 
Modeling speech trajec-tory properties are useful to capture the temporal dynamics over the signal which can help to develop dynamic speech models (Liu and Sim, 2012) . Even though these methods are effective in capturing temporal dynamics, computational cost and the need for a vast dataset are not relaxed. The present work aimed to develop a reasonable method for phoneme segmentation by incorporating the structural properties of a waveform which can work well on smallsized datasets. The proposed method uses attributes of waveform trajectories to identify the appropriate boundary points using Canonical Correlation Analysis (CCA). The paper is organized as follows: The next section describes trajectory methods that were used for pattern analysis. Section 3 gives an overview of the CCA method. Section 4 explains the proposed approach for segmentation. The data and experimental setup is described in Section 5. Section 6 explains the results found in the study and Section 7 concludes the paper.",
"cite_spans": [
{
"start": 1151,
"end": 1169,
"text": "(Ali et al., 1999)",
"ref_id": "BIBREF0"
},
{
"start": 1677,
"end": 1695,
"text": "(Yan et al., 2006)",
"ref_id": "BIBREF14"
},
{
"start": 1850,
"end": 1872,
"text": "(Deng and Strik, 2007)",
"ref_id": "BIBREF4"
},
{
"start": 2022,
"end": 2041,
"text": "(Liu and Sim, 2012)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In an Euclidean space, a trajectory is defined as a curve that is formed by the observation of the path that a moving object makes. The points in the path are characterized as ordered positional points. Trajectories that were initially known as Linear Trajectory Segmental Models (LTSMs) have been used to analyze speech signals for past 3 decades (Russell and Holmes, 1997) . The need for LTSMs point back to the independence assumption in HMM systems. The basic underlying idea in these systems is understanding and equipping models with the knowledge of temporal patterns across segments of a signal. These dynamic features help in overcoming the problem of independence assumption in HMM systems. In LTSMs, each segment is treated as a homogeneous unit that helps in capturing the inter-segmental dependencies too (Yifan Gong, 1997) . Trajectories are suitable in pattern analysis for two reasons (Siohan and Yifan Gong, 1996): 1. A speech trajectory is also influenced by the context 2. Trajectories formed by different phonetic units can create independent clusters based on the contextual information However the models that are based on HMM are suitable for large vocabulary speech recognition (Mitra et al., 2013) .",
"cite_spans": [
{
"start": 280,
"end": 287,
"text": "(LTSMs)",
"ref_id": null
},
{
"start": 348,
"end": 374,
"text": "(Russell and Holmes, 1997)",
"ref_id": "BIBREF10"
},
{
"start": 825,
"end": 836,
"text": "Gong, 1997)",
"ref_id": "BIBREF15"
},
{
"start": 901,
"end": 931,
"text": "(Siohan and Yifan Gong, 1996):",
"ref_id": "BIBREF11"
},
{
"start": 1202,
"end": 1222,
"text": "(Mitra et al., 2013)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Trajectories for Pattern Analysis",
"sec_num": "2."
},
{
"text": "Trajectories are not only used for speech signal analysis, but also for pattern analysis in different areas like road network (Atev et al., 2010) , databases (Jeung et al., 2008) , traffic management, etc. In general, a trajectory contains vital information like spatiality and temporal patterns of an object. There can be different ways of treating trajectories as segments sequence and points sequence. The similarities in these entities can contribute to crucial knowledge. The similarity metrics to measure the affinity vary on the kind of trajectory. The effectiveness of the comparison method depends on the underlying components that the trajectory represents. Huanhuan et al. proposed a fusion based similarity method for traffic flow patterns (Li et al., 2018) . The method combines different techniques like Merge Distance (MD), Multi Dimensional Scaling (MDS) and Density Based Spatial Clustering of applications with noise (DBSCAN) to identify traffic flow patterns and customary routes from vehicle movements. One of the fusion techniques is given by Equation 1.",
"cite_spans": [
{
"start": 126,
"end": 145,
"text": "(Atev et al., 2010)",
"ref_id": "BIBREF1"
},
{
"start": 158,
"end": 178,
"text": "(Jeung et al., 2008)",
"ref_id": "BIBREF5"
},
{
"start": 752,
"end": 769,
"text": "(Li et al., 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Trajectories for Pattern Analysis",
"sec_num": "2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "M M T D(t 1 , t 2 ) = 1 \u2212 (w 1 , w 2 ) dist 1 (t 1 , t 2 ) dist 2 (t 1 , t 2 )",
"eq_num": "(1)"
}
],
"section": "Trajectories for Pattern Analysis",
"sec_num": "2."
},
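As a concrete illustration of the weighted fusion in Equation 1, the sketch below combines two trajectory distance measures with a weight vector and subtracts the result from 1. The two distance choices here are illustrative stand-ins, not the MD/MDS measures used by Li et al.

```python
import numpy as np

def fused_trajectory_similarity(t1, t2, w=(0.5, 0.5)):
    """Sketch of the weighted fusion in Equation 1: two distance
    measures between trajectories are combined with (possibly unequal)
    weights w and subtracted from 1. The distances below (endpoint
    Euclidean distance and mean pointwise gap) are stand-ins."""
    t1, t2 = np.asarray(t1, float), np.asarray(t2, float)
    dist1 = float(np.linalg.norm(t1[-1] - t2[-1]))  # stand-in measure 1
    dist2 = float(np.mean(np.abs(t1 - t2)))         # stand-in measure 2
    w = np.asarray(w, float)
    return float(1.0 - w @ np.array([dist1, dist2]))

# identical trajectories fuse to similarity 1
t = [[0.0, 0.0], [1.0, 1.0]]
print(fused_trajectory_similarity(t, t))  # 1.0
```

Unequal weightages, as in the paper's fusion, are expressed by passing e.g. `w=(0.8, 0.2)`.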
{
"text": "where dist 1 and dist 2 are different similarity measurements and each measure is treated with unequal weightages. MMTD is maximum-minimum trajectory distance (Xiao et al., 2019 ) (Lin et al., 2019 . The present work uses CCA as measurement metric which is described in the next section.",
"cite_spans": [
{
"start": 159,
"end": 177,
"text": "(Xiao et al., 2019",
"ref_id": "BIBREF13"
},
{
"start": 178,
"end": 197,
"text": ") (Lin et al., 2019",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Trajectories for Pattern Analysis",
"sec_num": "2."
},
{
"text": "Canonical correlation analysis (CCA) was introduced by Hoteling for multi-variate analysis. It helps to find the relation between multiple variables simultaneously that makes analysis easy. The fundamental step in CCA is to find a set of transforming variables that can transform variables such that the transformation in the corresponding new coordinates is maximally correlated. In the process, a set of variables called as canonical weights are used. The solution to this is computationally expensive and time consuming. Therefore, it is convenient to solve the problem as an eigen value problem. The objective function to solve CCA for two variables x and y can be expressed by Equation 2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Canonical Correlation Analysis (CCA)",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C = 0 C xy C yx 0 a b = \u03c1 2 C xx 0 0 C xx",
"eq_num": "(2)"
}
],
"section": "Canonical Correlation Analysis (CCA)",
"sec_num": "3."
},
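The eigenvalue formulation of Equation 2 can be sketched in a few lines of NumPy: the largest eigenvalue of C_xx^-1 C_xy C_yy^-1 C_yx is the squared first canonical correlation. The small ridge term `reg` is a numerical-stability assumption of this sketch, not part of the paper's formulation.

```python
import numpy as np

def first_canonical_correlation(X, Y, reg=1e-8):
    """Sketch of CCA solved as an eigenvalue problem (cf. Equation 2).
    X is n-by-p, Y is n-by-q; returns the first canonical correlation
    rho. `reg` is a small ridge added for numerical stability."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])   # auto-covariance of x
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])   # auto-covariance of y
    Cxy = X.T @ Y / n                               # cross-covariance
    # Cxx^-1 Cxy Cyy^-1 Cyx: its top eigenvalue is rho^2
    M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
    rho2 = np.max(np.linalg.eigvals(M).real)
    return float(np.sqrt(max(rho2, 0.0)))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
Y = X @ rng.normal(size=(3, 2))  # Y is a linear function of X
print(round(first_canonical_correlation(X, Y), 3))  # 1.0
```

Since Y is an exact linear transform of X, the first canonical correlation is 1 up to the regularization.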
{
"text": "where C xy and C yx are the covariances between variables x and y where as C xx , C yy are auto covariances of variables x and y respectively. There are various applications for CCA in the signal processing domain. It has been useful in finding relations which can help for multi-view learning (Liu et al., 2018) . Heycem et.al. applied the technique for feature selection for the problem of depression recognition from speech signals (Kaya et al., 2014) . Wang et.al. used CCA to learn acoustic features that can improve phonetic recognition (Wang et al., 2015) . Apart from the above mentioned applications, CCA is also useful in areas like Blind Source Separation (BSS). The problem aims to recover the original signal when an unknown linear mixture of statistically independent signals are available (Borga and Knutsson, 2001 ). Another approach based on CCA focuses to improve the signal to noise ratio (SNR) in EEG data that is recorded from multiple channels (de Cheveign\u00e9 et al., 2019) .",
"cite_spans": [
{
"start": 294,
"end": 312,
"text": "(Liu et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 435,
"end": 454,
"text": "(Kaya et al., 2014)",
"ref_id": "BIBREF6"
},
{
"start": 457,
"end": 477,
"text": "Wang et.al. used CCA",
"ref_id": null
},
{
"start": 543,
"end": 562,
"text": "(Wang et al., 2015)",
"ref_id": "BIBREF12"
},
{
"start": 804,
"end": 829,
"text": "(Borga and Knutsson, 2001",
"ref_id": "BIBREF2"
},
{
"start": 970,
"end": 993,
"text": "Cheveign\u00e9 et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Canonical Correlation Analysis (CCA)",
"sec_num": "3."
},
{
"text": "In the present work, knowledge from a set of multiple features is used to detect boundary points in a word. The complete procedure is explained in Section 4..",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Canonical Correlation Analysis (CCA)",
"sec_num": "3."
},
{
"text": "The proposed method uses cumulative knowledge of multiple geometric features and use that to form a multi-view trajectory feature vector. The feature vector is then analyzed dynamically to extract the phonetic boundaries. There are",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach for Segmentation",
"sec_num": "4."
},
{
"text": "1. Basic feature set (\u03c4 ) 2. Derived features (\u03c4 D )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach for Segmentation",
"sec_num": "4."
},
{
"text": "Each component is explained in next subsequent subsections. Basic and derived features are defined in the next subsection. The segmentation algorithm is explained in Section 4.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-view boundary detection algorithm",
"sec_num": "3."
},
{
"text": "A speech signal records the nature of vibrations when the vocal chord moves for uttering a sound. The resultant waveform consists of peaks and valleys which helps to understand salient features of the spoken unit and person who has uttered. Thus the waveform records different acoustic events which can be used for various purposes like classification, segmentation, etc. One of the crucial properties of a trajectory is its shape. Each event that is recorded in a speech signal can be distinct in structure. The structural properties of phonetic units have become an interesting area of study (Minematsu, 2005) . The reason for this is that the features corresponds to phonetic characteristics with variations in a lucid way. And also the structural properties of waveform trajectories are useful in understanding the dynamic nature of different phonetic units. In the present work, a set of geometric features are proposed to capture the transitional behavior of the waveform that can be further used in identifying boundary points between different phonetic units. The feature set as a whole contains two different classes i.e. primitive and derived properties. The primitive properties are those characteristics that are inherent in a waveform. They are listed as follows: Definition 1 A data point p i is said to be as peak if",
"cite_spans": [
{
"start": 594,
"end": 611,
"text": "(Minematsu, 2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Trajectory Features",
"sec_num": "4.1."
},
{
"text": "p i\u22121 < p i > p i+1 where \u2200i \u2208 Z Definition 2 A data point p i is said to be a valley if p i\u22121 > p i < p i+1 where \u2200i \u2208 Z",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trajectory Features",
"sec_num": "4.1."
},
{
"text": "Definition 3 Peak position is any integer k, such that 0 < k < m where peak is found at k th location Definition 4 Valley position is any integer k, such that 0 < k < m where valley is found at k th location",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trajectory Features",
"sec_num": "4.1."
},
{
"text": "Definition 5 The data point p k being a peak point between the valleys v q and v r , the difference r \u2212 q is defined as peak width for the peak p k \u2200k, q, r \u2208 Z and q < k < r",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trajectory Features",
"sec_num": "4.1."
},
{
"text": "Definition 6 The data point v k being a valley point between two peaks p q and p r , the difference r \u2212 q is defined as Valley width of valley v k \u2200k, q, r \u2208 Z and q < k < r",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trajectory Features",
"sec_num": "4.1."
},
{
"text": "The slope between two points x = (x 1 , y 1 ) and y = (x 2 , y 2 ) is defined by Equation 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 7",
"sec_num": null
},
{
"text": "Slope(x, y) = y 2 \u2212 y 1 x 2 \u2212 x 1 (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 7",
"sec_num": null
},
{
"text": "Definition 8 The Disparity between two points p i and p k is given by Equation 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 7",
"sec_num": null
},
{
"text": "Disparity(p i , p k ) = (p i \u2212 p k ) 2 , \u2200i, k \u2208 Z (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 7",
"sec_num": null
},
{
"text": "To understand the terms, let us consider Figure 1 . In the figure, peaks and valleys are indicated as P i and V i respectively where i represents the sequence in which they occur in a waveform. The next term, peak-width is the width of the curve in a waveform between two valley positions. In the same way, valley width is the distance between two peaks in which a valley is present. Slope is the general gradient between two points in a geometric space. The points that are considered here are a pair of peaks (or valleys) . This feature gives information of two adjacent peaks (or valleys). In the segmentation algorithm, the average slope between peaks (and valleys) of each frame in the source signal is studied. Finally, the property 'Disparity' between two points (peaks or valleys) is the continuous variation between the heights of peaks and depth of valleys. The property 'slope' considers the position at which the peaks (or valleys) occur whereas 'Disparity' does not regard this property. The derived features of the word \"Zero\" are shown in Figure 2 . ",
"cite_spans": [
{
"start": 511,
"end": 523,
"text": "(or valleys)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 41,
"end": 49,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1054,
"end": 1062,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Definition 7",
"sec_num": null
},
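The primitive and derived properties in Definitions 1-8 can be sketched directly; the toy signal below is a hypothetical example chosen only to exercise each definition.

```python
def peaks_and_valleys(p):
    """Definitions 1-4: a peak is a point above both neighbours, a
    valley is below both; positions are the indices where they occur."""
    peaks = [i for i in range(1, len(p) - 1) if p[i - 1] < p[i] > p[i + 1]]
    valleys = [i for i in range(1, len(p) - 1) if p[i - 1] > p[i] < p[i + 1]]
    return peaks, valleys

def slope(a, b):
    """Definition 7: gradient between points a=(x1, y1) and b=(x2, y2)."""
    return (b[1] - a[1]) / (b[0] - a[0])

def disparity(pi, pk):
    """Definition 8: squared amplitude difference; position-agnostic,
    unlike slope."""
    return (pi - pk) ** 2

sig = [0, 2, 1, 3, 0, 4, 1]           # toy waveform samples
pk, vl = peaks_and_valleys(sig)
print(pk, vl)                          # [1, 3, 5] [2, 4]
# Definition 5: width of the peak at index 3 = gap between flanking valleys
print(vl[1] - vl[0])                   # 2
print(slope((pk[0], sig[pk[0]]), (pk[1], sig[pk[1]])))  # 0.5
print(disparity(sig[pk[0]], sig[pk[1]]))                # 1
```

Slope depends on peak positions while disparity uses only amplitudes, which is exactly the distinction drawn in the text.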
{
"text": "The features that are described in the previous section are analyzed to understand the boundaries of the phonetic units. The algorithm observes the dynamic changes of the waveform over the entire signal by capturing the variations with the extracted features. First, the given speech signal is divided into equal-sized frames and a set of basic features (\u03c4 ) are extracted from each signal. From the basic features, a set of derived features are drawn. Thus the complete feature set is a matrix in which each set of derived features are present. This is a multi-view representation of the waveform trajectory features that will be processed to find the segmentation points. The segmentation procedure comprises of two stages: In the first stage, the feature matrix is analyzed by the CCA procedure which will give a set of coefficients for each feature set simultaneously. These coefficients represent the correlation between the subsets of each feature set which will be used next. In the second stage, a pair of sequential frames that are adjacent will be used to generate correla-tion coefficients. Finally, the coefficients generated in first and second stages are then compared to get the variance between them. The crucial steps in the segmentation procedure can be summarized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Boundary Detection Algorithm",
"sec_num": "4.2."
},
{
"text": "1. The input signal S[n] is divided into a set of frames f 0 , f 1 , ..., f n of equal size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Boundary Detection Algorithm",
"sec_num": "4.2."
},
{
"text": "2. Each frame is then transformed to a set of primitive features :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Boundary Detection Algorithm",
"sec_num": "4.2."
},
{
"text": "S p , S v , S pi , V vi , where: \u2022 S p is set of peaks \u2022 S v is set of valleys",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Boundary Detection Algorithm",
"sec_num": "4.2."
},
{
"text": "\u2022 S pi is set of integers that represent peak positions",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Boundary Detection Algorithm",
"sec_num": "4.2."
},
{
"text": "\u2022 V vi is set of integers that represent valley positions",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Boundary Detection Algorithm",
"sec_num": "4.2."
},
{
"text": "Step 2 are then transformed to a set of trajectory features \u03c4 = \u03c4 sv , \u03c4 sp , \u03c4 dpv , \u03c4 dp .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The features obtained in",
"sec_num": "3."
},
{
"text": "4. The feature sets \u03c4 are analyzed using CCA which gives a set of coefficients represented by CCA \u03c4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The features obtained in",
"sec_num": "3."
},
{
"text": "5. The features sets belonging to subsequent frames are correlated to get the new coefficients. Each set consists of features belonging to 3 adjacent frames. The number of frames is empirically chosen so that variations can be captured in the corresponding CCA coefficients.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The features obtained in",
"sec_num": "3."
},
{
"text": "Step 5 are compared. The peaks in this set forms the boundary points. Thus the peaks in each set are combined to identify the boundary points using the CCA \u03c4 computed by Equation 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variance between coefficients computed in Step 4 and",
"sec_num": "6."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "B p = {CCA \u03c4 dp \u222a CCA \u03c4 dpv \u222a CCA \u03c4sp \u222a CCA \u03c4sv }",
"eq_num": "(5)"
}
],
"section": "Variance between coefficients computed in Step 4 and",
"sec_num": "6."
},
{
"text": "The final variances obtained for each derived feature set are shown in Figure 3 . From the diagram, it can be observed that the changes needed for identifying the phonemic variations are recorded in as peak points in the final variances. But different varieties of variations can be seen separately from features. Therefore it is required to combine the points obtained from each features to get the final boundary points. The detailed algorithm and the flowchart are given in Algorithm 1 and Figure 4 respectively. In the next section, the background setup used for the experiments is described.",
"cite_spans": [],
"ref_spans": [
{
"start": 71,
"end": 79,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 493,
"end": 501,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Variance between coefficients computed in Step 4 and",
"sec_num": "6."
},
{
"text": "The algorithms were implemented using Python platform. The CCA implementation that is available in Pyrcca (Bilenko and Gallant, 2016) library was used in the algorithm. The data used in present work is English digits belong to the Indian accent. The speakers belong to different regions (states) in India. They include male and female speakers. We used 50 speakers data in the analysis. Each English digit was recorded 15 times for all speakers. The digits were recorded using the Cool Edit software with 16KHz sampling rate, mono channel and 16 bits resolution. The behaviour of the algorithm for different cases are discussed in the next section. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5."
},
{
"text": "In the present study, a set of trajectory features are considered to be useful after conducting experiments on various properties. The properties that were observed are shown in Table 1 . Figure 5 gives an idea of the nature of these features. They were not used as part of feature set in the segmentation process rather they are useful in understanding the characteristics of regions belonging to different phonetic units. Some observations are presented in each subsequent subsections separately. The analysis of the algorithm's nature for peaks and valleys are presented separately in subsequent subsections.",
"cite_spans": [],
"ref_spans": [
{
"start": 178,
"end": 185,
"text": "Table 1",
"ref_id": null
},
{
"start": 188,
"end": 196,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "6."
},
{
"text": "To understand meaningful cues from speech, an analysis of the nature of peaks in different classes of sounds like vowels, fricatives and stops are done. These clues are further used to find the boundaries of phonemes. It is helpful to know the regions where changes are occurring corresponding to the behaviour of attributes. Peaks can be classified into different types based on height and width. Vowels like /i/ and /e/ have the regions with higher peaks and vowels /a/, /o/ and /u/ have wider peaks. Figure 5 shows different statistics of peaks. We can understand that the vowel regions have comparatively more wider peaks than nonvowel regions. The analysis of slope was carried in two ways:",
"cite_spans": [],
"ref_spans": [
{
"start": 503,
"end": 511,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Peak Attributes Analysis",
"sec_num": "6.1."
},
{
"text": "Algorithm 1: Boundary detection algorithm\nInput: S[n]: Speech segment of length n; k: Size of the frame\nOutput: BP: Boundary points of phonetic units\n1 begin\n2 Step 1: Normalize S[n]\n3 Step 2: Divide S[n] into frames with equal size k\n4 Step 3: Let F n be the number of frames\n5 for i \u2190 0 to F n do\n6 Step 3.1: Find peaks using Definition 1\n7 Step 3.2: Find valleys using Definition 2\n8 Step 4: for i \u2190 0 to F n do\n9 for j \u2190 0 to Max(n peaks , n valleys ) do\n10 Step 4.1: T sp \u2190 Slope(peaks j , peaks j+1 )\n11 Step 4.2: T sv \u2190 Slope(valleys j , valleys j+1 )\n12 Step 4.3: T dp \u2190 Disparity(peaks j , peaks j+1 )\n13 Step 4.4: T dv \u2190 Disparity(valleys j , valleys j+1 )\n14 \u03c4 i \u2190 {T spi , T svi , T dpi , T dvi }\n15 Step 5: for i \u2190 0 to F n do\n16 canonicalcoef i \u2190 CCA(\u03c4 i )\n17 Step 6: for i \u2190 0 to F n do\n18 coeffnew i \u2190 CCA validate ((\u03c4 i , ..., \u03c4 i+3 ), (\u03c4 i+3 , ..., \u03c4 i+6 ))\n19 variance i \u2190 CCA Variance (canonicalcoef i , coeffnew i )\n20 Step 7: BP \u2190 peaks(variance sp ) \u222a peaks(variance sv ) \u222a peaks(variance dp ) \u222a peaks(variance dv )\n21 return BP",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Peak Attributes Analysis",
"sec_num": "6.1."
},
{
"text": "Table 1 : Attributes used for analysis. S.No. / Attribute: 1 Peak; 2 Peak width; 3 Peak position; 4 Average difference between adjacent peak values; 5 Average slope between adjacent peak values; 6 Valley; 7 Valley width; 8 Valley position; 9 Average difference between adjacent valley values; 10 Average slope between adjacent valley values. 1. Slope between adjacent peaks in the same frame",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Peak Attributes Analysis",
"sec_num": "6.1."
},
{
"text": "This attribute is used for understanding structural significance at phoneme boundaries. The slope between adjacent peaks within the same frame shows little variation, and the difference between frames belonging to the same phonetic unit is small, but this value is observed to be larger at phoneme boundaries. The slope between peaks of vowel regions and non-vowel regions exhibits enough variation to help locate the boundary points. Figure 6 and Figure 7 show the slope and the disparity between peaks of adjacent frames for the words \"Zero\" to \"Nine\". The changes in the waveforms are evident, so structural clues can be captured by these features. An interesting phenomenon is observed especially in vowel regions: the slope and disparity grow linearly at the beginning of the vowel region, start decaying at the middle part, and continue to decay until the boundary is reached. This behaviour is observed in both intra-frame and inter-frame settings. There is a sudden increase in the slope value at the boundaries of different phonemes, and the average disparity between peaks within a vowel region is larger than in non-vowel regions. Figure 7 shows the disparity between peaks for the word \"Zero\"; prominent changes can be observed at boundary frames. The purpose of the inter-frame distance analysis is to understand how peak values relate to those of neighbouring frames. This distance is larger at phoneme boundaries than in the interior regions of phonemes, although, as with the intra-frame difference, it is also high within vowel regions. The difference between two frames is stable in regions belonging to the same phoneme. We therefore infer that the intra-frame difference can be used to identify syllable boundaries, whereas the inter-frame difference is useful for identifying phoneme boundaries. Figure 12 shows the distance between peaks in adjacent frames for the word \"Zero\"; changes can again be observed clearly at the boundary frames of phonemes and syllables.",
"cite_spans": [],
"ref_spans": [
{
"start": 442,
"end": 450,
"text": "Figure 6",
"ref_id": "FIGREF4"
},
{
"start": 455,
"end": 463,
"text": "Figure 7",
"ref_id": null
},
{
"start": 1176,
"end": 1184,
"text": "Figure 7",
"ref_id": null
},
{
"start": 1869,
"end": 1878,
"text": "Figure 12",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Slope between peaks of adjacent frames",
"sec_num": "2."
},
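The intra-frame and inter-frame slope/disparity measures described above can be sketched as follows. The three-point peak rule, the use of the highest peak per frame, and equal-length frames are illustrative assumptions, not the authors' exact definitions.

```python
import numpy as np

def find_peaks(frame):
    """Indices of local maxima in one frame (simple 3-point rule; an assumption)."""
    x = np.asarray(frame, dtype=float)
    return [i for i in range(1, len(x) - 1) if x[i] > x[i - 1] and x[i] > x[i + 1]]

def slope_and_disparity(frame_a, frame_b):
    """Slope and amplitude disparity between the dominant peaks of two adjacent frames."""
    pa, pb = find_peaks(frame_a), find_peaks(frame_b)
    if not pa or not pb:
        return 0.0, 0.0
    # Use the highest peak in each frame as its representative point.
    ia = max(pa, key=lambda i: frame_a[i])
    ib = max(pb, key=lambda i: frame_b[i])
    dt = (ib + len(frame_a)) - ia          # sample distance across the frame boundary
    slope = (frame_b[ib] - frame_a[ia]) / dt
    disparity = abs(frame_b[ib] - frame_a[ia])
    return slope, disparity
```

Frames drawn from the same phonetic unit should yield small, stable values; a jump in either measure hints at a phoneme boundary, consistent with the observations above.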
{
"text": "The second crucial class of waveform features in the framework is valley attributes. Here, the nature of valleys was studied through the properties of deeper valleys, higher valleys, positive valleys, negative valleys, etc. Figure 10 shows the statistics of these attributes; the mean and standard deviation of each property are shown in the corresponding subfigure. These graphs show a temporal variation of the statistics across frames, which implies that the properties are significant for phoneme boundary analysis, and they let us understand how valleys vary across different segments of the speech sub-units. Useful observations from the analysis are listed below:",
"cite_spans": [],
"ref_spans": [
{
"start": 234,
"end": 240,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Valley Attributes Analysis",
"sec_num": "6.2."
},
{
"text": "1. Deeper as well as shallower valleys are found more often in vowel regions than in non-vowel regions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Valley Attributes Analysis",
"sec_num": "6.2."
},
{
"text": "2. Valleys in vowels are wide.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Valley Attributes Analysis",
"sec_num": "6.2."
},
{
"text": "3. The standard deviation in vowel regions is comparatively higher than in non-vowel regions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Valley Attributes Analysis",
"sec_num": "6.2."
},
{
"text": "These qualities show that structural variation can also be captured from valley features. For example, vowels /i/ and /o/ differ in their valley properties: /i/ has deeper valleys than /o/. This indicates a clear deviation between vowel and non-vowel regions, and the statistics suggest that it is meaningful to use valley properties for understanding structural significance. The two properties, slope and disparity, for the words \"Zero\" to \"Nine\" are shown in Figure 8 and Figure 9 respectively. We can see the structural consistency across different utterances of the same digit by a speaker. ",
"cite_spans": [],
"ref_spans": [
{
"start": 506,
"end": 514,
"text": "Figure 8",
"ref_id": null
},
{
"start": 519,
"end": 528,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Valley Attributes Analysis",
"sec_num": "6.2."
},
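A minimal sketch of how per-frame valley statistics such as those in Figure 10 might be computed. The three-point minimum rule and the rising-run width definition are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def valley_attributes(frame):
    """Depth and width of each valley (local minimum) in one frame."""
    x = np.asarray(frame, dtype=float)
    valleys = [i for i in range(1, len(x) - 1) if x[i] < x[i - 1] and x[i] < x[i + 1]]
    depths = [-x[i] for i in valleys]      # deeper valley -> larger depth value
    widths = []
    for i in valleys:
        # Extend left and right while the samples keep rising away from the minimum.
        l, r = i, i
        while l > 0 and x[l - 1] > x[l]:
            l -= 1
        while r < len(x) - 1 and x[r + 1] > x[r]:
            r += 1
        widths.append(r - l)
    return depths, widths

def frame_stats(frames):
    """Mean and standard deviation of valley depth per frame (cf. Figure 10)."""
    stats = []
    for f in frames:
        depths, _ = valley_attributes(f)
        if depths:
            stats.append((float(np.mean(depths)), float(np.std(depths))))
        else:
            stats.append((0.0, 0.0))
    return stats
```

Under this sketch, vowel frames would show larger mean depth, larger widths, and higher standard deviation than non-vowel frames, matching observations 1-3 above.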
{
"text": "The method was also evaluated in the presence of noise in the input signals; white noise up to 20 dB SNR was considered. Figure 11 shows a source speech signal along with the CCA coefficients of each feature vector. Comparing Figure 3 and Figure 11 helps in understanding the behaviour of the algorithm on noisy signals. The first point to note is that the structure of the same feature vectors varies: in this example, the disparity vector differs in the variance of its CCA coefficients. The presence of noise makes adjacent frames belonging to two different phonetic units vary much more strongly, which is reflected in the CCA coefficients. The multi-view analysis enables the method to learn the necessary clues from different vectors, so a failure to capture the boundary points in one view does not strongly influence the final boundary points. The results therefore suggest that the proposed approach can be effective under noisy conditions as well.",
"cite_spans": [],
"ref_spans": [
{
"start": 126,
"end": 135,
"text": "Figure 11",
"ref_id": "FIGREF0"
},
{
"start": 242,
"end": 250,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 255,
"end": 264,
"text": "Figure 11",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Characteristics of Method in Noisy Conditions",
"sec_num": "6.3."
},
{
"text": "The proposed approach succeeds in identifying the boundary points in 90% of the cases. In the failure cases, the mis-identification of boundary points is influenced by speaker characteristics, including accent, pauses between the phonetic units, etc. The time complexity of the approach comprises two major parts: the feature extraction step and CCA. The time complexities of the different steps are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance of the Algorithm",
"sec_num": "6.4."
},
{
"text": "1. Peak and valley computation: O(n).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance of the Algorithm",
"sec_num": "6.4."
},
{
"text": "2. Finding the trajectory properties needs constant time O(1) per elementary operation, which amounts to a linear time complexity O(n) for n samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance of the Algorithm",
"sec_num": "6.4."
},
{
"text": "3. Lastly, the CCA algorithm requires O(n^3) time, equivalent to the eigenvalue decomposition method (Uurtio et al., 2017).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance of the Algorithm",
"sec_num": "6.4."
},
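The O(n^3) CCA step in item 3 corresponds to an eigenvalue problem on the views' covariance matrices. A minimal two-view sketch using NumPy is given below; the small ridge term `reg` is an addition for numerical stability and is not part of the paper's method.

```python
import numpy as np

def cca_first_coefficient(X, Y, reg=1e-6):
    """First canonical correlation between views X and Y (rows = frames).

    Solves the standard eigenvalue formulation of CCA; this plain
    two-view form is an illustrative sketch, not the authors' code.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    # rho^2 are the eigenvalues of Cxx^{-1} Cxy Cyy^{-1} Cyx -- the O(n^3) step.
    M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
    rho2 = float(np.max(np.linalg.eigvals(M).real))
    return float(np.sqrt(min(max(rho2, 0.0), 1.0)))
```

Feature vectors from the same phonetic unit should yield a high first coefficient, while frames straddling a boundary decorrelate the views.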
{
"text": "Therefore, the total time complexity of the approach works out to O(n) + 4 x O(n) + 2 x O(n^3). The run time of the method is approximately 470 milliseconds. The method was tested on a system with the following configuration:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance of the Algorithm",
"sec_num": "6.4."
},
{
"text": "-Processor : i5 (3.20 GHz)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance of the Algorithm",
"sec_num": "6.4."
},
{
"text": "-Memory : 8 GB Figure 7: CCA of different features for the word \"zero\" (Noisy signal)",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 23,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performance of the Algorithm",
"sec_num": "6.4."
},
{
"text": "In this paper, a phoneme segmentation approach based on multi-view geometrical features is proposed. The structural properties of speech trajectories are used to find the boundaries between phonetic units using the CCA method. The dissimilarities in geometrical features across a speech trajectory are used as parameters to identify boundary points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7."
},
{
"text": "To validate the approach, Indian-accented spoken English digits data was used in the experiments. The experiments gave reasonable results, from which we can infer that the method is effective in identifying the boundary points. Since the approach does not require a training process, the requirement for large datasets is dispensed with. As the complexity of the method is reasonable, the run time is low, and hence the method is well suited to low- or zero-resource languages. The dataset is shared in 1 for future use by researchers. The method is currently being studied at the sentence level for Hindi. Figure 8: Slope between peaks of the words \"Zero\" to \"Nine\" for a speaker Figure 9: Disparity between peaks of the words \"Zero\" to \"Nine\" for a speaker Figure 10: Slope between valleys of the words \"Zero\" to \"Nine\" for a speaker",
"cite_spans": [],
"ref_spans": [
{
"start": 646,
"end": 654,
"text": "Figure 8",
"ref_id": null
},
{
"start": 721,
"end": 729,
"text": "Figure 9",
"ref_id": null
},
{
"start": 800,
"end": 809,
"text": "Figure 10",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7."
},
{
"text": "Lin, Z., Zeng, Q., Duan, H., Liu, C., and Lu, F. (2019). A semantic user distance metric using GPS trajectory data.",
"cite_spans": [
{
"start": 19,
"end": 55,
"text": "Duan, H., Liu, C., and Lu, F. (2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7."
},
{
"text": "IEEE Access, 7:30185-30196.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7."
},
{
"text": "Liu, S. and Sim, K. C. (2012). Implicit trajectory modelling using temporally varying weight regression for automatic speech recognition. In 2012 IEEE International",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An acoustic-phonetic featurebased system for automatic phoneme recognition in continuous speech",
"authors": [
{
"first": "A",
"middle": [
"A"
],
"last": "Ali",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Van Der Spiegel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Mueller",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Haentjens",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Berman",
"suffix": ""
}
],
"year": 1999,
"venue": "Circuits and Systems, 1999. IS-CAS'99. Proceedings of the 1999 IEEE International Symposium on",
"volume": "3",
"issue": "",
"pages": "118--121",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali, A. A., Van der Spiegel, J., Mueller, P., Haentjens, G., and Berman, J. (1999). An acoustic-phonetic feature- based system for automatic phoneme recognition in con- tinuous speech. In Circuits and Systems, 1999. IS- CAS'99. Proceedings of the 1999 IEEE International Symposium on, volume 3, pages 118-121. IEEE.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Pyrcca: regularized kernel canonical correlation analysis in python and its applications to neuroimaging",
"authors": [
{
"first": "S",
"middle": [],
"last": "Atev",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "N",
"middle": [
"P"
],
"last": "Papanikolopoulos",
"suffix": ""
},
{
"first": "N",
"middle": [
"Y"
],
"last": "Bilenko",
"suffix": ""
},
{
"first": "J",
"middle": [
"L"
],
"last": "Gallant",
"suffix": ""
}
],
"year": 2010,
"venue": "IEEE Transactions on Intelligent Transportation Systems",
"volume": "11",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Atev, S., Miller, G., and Papanikolopoulos, N. P. (2010). Clustering of vehicle trajectories. IEEE Transactions on Intelligent Transportation Systems, 11(3):647-657, Sep. Bilenko, N. Y. and Gallant, J. L. (2016). Pyrcca: regular- ized kernel canonical correlation analysis in python and its applications to neuroimaging. Frontiers in neuroin- formatics, 10:49.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A canonical correlation approach to blind source separation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Borga",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Knutsson",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Borga, M. and Knutsson, H. (2001). A canonical correla- tion approach to blind source separation. Report LiU- IMT-EX-0062 Department of Biomedical Engineering, Linkping University.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multiway canonical correlation analysis of brain data",
"authors": [
{
"first": "A",
"middle": [],
"last": "De Cheveign\u00e9",
"suffix": ""
},
{
"first": "G",
"middle": [
"M D"
],
"last": "Liberto",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Arzounian",
"suffix": ""
},
{
"first": "D",
"middle": [
"D"
],
"last": "Wong",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hjortkjaer",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Fuglsang",
"suffix": ""
},
{
"first": "L",
"middle": [
"C"
],
"last": "Parra",
"suffix": ""
}
],
"year": 2019,
"venue": "NeuroImage",
"volume": "186",
"issue": "",
"pages": "728--740",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "de Cheveign\u00e9, A., Liberto, G. M. D., Arzounian, D., Wong, D. D., Hjortkjaer, J., Fuglsang, S., and Parra, L. C. (2019). Multiway canonical correlation analysis of brain data. NeuroImage, 186:728 -740.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Structure-based and template-based automatic speech recognition -comparing parametric and non-parametric approaches",
"authors": [
{
"first": "L",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Strik",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deng, L. and Strik, H. (2007). Structure-based and template-based automatic speech recognition -compar- ing parametric and non-parametric approaches. In IN- TERSPEECH.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Convoy queries in spatio-temporal databases",
"authors": [
{
"first": "H",
"middle": [],
"last": "Jeung",
"suffix": ""
},
{
"first": "H",
"middle": [
"T"
],
"last": "Shen",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2008,
"venue": "IEEE 24th International Conference on Data Engineering",
"volume": "",
"issue": "",
"pages": "1457--1459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeung, H., Shen, H. T., and Zhou, X. (2008). Convoy queries in spatio-temporal databases. In 2008 IEEE 24th International Conference on Data Engineering, pages 1457-1459, April.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Cca based feature selection with application to continuous depression recognition from acoustic speech features",
"authors": [
{
"first": "H",
"middle": [],
"last": "Kaya",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Eyben",
"suffix": ""
},
{
"first": "A",
"middle": [
"A"
],
"last": "Salah",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Schuller",
"suffix": ""
}
],
"year": 2014,
"venue": "2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "3729--3733",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaya, H., Eyben, F., Salah, A. A., and Schuller, B. (2014). Cca based feature selection with application to contin- uous depression recognition from acoustic speech fea- tures. In 2014 IEEE International Conference on Acous- tics, Speech and Signal Processing (ICASSP), pages 3729-3733. IEEE.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A complete canonical correlation analysis for multiview learning",
"authors": [
{
"first": "H",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "R",
"middle": [
"W"
],
"last": "Liu",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Xiong",
"suffix": ""
}
],
"year": 2018,
"venue": "Disparity between valleys of the words \"Zero\" to \"Nine\" for a speaker Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "11",
"issue": "",
"pages": "3254--3258",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, H., Liu, J., Wu, K., Yang, Z., Liu, R. W., and Xiong, N. (2018). Spatio-temporal vessel trajectory cluster- 1 IITG DIGITS: https://drive.google.com/drive/folders/ 1px1p2p5QRNNvFvLJT9hgkA93N7U twzs Figure 11: Disparity between valleys of the words \"Zero\" to \"Nine\" for a speaker Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4761-4764, March. Liu, Y., Li, Y., and Yuan, Y.-H. (2018). A complete canon- ical correlation analysis for multiview learning. In 2018 25th IEEE International Conference on Image Process- ing (ICIP), pages 3254-3258. IEEE.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Mathematical evidence of the acoustic universal structure in speech",
"authors": [
{
"first": "N",
"middle": [],
"last": "Minematsu",
"suffix": ""
}
],
"year": 2005,
"venue": "IEEE International Conference on Acoustics, Speech, and Signal Processing",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minematsu, N. (2005). Mathematical evidence of the acoustic universal structure in speech. In Proceedings. (ICASSP '05). IEEE International Conference on Acous- tics, Speech, and Signal Processing, 2005., volume 1, pages I/889-I/892 Vol. 1, March.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Articulatory trajectories for large-vocabulary speech recognition",
"authors": [
{
"first": "V",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Nam",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Richey",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Liberman",
"suffix": ""
}
],
"year": 2013,
"venue": "2013 IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "7145--7149",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitra, V., Wang, W., Stolcke, A., Nam, H., Richey, C., Yuan, J., and Liberman, M. (2013). Articulatory trajec- tories for large-vocabulary speech recognition. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 7145-7149, May.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Linear trajectory segmental hmms",
"authors": [
{
"first": "M",
"middle": [
"J"
],
"last": "Russell",
"suffix": ""
},
{
"first": "W",
"middle": [
"J"
],
"last": "Holmes",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE Signal Processing Letters",
"volume": "4",
"issue": "3",
"pages": "72--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Russell, M. J. and Holmes, W. J. (1997). Linear trajec- tory segmental hmms. IEEE Signal Processing Letters, 4(3):72-74, March.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A semi-continuous stochastic trajectory model for phoneme-based continuous speech recognition",
"authors": [
{
"first": "O",
"middle": [],
"last": "Siohan",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": ";",
"middle": [],
"last": "Uurtio",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Monteiro",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Kandola",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Shawe-Taylor",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Fernandez-Reyes",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Rousu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1996,
"venue": "IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings",
"volume": "1",
"issue": "",
"pages": "1--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siohan, O. and Yifan Gong. (1996). A semi-continuous stochastic trajectory model for phoneme-based contin- uous speech recognition. In 1996 IEEE International Conference on Acoustics, Speech, and Signal Process- ing Conference Proceedings, volume 1, pages 471-474 vol. 1, May. Uurtio, V., Monteiro, J. M., Kandola, J., Shawe-Taylor, J., Fernandez-Reyes, D., and Rousu, J. (2017). A tutorial on canonical correlation methods. ACM Computing Sur- veys (CSUR), 50(6):1-33.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Unsupervised learning of acoustic features via deep canonical correlation analysis",
"authors": [
{
"first": "W",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Livescu",
"suffix": ""
},
{
"first": "J",
"middle": [
"A"
],
"last": "Bilmes",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "4590--4594",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, W., Arora, R., Livescu, K., and Bilmes, J. A. (2015). Unsupervised learning of acoustic features via deep canonical correlation analysis. In 2015 IEEE In- ternational Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4590-4594, April.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Approximate similarity measurements on multi-attributes trajectories data",
"authors": [
{
"first": "P",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ang",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Jiawei",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Lei",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Access",
"volume": "7",
"issue": "",
"pages": "10905--10915",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiao, P., Ang, M., Jiawei, Z., and Lei, W. (2019). Approx- imate similarity measurements on multi-attributes trajec- tories data. IEEE Access, 7:10905-10915.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automatic speech segmentation combining an hmm-based approach and recurrence trend analysis",
"authors": [
{
"first": "R",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Zu",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings. 2006 IEEE International Conference on",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yan, R., Zu, Y., and Zhu, Y. (2006). Automatic speech segmentation combining an hmm-based approach and re- currence trend analysis. In Acoustics, Speech and Sig- nal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on, volume 1, pages I-I. IEEE.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Stochastic trajectory modeling and sentence searching for continuous speech recognition",
"authors": [
{
"first": "Yifan",
"middle": [],
"last": "Gong",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE Transactions on Speech and Audio Processing",
"volume": "5",
"issue": "1",
"pages": "33--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yifan Gong. (1997). Stochastic trajectory modeling and sentence searching for continuous speech recognition. IEEE Transactions on Speech and Audio Processing, 5(1):33-44, Jan.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Figure 2-a shows the normalized source signal, Figure 2-b and Figure 2-c give the slope and disparity of peaks respectively. The slope and disparity of valleys are shown in Figure 2-d and Figure 2-e respectively. The procedure used for segmentation is explained in the next subsection. Peaks and valleys of a speech segment. Figure 2: Peak attributes for the word \"zero\"",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "CCA of different features for the word \"zero\"",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "Flowchart for the boundary detection algorithm",
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"text": "Peak statistics of the word \"zero\"",
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"uris": null,
"text": "Valley statistics for the word \"zero\"",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"text": "In the second stage, the aforementioned features are transformed further to obtain derived attributes. This set contains the following elements:",
"num": null,
"type_str": "table",
"content": "<table><tr><td>4. Valley position</td></tr><tr><td>1. Peak width</td></tr><tr><td>2. Valley width</td></tr><tr><td>3. Slope of peaks and valleys</td></tr><tr><td>4. Disparity of peaks and valleys</td></tr><tr><td>For a segment of speech signal S[n] with size m, the terms</td></tr><tr><td>are defined in Definitions 1 to 8.</td></tr><tr><td>1. Peak</td></tr><tr><td>2. Valley</td></tr><tr><td>3. Peak position</td></tr></table>",
"html": null
}
}
}
}