{"note": "Frequency and Amplitude Perturbation Measures (Jitter and Shimmer)"}
{"note": "Measures of signal perturbation form another well-researched area of voice science, with the objective of estimating cycle-to-cycle differences in the fundamental period (Hillenbrand, 1987). Small differences, or perturbations, between adjacent cycles are always present, even when attempting to produce a steady sound such as a sustained vowel (Maryn, Corthals, De Bodt, Van Cauwenberge, & Deliyski, 2009). These variations reflect the imperfect, quasi-periodic mechanism of voice production (Baken & Orlikoff, 2000). Two common measures, termed jitter and shimmer, are used to quantify frequency and amplitude variations, respectively (Kitajima & Gould, 1976; Koike, 1969; Lieberman, 1963). In a perfectly stable production system, jitter and shimmer would equal zero (Baken & Orlikoff, 2000). That is, no differences in either the frequency or amplitude domain would exist, as with a pure tone, because each adjacent vibratory cycle would be identical. Consequently, an increase in jitter and/or shimmer is a measurable representation of inconsistent vibratory patterns (Baken & Orlikoff, 2000). Typically, increases in measures of perturbation are associated with worsening overall voice quality, an acoustic phenomenon that is perceived as dysphonia (Maryn et al., 2009)."}
{"note": "As noted, frequency perturbation is more commonly known as jitter (Baken & Orlikoff, 2000), and this measure represents frequency variability of the fundamental period. It is a measurement of the frequency changes between immediately adjacent periods; by disregarding non-adjacent cycles, it remains insensitive to longer-term frequency changes that may reflect voluntary adjustments (Baken & Orlikoff, 2000; Boersma, 2009; Parsa & Jamieson, 2014). Jitter is typically presented as a percentage of the fundamental frequency. Higher measured levels of jitter correspond to increased vocal abnormality represented in the frequency domain."}
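The local jitter calculation described above can be sketched in a few lines. This is a minimal illustration, not the study's implementation: the function name and the period values are made up, and in practice the cycle periods would first be extracted by a pitch tracker.

```python
def local_jitter_percent(periods):
    """Mean absolute difference between adjacent fundamental periods,
    expressed as a percentage of the mean period."""
    if len(periods) < 2:
        raise ValueError("need at least two cycles")
    diffs = [abs(b - a) for a, b in zip(periods, periods[1:])]
    mean_abs_diff = sum(diffs) / len(diffs)
    mean_period = sum(periods) / len(periods)
    return 100.0 * mean_abs_diff / mean_period

# A perfectly periodic signal yields zero jitter:
print(local_jitter_percent([0.008, 0.008, 0.008]))  # 0.0
# Small cycle-to-cycle perturbations raise it:
print(local_jitter_percent([0.0080, 0.0081, 0.0079, 0.0080]))
```

Note that only adjacent-period differences enter the numerator, which is why slow, voluntary pitch changes contribute little to the measure.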
{"note": "Amplitude perturbation is termed vocal shimmer, and this measure quantifies short-term variability in signal amplitude (Baken & Orlikoff, 2000; Wendahl, 1966). Calculation of shimmer relies on the peak amplitude of each adjacent vocal period (Baken & Orlikoff, 2000; Horii, 1980). Shimmer can be calculated for two or more adjacent periods, and the result is typically presented in decibels (dB) (Baken & Orlikoff, 2000). As with jitter, increasing variation in the amplitude of a vocal signal negatively impacts perceptions of vocal quality, and it has been found to be an important contributor to the perception of hoarseness (Eskenazi et al., 1990; Wendahl, 1966, 2009)."}
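A common dB formulation of shimmer, paralleling the jitter sketch above, averages the absolute log-ratio of adjacent cycle peak amplitudes. The function name and the amplitude values below are illustrative only; real peak amplitudes come from cycle segmentation of the recorded waveform.

```python
import math

def shimmer_db(peak_amplitudes):
    """Mean absolute base-10 log ratio of adjacent cycle peak
    amplitudes, scaled by 20 to express the result in decibels."""
    if len(peak_amplitudes) < 2:
        raise ValueError("need at least two cycles")
    ratios = [20.0 * abs(math.log10(b / a))
              for a, b in zip(peak_amplitudes, peak_amplitudes[1:])]
    return sum(ratios) / len(ratios)

print(shimmer_db([1.0, 1.0, 1.0]))  # 0.0 for a perfectly steady signal
print(round(shimmer_db([1.00, 1.05, 0.97, 1.02]), 3))
```

As with jitter, a perfectly stable mechanism yields zero, and values rise as adjacent cycles diverge in amplitude.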
{"note": "Jitter and shimmer are used as one important part of an acoustic assessment of voice. Increased values of either have been shown to represent increased disorder in the vibratory pattern of voice production at the level of the glottis, or vocal folds. In a perfect mechanism, there would be no cycle-to-cycle variability; conversely, as the vocal mechanism behaves in a more orderly fashion, which we attribute to normal function, measures of perturbation decrease. However, as mentioned above, even when producing relatively steady-state sounds (e.g., a sustained vowel), there is always variability between adjacent cycles due to the imperfect mechanics of the larynx. Accordingly, these measurements have been shown to increase in pathologic voices, and a combination of these measures can be used as an overall correlate of vocal quality (Werth, Voigt, D\u00f6llinger, Eysholdt, & Lohscheller, 2010). This capacity allows these measures to serve as a valuable means of monitoring and indexing vocal change over time."}
{"note": "Harmonic-to-Noise Ratio (HNR)\n\u201cHoarseness\u201d, while generically non-descript, is a common symptom of altered voice quality, reflecting a perceptual judgment that encompasses changes to pitch, loudness, and quality (Schwartz, Rosenfeld, Dailey, & Cohen, 2009). It is a term describing a rough or noisy voice that is frequently associated with poor voice quality, and it has often been used to broadly categorize the presence of dysphonia (Schwartz et al., 2009). The harmonics-to-noise ratio (HNR) was developed by Yumoto et al. (1982) as a measure to identify pathology based on the detection of hoarseness. It is well established that the sound of hoarseness results from the replacement of harmonic energy with noise energy (Isshiki, Yanagihara, & Morimoto, 1966; Yanagihara, 1967). Hoarseness signifies an increase in aperiodic vibration and in the associated turbulence that replaces the quasi-periodic, harmonic nature of voice (Baken & Orlikoff, 2000; Eadie & Doyle, 2005; Isshiki et al., 1966; Yanagihara, 1967). The development of this ratio as an acoustic measure seeks to quantify the spectrographic features of the voice (i.e., all energy components that comprise the vocal signal) that accompany hoarseness (Baken & Orlikoff, 2000; Kojima, Gould, & Lambiase, 1979; Kojima, Gould, & Isshiki, 1980; Yumoto et al., 1982)."}
{"note": "The theory behind the HNR measure relies on the idea that there are two major components to voice: near-periodic and aperiodic (Baken & Orlikoff, 2000). The near-periodic, or quasi-periodic, component makes up the harmonic aspect of voice, whereas aperiodicity contributes random noise to the signal through sound pressure variation (Baken & Orlikoff, 2000; Eadie & Doyle, 2005). With increasing hoarseness, the periodic signal is progressively contaminated by this random noise (Baken & Orlikoff, 2000; Kim, Kakita, & Hirano, 1982; Kojima et al., 1980; Yumoto et al., 1982). The degree of contamination is expressed as a ratio of the harmonic energy to the noise components (Baken & Orlikoff, 2000). This can be calculated by taking the averaged waveform, which represents the harmonic component, and subtracting it from the non-averaged, or absolute, waveform, leaving the noise component (Baken & Orlikoff, 2000). A ratio of the averaged-waveform and noise-waveform values yields the harmonics-to-noise ratio (Baken & Orlikoff, 2000)."}
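The cycle-averaging logic just described can be sketched as follows. This is a simplified, hypothetical illustration of the approach (function name and the toy "cycles" are invented): the signal is assumed to have already been segmented into cycles of equal length, the cycle average serves as the harmonic estimate, and each cycle's deviation from that average is treated as noise.

```python
import math

def hnr_db(cycles):
    """Average the cycles to estimate the harmonic component, treat each
    cycle's deviation from that average as noise, and return the ratio
    of harmonic energy to mean noise energy in dB."""
    n = len(cycles[0])
    avg = [sum(c[i] for c in cycles) / len(cycles) for i in range(n)]
    harmonic_energy = sum(v * v for v in avg)
    noise_energy = sum((c[i] - avg[i]) ** 2
                       for c in cycles for i in range(n)) / len(cycles)
    return 10.0 * math.log10(harmonic_energy / noise_energy)

# Two nearly identical cycles -> high HNR; larger deviations -> lower HNR.
clean = [[0.0, 1.0, 0.0, -1.0], [0.0, 1.01, 0.0, -0.99]]
noisy = [[0.0, 1.0, 0.1, -1.0], [0.1, 0.8, -0.1, -1.1]]
print(hnr_db(clean) > hnr_db(noisy))  # True
```

Real implementations must also handle period extraction and resampling each cycle to a common length before averaging.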
{"note": "Increased turbulence at the level of the glottis leads to a diminished harmonics-to-noise ratio as the turbulence elevates the noise component in the signal (Ferrand, 2002). From an auditory-perceptual perspective, HNR represents a broad measure of voice quality (Ferrand, 2002). Studies evaluating vocal quality have found HNR to be an important correlate as an acoustic measure (Eskenazi et al., 1990). The voice is a complex waveform, made up of the quasi-periodic and aperiodic components. The quasi-periodic nature of the voice allows the subtraction of the averaged waveform, which represents the harmonic sound, from the complex signal leaving only the noise component. This component is the aperiodic vibration that contributes noise to the signal. Increasing noise in the signal decreases the ratio, representing a decrease in overall vocal quality. In this way, HNR can be used as a correlate of overall voice quality."}
{"note": "Cepstral Peak Prominence (CPP)"}
{"note": "An ideal measure of voice is one that remains reliable in the context of aperiodicity (Heman-Ackah et al., 2003); such a measure can evaluate even the most chaotic vibratory patterns associated with severely disordered voices. Measures of perturbation, jitter and shimmer, rely on accurate detection of the fundamental period, which becomes increasingly difficult in severely aperiodic voices (Heman-Ackah et al., 2003). A powerful additional method of calculating the fundamental frequency, relying on Fourier analysis of the vocal signal, was first described by Noll (1964) (Baken & Orlikoff, 2000; Hillenbrand & Houde, 1996; Noll, 1964). Using these concepts, Hillenbrand et al. (1994) developed an acoustic measure to predict dysphonia severity, termed cepstral peak prominence (Heman-Ackah et al., 2003; Hillenbrand, Cleveland, & Erickson, 1994; Hillenbrand & Houde, 1996)."}
{"note": "Cepstral peak prominence (CPP) is a somewhat newer acoustic measure, one that evaluates the vocal signal without relying on identification of the fundamental period (Heman-Ackah et al., 2003, 2014). A spectral representation of the vocal signal results from a Fourier transform of the time-domain waveform (Heman-Ackah et al., 2003). This takes the signal from the time domain (amplitude versus time, in seconds) and produces a spectrum of intensity (dB) versus frequency (Hz) (Heman-Ackah et al., 2003). The amplitudes of this spectral representation are expressed logarithmically, increasing the visibility of small differences in amplitude (Heman-Ackah et al., 2003). With respect to voice, the fundamental frequency typically shows the largest amplitude, followed by the harmonic frequencies shaped in the resonant tract of the upper airway (Heman-Ackah et al., 2003). A cepstral representation, called a cepstrum, is achieved by performing a second Fourier transform; the signal is then presented as magnitude (dB) versus quefrency (ms) (Heman-Ackah et al., 2003; Noll, 1964). \u201cQuefrency\u201d is a term coined by Noll to avoid the confusion of transforming the signal back into the time domain, where the unit of measurement is seconds (Heman-Ackah et al., 2003; Noll, 1964). This second Fourier transform takes the spectrum from the frequency domain back to the time domain (Heman-Ackah et al., 2003) and serves to further clarify the harmonic components of the voice, whereby the most prominent peak corresponds to the fundamental frequency (Heman-Ackah, Michael, & Goding, 2002)."}
{"note": "Voice production generates a complex waveform in which the most prominent peak of a spectral graph represents the fundamental frequency. The subsequent, less prominent, peaks represent harmonic frequencies, which are typically multiples of the fundamental frequency (Heman-Ackah et al., 2003). When converted to the cepstrum, these harmonic peaks are referred to as \u201crahmonic\u201d peaks, which occur at multiples of the fundamental period (Heman-Ackah et al., 2003; Noll, 1964, 1967). A linear regression line, relating quefrency to cepstral magnitude, is then calculated to represent the average sound energy across the graphic representation and is used to calculate the cepstral peak prominence (Heman-Ackah et al., 2003). This normalizes amplitude variability between voices, accounting for the volume at which the sample was captured by measuring the magnitude of the peak against the overall level at which it was produced (Heman-Ackah et al., 2003). The cepstral peak prominence is then calculated as the amplitude difference between the cepstral peak and the corresponding value on the regression line (Heman-Ackah et al., 2002). Interpreting this measure is uncomplicated: the higher the prominence, the more periodic the voice sample being analyzed. A highly periodic voice will therefore yield a high CPP, indicating a highly organized harmonic structure (Heman-Ackah et al., 2003, 2002). Conversely, voices with a significant noise component have poorly organized harmonic structures, resulting in a flat-appearing cepstrum, and the height of the peak above the linear regression line will be correspondingly lower (Heman-Ackah et al., 2002)."}
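The pipeline described across the preceding paragraphs (log-magnitude spectrum, second transform to the cepstrum, regression line, peak-minus-line) can be sketched end to end. This is a didactic toy, not Hillenbrand's implementation: a naive DFT keeps it dependency-free (real code uses an FFT with windowing), and the signal parameters are invented for illustration.

```python
import math
import random

def dft_magnitude(x):
    """Naive magnitude DFT, returning the first half of the bins."""
    n = len(x)
    mags = []
    for k in range(n // 2):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

def cpp_db(signal):
    # Log-magnitude spectrum (dB), then a second transform -> cepstrum.
    spectrum = [20 * math.log10(m + 1e-12) for m in dft_magnitude(signal)]
    cepstrum = dft_magnitude(spectrum)
    quefs = list(range(1, len(cepstrum)))  # skip the 0-quefrency bin
    mags = cepstrum[1:]
    # Linear regression of cepstral magnitude on quefrency.
    n = len(quefs)
    mean_q, mean_m = sum(quefs) / n, sum(mags) / n
    slope = (sum(q * m for q, m in zip(quefs, mags)) - n * mean_q * mean_m) / (
        sum(q * q for q in quefs) - n * mean_q * mean_q)
    intercept = mean_m - slope * mean_q
    peak = max(range(n), key=lambda i: mags[i])
    # CPP: height of the cepstral peak above the regression line.
    return mags[peak] - (slope * quefs[peak] + intercept)

random.seed(0)
# Harmonic-rich periodic signal (10 harmonics of an 8-cycle fundamental)
periodic = [sum(math.sin(2 * math.pi * 8 * k * t / 256) for k in range(1, 11))
            for t in range(256)]
noise = [random.uniform(-1, 1) for _ in range(256)]
print(cpp_db(periodic) > cpp_db(noise))  # periodic voices show higher CPP
```

The organized harmonic structure of the periodic signal produces a sharp rahmonic peak well above the regression line, whereas the noise cepstrum is comparatively flat, exactly the contrast the measure exploits.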
{"note": "Cepstral peak prominence has been shown to correlate reliably with perceptual measures of dysphonia (Awan, Roy, & Dromey, 2009; Heman-Ackah et al., 2003, 2002, 2014). CPP holds some of the benefits required of an optimal measure of voice in that it is calculated independently of the fundamental period, the detection of which becomes increasingly unreliable in severely dysphonic voices (Heman-Ackah et al., 2003, 2014; Noll, 1967). Furthermore, it is unaffected by recording technique and volume (Heman-Ackah et al., 2014): it is performed without requiring detection of periodicity, and by using a linear regression approach the value is independent of the loudness at which the sample is captured (Heman-Ackah et al., 2003, 2002). Similar to HNR, CPP evaluates the harmonic sound in the vocal signal and compares it to the noise component (Heman-Ackah et al., 2003, 2002). This work by Hillenbrand has produced a reproducible measure for the detection of dysphonic voices, and a normative range of values has since been established (Awan et al., 2009; Heman-Ackah et al., 2003; Hillenbrand et al., 1994; Hillenbrand & Houde, 1996); for normal voices, CPP will be greater than 4.0 (Heman-Ackah et al., 2014). In summary, CPP has been established as an acoustic measure that reliably predicts dysphonia severity (Heman-Ackah et al., 2003, 2002, 2014) by quantifying the harmonic organization of a voice sample in a way that does not rely on identification of the fundamental period, making it dependable even for increasingly aperiodic voices (Heman-Ackah et al., 2003, 2014; Hillenbrand et al., 1994; Hillenbrand & Houde, 1996)."}
{"note": "Smartphone Voice Analysis"}
{"note": "Acoustic voice analysis has long been seen as an important part of a thorough voice assessment. Objective measures can be collected as part of a well-rounded approach that is complemented by rigorous auditory-perceptual evaluation (Eadie & Doyle, 2005). However, barriers exist to collecting adequate voice samples for analysis, including the need for sound-controlled environments, costly setups, analysis software, and a trained observer to collect the sample (Kisenwether & Sataloff, 2015; Parsa & Jamieson, 2001; Parsa, Jamieson, & Pretty, 2001; Zraick et al., 2011). With the substantial increase in the widespread availability of smartphones, new avenues for data collection and mobile analysis are available that could alleviate some of these concerns (Burdette, Herchline, & Oehler, 2008; Manfredi et al., 2017; Park & Chen, 2007; A. Smith, 2012; Steinhubl et al., 2015). Over the past several years, research has supported the validity of smartphone recordings for assessment and of the analyses that can be completed on a smartphone (Uloza et al., 2015)."}
{"note": "Much of the research has been directed toward evaluating the quality of smartphone voice recordings for the purposes of micro-acoustic analysis (Guidi et al., 2015; Lin et al., 2012; Uloza et al., 2015). Lin et al. evaluated the iPhone for recording voice samples for acoustic analysis, finding good reliability for some acoustic measures when compared with a traditional setup (Lin et al., 2012). Uloza et al. specifically evaluated the correlation of results recorded with a high-quality microphone (AKG Perception 220) versus a smartphone microphone (Samsung Galaxy Note 3); their data showed high correlation between the analysis results from the two groups, indicating the reliability of the smartphone-captured samples (Uloza et al., 2015). The findings of these two studies show that such recordings are now of high enough quality that valid acoustic analysis can be performed, and that using these analyses could improve early diagnosis of laryngeal diseases (Uloza et al., 2015). This alone may reduce some of the barriers to acoustic analysis by providing a low-cost option for recording the voice sample and analyzing the data through existing software programs."}
{"note": "Beyond voice capture and offline analysis, several studies have assessed the ability to perform on-device analysis (Fujimura et al., 2019; Mat Baki et al., 2013, 2015; Siau, Goswamy, Jones, & Khwaja, 2017). With significant increases in computing power, smartphone devices are now able to carry out analysis without first exporting the voice samples (Manfredi et al., 2017); that is, they now have the computational ability to perform the complex mathematics without external software programs. The capacity for smartphone voice sample capture and analysis could improve access to ongoing acoustic voice analysis by minimizing the steps required (Manfredi et al., 2017). This advantage is of particular importance for patients who are distant from their otolaryngology health care provider or who come from resource-constrained areas. By decreasing barriers to access, repeated measures (i.e., multiple ongoing recordings and analyses) become a notable benefit, leading to more data points for evaluation in clinical and research settings (Manfredi et al., 2017). Furthermore, Manfredi et al. tested the limits of currently available smartphones and their respective microphones (low cost vs. high cost) and found limited differences between the two levels of devices (Manfredi et al., 2017)."}
{"note": "Smartphone capability has increased over the past decade and is now frequently an area of clinical interest (Burdette et al., 2008; Steinhubl et al., 2015). Within voice science there has been similar interest, and much of the groundwork for validation has been completed (Fujimura et al., 2019; Lin et al., 2012; Manfredi et al., 2017; Mat Baki et al., 2015; Uloza et al., 2015). The introduction of new validated tools may lead to wider adoption and decrease the aforementioned barriers to acoustic voice analysis."}
{"note": "Materials and Methods"}
{"note": "This study was carried out in two phases. First, the microphones and environments were evaluated using pre-recorded samples. Second, the analysis capabilities of the smartphone were evaluated using prospectively collected samples. The study procedures will be discussed in detail in the following sections."}
{"note": "Power Calculation"}
{"note": "A power calculation for analysis was carried out a priori (Appendix B). For a comparison of means in a two-level, between-groups, independent-variable analysis, 17 persons per group were required. A total of 51 samples were collected to meet this requirement."}
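As a rough illustration of the sample-size arithmetic behind such an a priori calculation (the effect size actually assumed in Appendix B is not restated here, so a large standardized effect d = 1.0 with two-tailed alpha = .05 and power = .80 is used purely for illustration; the function name is invented):

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for an independent
    two-group comparison of means. The exact t-distribution correction
    typically adds roughly one participant per group."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value, two-tailed
    z_beta = z.inv_cdf(power)           # quantile for desired power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

print(n_per_group(1.0))  # 16 by the normal approximation; ~17 with the t correction
```

Under these illustrative assumptions the approximation lands at 16-17 per group, consistent with the 17 per group reported; a smaller assumed effect size would require substantially more participants (e.g., d = 0.5 yields roughly 63 per group).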
{"note": "Volunteer Recruitment"}
{"note": "The study design and rationale were described in detail to volunteers at the Otolaryngology \u2013 Head and Neck Surgery Clinic at London Health Sciences Centre. A 5-page study-information package was provided to interested persons to review independently prior to agreeing to participate (Appendix A). Informed consent was collected from all participants who volunteered to provide a voice sample. The demographic data of included participants can be found in Table 1."}
{"note": "Eligibility"}
{"note": "Inclusion criteria included English-speaking persons over the age of 18."}
{"note": "Exclusion criteria included pediatric patients and non-English speaking persons."}
{"note": "Ethics Approval"}
{"note": "This study was approved by the Health Sciences Research Ethics Board at Western University on February 28, 2019. The letter of information and consent can be reviewed in Appendix A."}
{"note": "Sources of Funding"}
{"note": "This study was supported by the Peter Cheski Innovative Resident Research Fund as well as through internal funding from the Department of Otolaryngology \u2013 Head and Neck Surgery."}
{"note": "Application Development"}
{"note": "The VOICES application was developed specifically for this study by Dr. Benjamin van der Woerd, Min Wu, and Dr. Vijay Parsa. It was developed using Xcode 10.0+ running on macOS 10.13.6 High Sierra or higher, with Swift 5.0 as the programming language. Analysis algorithm development was completed in MATLAB, and validation of the algorithm relied on Praat to generate acoustic analysis values for comparison."}
{"note": "Recording Hardware Specifications and Settings"}
{"note": "Two microphones were used for the collection of the audio samples: 1) the iPhone 7 Plus internal microphone and 2) a Blue Yeti Ultimate USB microphone. The iPhone 7 Plus internal microphone is an omnidirectional transducer, quantized at 16 bits per sample with a sampling rate of 44,100 Hz; samples were recorded in mono. The Blue Yeti Ultimate USB microphone records at a sampling rate of 48,000 Hz, also quantized at 16 bits per sample. The polar pattern for audio capture on the Yeti microphone was set to cardioid for each sample, which consisted of a sustained vowel and a continuous speech segment. Each sample was transferred to the secure S-Drive at London Health Sciences Centre for storage."}
{"note": "Pre-recorded Voice Samples"}
{"note": "To evaluate the impact of the microphone and the recording environment on the calculation of microacoustic measures, pre-recorded, high-quality samples were used. For the vowel analysis, twenty (n=20) sustained vowel (/a/) samples were included. This set comprised normal (n=10) and dysphonic (n=10) voices; the quality of the dysphonic samples ranged from mild to severe. The sustained vowel sample set included male and female voices. Perceptual classification of the continuous speech samples was based on informal evaluation by the experimenter. The sentence samples comprised the second sentence of The Rainbow Passage: \u201cThe rainbow is a division of white light into many beautiful colors\u201d (Fairbanks, 1960). This set included both male (n=12) and female (n=12) voices for analysis. Similar to the vowel samples, the sentence samples were judged to range in perceptual quality from normal to severely pathologic."}
{"note": "Mannequin Recording Process"}
{"note": "A standardized process was used to capture samples of the pre-recorded voice recordings under varying conditions: Blue Yeti Microphone Soundproof Room (Yeti AB), Blue Yeti Microphone Non-Soundproof Room (Yeti NAB), iPhone Microphone Soundproof Room (iPhone AB), and iPhone Microphone Non-Soundproof Room (iPhone NAB). A calibrated mannequin speaker, which is standard practice in telecommunications and audiology research, was used to present the pre-recorded vocal signals, as seen in Figure 1. This was completed at a fixed distance of 15 inches (38.1 cm). The signal volume was calibrated to 69 dBA using a sound level meter. This resulted in four identical sets of recordings: Yeti AB, Yeti NAB, iPhone AB, and iPhone NAB."}
{"note": "Recording Environment Impact"}
{"note": "In the same phase, we evaluated the recording environment\u2019s impact on calculation of the same seven variables, using a repeated measures ANOVA. A statistically significant difference was identified for the calculation of shimmer, HNR-V, CPP-V, HNR-S, and CPP-S (see Table 2). In this data set, the calculations of F0 and jitter did not differ statistically between the soundproof booth and the quiet office recording environments (see Table 2)."}
{"note": "Mean, median, standard deviation, and standard error of the mean for each recording condition are reported in Tables 3-9. These indicate limited differences with respect to the calculation of F0, jitter, CPP-V, HNR-S, and CPP-S. A larger impact was identified for calculation of shimmer and HNR-V."}
{"note": "Prospective Recordings - Sustained Vowels"}
{"note": "The second phase of the present study evaluated prospectively collected voice samples that were analyzed using two methods: 1) the proprietary algorithm and 2) Praat. Each prospectively recorded voice sample yielded a sustained vowel and a continuous speech sample for analysis. This resulted in two paired data sets (n=51), one analyzed by each algorithm. These data were compared using paired-sample t-tests in SPSS."}
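For readers unfamiliar with the paired design, the paired-sample t statistic reduces to a one-sample test on the per-sample differences between the two analysis methods. The sketch below illustrates the arithmetic only (the study used SPSS; the function name and the numbers are made up, not the study's data).

```python
import math

def paired_t(xs, ys):
    """Return (t statistic, degrees of freedom) for paired samples:
    a one-sample t-test on the element-wise differences."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    t = mean_d / math.sqrt(var_d / n)
    return t, n - 1

# Hypothetical paired measurements from two analysis methods:
method_a = [19.1, 18.2, 20.5, 17.9, 21.0]
method_b = [16.7, 16.1, 18.0, 15.2, 18.8]
t, df = paired_t(method_a, method_b)
print(round(t, 2), df)
```

The resulting t is then referred to a t distribution with n-1 degrees of freedom to obtain the p-values reported in the tables.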
{"note": "Analysis revealed a statistically significant difference in the calculation of F0, Shimmer, HNR-V, and CPP-V. The correlations were strong between each of these measures (see Tables 10, 12-14). The differences between the calculated values for jitter were not found to be statistically significant. The correlation for jitter was moderate (see Table 11). In order to visually evaluate the present data, scatter plots comparing the two groups with respect to these five measures are graphically represented in Figures 9-13."}
{"note": "The mean F0 calculated using Praat was 142.43 Hz, compared to 153.25 Hz when calculated with the proprietary application, and the correlation between the calculations was 0.879 (p < .001) (see Table 10). This indicates reliable identification of the fundamental period, which is critical for the calculation of both jitter and shimmer, as previously discussed. Interestingly, mean jitter calculated in Praat was 0.54%, compared to 0.43% when calculated with the proprietary application; here, the correlation between the calculations was only moderate, at 0.355 (p = .011) (see Table 11). Next, the correlation for shimmer was strong, at 0.829 (p < .001) (see Table 12). The mean shimmer calculated in Praat was 0.40 dB, compared to 0.52 dB when calculated with the proprietary application. Once again, there was a strong correlation between the two methods of measurement for HNR-V (0.964, p < .001) and CPP-V (0.881, p < .001). The mean HNR-V calculated in Praat was 19.11 dB, compared to 16.71 dB with the proprietary application. Finally, the mean CPP-V calculated in Praat was 18.99 dB, compared to 23.39 dB with the proprietary application."}
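The correlation coefficients quoted above are Pearson r values between the paired platform outputs; the computation is simple enough to show in full. The values below are invented stand-ins, not the study's data.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two paired series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired F0 estimates from the two platforms:
platform_a_f0 = [110.0, 142.0, 155.0, 180.0, 225.0]
platform_b_f0 = [118.0, 150.0, 160.0, 195.0, 240.0]
print(round(pearson_r(platform_a_f0, platform_b_f0), 3))
```

Note that r measures agreement in ordering and linear trend, not absolute agreement: two methods can correlate strongly while differing by a systematic offset, which is exactly the pattern reported for several measures here.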
{"note": "Prospective Recordings - Continuous Speech"}
{"note": "In addition to the sustained vowel analysis above, the same two paired data sets of 51 continuous speech samples were analyzed using Praat and the proprietary application. These data were evaluated using paired-sample t-tests in SPSS. Because of the nature of continuous speech and the increased complexity of the signal, only HNR-S and CPP-S were calculated for these samples."}
{"note": "In the comparison of means, the differences in CPP-S and HNR-S were found to be statistically significant (p < .001), as seen in Tables 15 and 16. In this analysis, the differences between the values were substantially larger. The mean CPP-S calculated in Praat was 9.197 dB, compared to 16.985 dB when calculated with the proprietary application (see Table 16). Similarly, for HNR-S, the mean calculated in Praat was 12.440 dB, compared to 4.132 dB when calculated with the proprietary application (see Table 15). These differences were reflected in the correlations between the values, which were strong for CPP-S (0.632, p < .001) and moderate for HNR-S (0.299, p = .033) (see Tables 15-16). Scatter plots comparing the two groups with respect to these two measures are presented in Figures 14-15."}
{"note": "Discussion and Conclusion"}
{"note": "Voice analysis is an important part of a thorough voice examination. The comprehensive evaluation can include auditory-perceptual analysis, aerodynamic studies, and acoustic voice analysis, amongst others. Acoustic analysis can provide objective measurements for diagnosis and ongoing monitoring, and such data can complement auditory-perceptual assessments or prompt further investigations. The existing methods of voice sample collection and analysis are often resource intensive, creating barriers to access. In the current work, we conducted a multi-phase study evaluating smartphones as an avenue for voice sample acquisition and on-device acoustic voice analysis. The sections that follow address the following areas of interest. First, the assessment of the microphones and recording environments completed in phase one is detailed. Second, we discuss the results of the prospectively collected samples and the validity of the proprietary algorithm. Next, limitations of the study are addressed. Finally, clinical implications of this study and future directions are each discussed separately."}
{"note": "Mannequin Study"}
{"note": "In phase one of this study, microphones for voice sample collection and the influence of recording environment were evaluated. Pre-recorded samples were captured under four different conditions (i.e., Yeti microphone in a soundproof booth, iPhone microphone in a soundproof booth, Yeti microphone in a quiet office, and iPhone microphone in a quiet office) to reflect the two microphones and two recording environments evaluated. The resultant data revealed that statistically significant impacts of both the microphone and the environment on the acoustic measurements do exist, albeit limited ones. The differences in values produced were typically small in magnitude and would not represent a clinically important difference in most cases. For example, the difference between the mean F0 in the gold standard scenario (Yeti microphone in a soundproof room) and the least controlled setting (iPhone microphone in a quiet office) was 0.9 Hz. Normative ranges for F0 are approximately 120 Hz for men and 210 Hz for women, where a difference of less than one hertz would not affect clinical decision making. Similarly small differences were found in the calculation of jitter and CPP-V, such that clinical interpretation would not be affected. By introducing a lower-quality microphone, many, but not all, of the acoustic measures evaluated were found to be significantly different. Similarly, recording in a less controlled environment had identifiable impacts on the calculation of microacoustic measures: the calculations of F0, shimmer, CPP-V, and CPP-S all showed statistically significant differences when comparing the recording environments. The differences identified were consistent with expectations but small in overall effect. The magnitude of difference between the soundproof room and the quiet office was modest for the calculation of F0, jitter, CPP-V, HNR-S, and CPP-S. Despite the statistically significant differences observed, these impacts do not appear to represent a clinically important difference in the ability to measure these selected acoustic measures."}
{"note": "A repeated measures ANOVA indicates whether statistically significant differences are consistently present between the variables being assessed; however, it does not indicate the direction of the difference. That is, whether one variable was higher or lower than the other cannot be determined from the ANOVA alone. To understand the identified results, the mean values were graphed in a scatter plot to evaluate which microphone had an effect, what the impact was, and the degree of difference between the microphones. Based on evaluation of the means of each group, it was clear that the effect of the microphone was limited and largely clinically acceptable (Tables 2-8). However, the calculation of shimmer and HNR-V was more strongly affected by both the microphone and the recording environment; therefore, interpretation of these results should be undertaken with caution in a clinical setting. Though formal minimal clinically important differences are not known for either measurement in Praat, the normative values for each are informative in this context. The normative mean value for HNR is 11.9 dB (CI 7.0-17.0) (Yumoto et al., 1982), and a shimmer value greater than 0.350 dB is typically considered a pathologic threshold (Praat Documentation, 2003). Given the greater mean differences observed in each of these values, the likelihood of a false result is elevated. For these reasons, these results should be contextualized within the conditions under which the voice sample was collected."}
{"note": "Proprietary Application Analysis"}
{"note": "Phase two of this study evaluated the proprietary application against the current standard used for acoustic voice analysis (Praat) (Boersma, 2002). Prospective voice samples were collected from volunteers. From these, a sustained vowel and continuous speech sample were collected and relevant components of these samples were extracted for analysis."}
{"note": "For the vowel analysis, the proprietary algorithm performed well when compared to the data generated using Praat. Correlations of the measures across the two platforms ranged from moderate to strong (0.355 to 0.964). Jitter was the least correlated measure based on the current results. The jitter scatter plot (Figure 10) shows 6 major outliers. Evaluation of the raw data reveals that the selected fundamental period differed substantially between the two analyses. For example, one non-dysphonic female sample had an F0 of 112.92 Hz as measured by Praat but 223.43 Hz as measured by the proprietary algorithm. Closer examination of the pitch track reveals that Praat selected this value incorrectly (see Figure 16). This explains the variability in jitter, which depends on accurate calculation of the fundamental period. Jitter is a measure of cycle-to-cycle period variability; it compares the cycle-to-cycle differences to the observed fundamental period. If the fundamental period is incorrectly identified, its utility as the comparator is negated: the subsequent cycles will appear highly variable, and the percentage differences will rise inappropriately, producing spuriously high measured jitter."}
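{"note": "To make this failure mode concrete, the cycle-to-cycle computation can be sketched as follows. This is a minimal illustration of local jitter, not the proprietary algorithm or Praat's implementation, and the period values are hypothetical."}

```python
import statistics

def local_jitter_percent(periods):
    """Local jitter: the mean absolute difference between consecutive
    fundamental periods, expressed as a percentage of the mean period."""
    diffs = [abs(b - a) for a, b in zip(periods, periods[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / statistics.mean(periods)

# Stable phonation: periods cluster tightly around 8.9 ms (~112 Hz),
# so jitter stays well under 1%.
stable = [0.00890, 0.00892, 0.00889, 0.00891, 0.00890]
print(round(local_jitter_percent(stable), 3))   # → 0.225

# If the pitch tracker intermittently halves the period (an octave error,
# reading ~224 Hz instead of ~112 Hz), the cycle-to-cycle differences --
# and therefore the measured jitter -- become inflated far beyond any
# physiologic value, as seen in the outlier samples.
octave_error = [0.00890, 0.00445, 0.00891, 0.00446, 0.00890]
print(round(local_jitter_percent(octave_error), 1))
```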
{"note": "When reviewing the sustained vowel analysis, these data provided validation that the proprietary algorithm developed for this study can reliably measure our chosen acoustic variables. Furthermore, the fundamental frequency was correctly identified in some cases that appeared to have been miscalculated by Praat, leading to some of the discrepancies in the comparison of means. Importantly, this impacts not only the calculation of the mean F0 values that were compared but has downstream impacts on the calculation of both jitter and shimmer. Furthermore, this is an indication of the robustness of the algorithm, whereby the accuracy was maintained in noisier samples, even in the context of miscalculations by the gold standard (Praat)."}
{"note": "The sustained vowel analysis yielded highly valid results based on the data in this study. However, analysis of continuous speech is a significantly more challenging task. Running speech signals are vastly more complex in their range of frequency and amplitude, as well as in the temporal domain (e.g., changes in speech rate, pauses, voice breaks between segments, etc.). The comparison of the HNR-S and CPP-S measures for the continuous speech samples reflects these challenges. HNR-S, as calculated by the proprietary algorithm, had a moderate correlation (0.299) with the same measure calculated by Praat. As discussed above, this acoustic measure is calculated by taking the smoothed waveform to represent the harmonic signal and subtracting it from the complex overall signal; the remaining waveform reflects the noise. Because the continuous speech sample was not concatenated, there are breaks that contain ambient (background) noise. Particularly for samples collected outside a soundproof setting, this background noise affects the calculation of microacoustic measures: it adds noise to the signal, altering the residual waveform when the harmonic signal is removed in the calculation of HNR. Furthermore, the linear regression line used in the calculation of CPP, which is based on the amplitude of the signal, will also be affected by the presence of background noise. Ambient noise between voice segments was not accounted for in the proprietary analysis, which likely explains the significantly different values; in this instance, this may reflect an altered estimation of the presence of dysphonia."}
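{"note": "The sensitivity of a subtraction-based HNR to background noise can be sketched numerically. The toy example below (a synthetic tone with hypothetical noise levels; not the proprietary algorithm or Praat's autocorrelation-based method) estimates the harmonic component by averaging the waveform across cycles and treats the residual, after subtraction, as noise."}

```python
import math
import random

def hnr_db(signal, period_samples):
    """Illustrative HNR estimate in the spirit of the subtraction approach:
    average the waveform across cycles to form the harmonic component,
    subtract it from the signal, and treat the residual as noise."""
    n_cycles = len(signal) // period_samples
    # Harmonic template: samplewise mean across all cycles
    template = [
        sum(signal[c * period_samples + i] for c in range(n_cycles)) / n_cycles
        for i in range(period_samples)
    ]
    harmonic = template * n_cycles
    residual = [s - h for s, h in zip(signal[:len(harmonic)], harmonic)]
    e_harm = sum(h * h for h in harmonic)
    e_noise = sum(r * r for r in residual)
    return 10.0 * math.log10(e_harm / e_noise)

random.seed(0)
period = 100  # samples per cycle
tone = [math.sin(2 * math.pi * i / period) for i in range(period * 50)]

# More ambient noise in the signal leaves more energy in the residual
# after the harmonic component is removed, lowering the measured HNR.
quiet = [s + random.gauss(0.0, 0.05) for s in tone]
noisy = [s + random.gauss(0.0, 0.30) for s in tone]
print(round(hnr_db(quiet, period), 1), round(hnr_db(noisy, period), 1))
```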
{"note": "Despite this, in comparison to the HNR evaluation data, CPP performed better, demonstrating a strong correlation of 0.632. Once again, however, there was a significant discrepancy between the means of the comparator groups. This suggests that although the measure was reliable, its values differed from those provided by Praat. Thus, the method of analysis remains an important consideration for acoustic measures of voice."}
{"note": "These collective results bring forth an important consideration when evaluating measures of acoustic voice analysis. More specifically, these data verify the importance of setting normative values for the analysis conditions (microphone, algorithm, and recording environment) (Watts, Awan, & Maryn, 2017). This is always an important consideration when comparing acoustic measures collected through different methods, whose algorithmic differences can lead to different results (Watts et al., 2017). While differences across analysis methods for the same measure are not unexpected, these findings stress the continued importance of establishing a normative database for both normal and disordered voices with any given system. In further support of this suggestion, Watts et al. (2017) compared two validated methods of calculating CPP and showed that both produced reliable, though different, measurements. Most importantly, although the two methods are valid, they cannot be directly compared (Watts et al., 2017). With respect to this study, the measurement of CPP in the continuous speech samples correlated strongly with Praat."}
{"note": "In summary, the present project has validated a new proprietary algorithm for acoustic voice analysis of sustained vowels; in some cases, outperforming the gold standard method. Furthermore, it has resulted in reliable measurements of CPP-S. This study has highlighted the importance of establishing normative values for each condition of acoustic voice analysis, to be used as context for interpreting clinical results. Finally, the results of the sentence analysis confirmed the findings that reliable measures of the same variable cannot necessarily be compared across different measurement methods."}
{"note": "Study Limitations"}
{"note": "It is important to contextualize the results of this study, as well as to recognize the limitations of acoustic analysis. As previously stated, acoustic measurements do not always correlate with auditory-perceptual assessments. When introducing new methods of measurement, it is therefore important to correlate the findings not only with previous acoustic methods, but also directly with auditory-perceptual assessments."}
{"note": "Furthermore, the proprietary algorithm used was developed for the unique purposes of this study. Although it performed well in the sustained vowel analysis component of the work, the correlations were less convincing in the analysis of continuous speech. Given these measurement discrepancies, the algorithmic approach will inevitably change moving forward, with the aim of improving the sensitivity and specificity of the measurements. Therefore, any changes to the algorithm may yield different results if this experiment were repeated. However, if measures were reanalyzed within-system (i.e., with a fixed algorithm), we would expect uniform results."}
{"note": "Additionally, the majority of prospectively collected samples represented non-dysphonic voices (n=42). In theory, these are easier to analyze than moderately or severely disordered voices. Of major concern here is how the fundamental period is extracted, which in turn carries over to the other microacoustic measures. As such, this may to some extent represent a gap in our validation and lends support to efforts to improve the algorithm in future studies."}
{"note": "With respect to the study design, there were three recording subgroups for the assessment of the proprietary algorithm (Yeti AB, iPhone AB, iPhone NAB). The design was powered to allow for subgroup analyses of the individual groups, but these analyses were not completed. All participant samples (n=51) were combined and analyzed under both algorithms before the results were compared with a paired-sample t-test. Though the correlation results were excellent when comparing the groups at this level, subgroup analyses may have revealed discrepancies that were not identified in the larger group analysis."}
{"note": "Finally, the comparison of the proprietary analysis in this study was completed with correlational statistics. To assess whether a new method of measurement is adequate for the purposes of replacing an old one, a limits of agreement analysis (Bland-Altman plot) should be performed (Bland & Altman, 1999). When measuring the same feature or factor in two different ways, there is bound to be a high level of correlation. Limits of agreement analysis is likely to reveal that two different methods do not perfectly agree, even in the setting of high correlation coefficients. For this reason, such analysis would appear to be important for determining whether the different measures can be used equally without causing errors in clinical judgment."}
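{"note": "The limits of agreement analysis described above can be sketched as follows. The paired measurements are hypothetical, illustrating how two strongly correlated methods can still disagree systematically in a way a correlation coefficient alone would not expose."}

```python
import statistics

def limits_of_agreement(method_a, method_b):
    """Bland-Altman summary: the bias (mean of the paired differences) and
    the 95% limits of agreement (bias +/- 1.96 SD of the differences)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    spread = 1.96 * statistics.stdev(diffs)
    return bias, bias - spread, bias + spread

# Hypothetical CPP values (dB) from two analysis systems: the readings move
# together (high correlation), yet one runs systematically ~1 dB higher.
system_a = [12.1, 13.4, 11.8, 14.2, 12.9, 13.7]
system_b = [13.0, 14.5, 12.9, 15.1, 13.8, 14.9]
bias, lower, upper = limits_of_agreement(system_a, system_b)
print(round(bias, 2), round(lower, 2), round(upper, 2))
```

A difference plot built from these three numbers shows at a glance whether the disagreement between methods stays within clinically tolerable bounds, which is precisely what a correlation coefficient cannot show.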
{"note": "Clinical Implications"}
{"note": "This study was designed to reduce barriers to access for acoustic voice analysis. It was carried out in two phases, each structured to address specific barriers. First, we evaluated the utility of smartphone microphones for acoustic voice analysis, systematically confirming the results of previous studies (Lin et al., 2012; Uloza et al., 2015). In the same phase, we also evaluated the effects of a non-soundproof environment on acoustic analysis measurements. With limited effect from each of these variables, the current results indicate that voice samples can acceptably be captured using smartphones in quiet, non-soundproof environments for the purposes of acoustic analysis."}
{"note": "Statistically significant differences were identified on repeated measures ANOVAs, which look for consistent differences between data sets. This statistical test does not indicate the degree of impact the variable being assessed had on the outcome, but rather that the effect was consistent and attributable to the variable rather than to random chance. With respect to clinical decision making, the degree of impact is considerably more important than the mere presence of an impact. Because no minimal clinically important differences (MCID) have been defined for the voice measures included in the present study, the normative ranges and pathologic limits were used as thresholds for clinical significance. Accordingly, we examined whether an identified mean difference represented a large or small amount of change relative to the normative range."}
{"note": "This brings up an important point, namely, that the normative values for any set of recording conditions need to be established individually. This includes consideration of a different algorithm, microphone, or recording specifications, such as mouth-to-microphone distance, amongst others. Thus, these normative values are specific to a particular set of conditions, and new normative values would need to be established for any new procedure prior to further comparisons. This is particularly important when comparing results from one analysis environment to another, as evidenced by Watts et al. (2017), who showed that previously validated algorithms measuring cepstral peak prominence produced different absolute values. With this in mind, the normative values or ranges may not be equivalent when evaluating new conditions and algorithms, as was done in the present validation of the proprietary analysis algorithm."}
{"note": "Based on our results, the data suggest that voice samples can be collected from patients and research participants in a wider variety of settings. More specifically, this lowers the resource requirements and enables sample collection in remote locations where a soundproof setting is not available. Of particular importance, these measures may provide reliable and valid indices of voice characteristics in both normal and dysphonic speakers. One setting where this can be directly utilized is in otolaryngology practices with a widely distributed patient population. In these settings, patients often must travel significant distances for (re)assessment, limiting the availability of resource-intensive testing practices. Remote collection of voice samples in a reliable manner, under standardized collection protocols, can limit the travel required for acoustic voice analysis. Additionally, patients may be able to collect a greater number of samples for assessment, and these can be collected in quiet spaces, including a home environment. This clearly opens the possibility of greater within-subject analyses moving forward using the present smartphone application."}
{"note": "In the second phase of this project, a proprietary algorithm was compared to a well-established and widely used analysis procedure (Praat). This revealed strongly correlated assessments for sustained vowel analysis and moderate-to-strongly correlated results for continuous speech analysis. With this validation of a mobile acoustic voice analysis algorithm, this study indicates that both remote collection and remote analysis are feasible."}
{"note": "Introduction of a mobile acoustic voice analysis application will provide patients, practitioners, and researchers with a highly reliable, low-cost method of performing acoustic voice analysis. This extends the previously mentioned benefits by further increasing accessibility to a useful format for objectively assessing voice. Furthermore, within-subject assessments could serve as a baseline against which subsequent samples are evaluated while a patient receives treatment. This may provide an opportunity for direct biofeedback as a patient receives voice therapy. It could also potentially indicate changes to the voice following surgical management of benign or malignant laryngeal tumors, whereby a change could prompt a direct reassessment by an otolaryngologist. During the development of a patient tool in which voice samples would be collected and stored for analysis, significant consideration must be given to the confidentiality of personal health information. Standards exist governing the storage of personal health information (PHIPA, 2004), and these would need to be strictly adhered to for the safety of end users."}
{"note": "Directions for Future Research"}
{"note": "The current study was the first introduction of a new proprietary algorithm to perform acoustic voice analysis on a mobile platform. It performed well, but there is room for improvement and refinement. Most specifically, improvements in the ability of the application to analyze the complex signals of continuous speech are required. Following these improvements, validation of the algorithm for use in continuous speech will be necessary; ideally, experimental replication within a clinical environment will further validate the process."}
{"note": "Additionally, it is important to mention that this study evaluated voice samples gathered in non-soundproof environments for voice analysis. A future study should seek to systematically evaluate environmental factors to define the limits of ambient noise on reliable acoustic assessments. Efforts of this type hold considerable value in confirming that voice measures can be obtained and analyzed accurately and reliably."}
{"note": "Next, one of the benefits of this format of acoustic voice analysis pertains to within-subject analyses; a future study should be designed to investigate this further as a means of tracking voice over time. As well, this algorithm could be evaluated for specificity in particular voice-disordered populations. Ultimately, the ability of the smartphone system to gather data over time provides the capacity to generate a time series of voice measures that can be compared over both short- and longer-term periods. This would benefit both those awaiting treatment and those receiving or recovering from treatment."}
{"note": "Conclusion"}
{"note": "Smartphone-based voice analysis has recently become a real possibility with improvements in mobile technology (Fujimura et al., 2019; Lin et al., 2012; Mat Baki et al., 2013, 2015; Uloza et al., 2015). The utility of these systems was hypothesized by Manfredi et al. (2017) but has not yet been fully realized. No widespread adoption of a mobile platform has occurred, but with increased availability and validation of these tools, new benefits are likely to be realized. Continued research and refinement based on new data will serve to optimize both data acquisition and the comprehensive analysis of the samples gathered."}
{"note": "The results of this study confirm previous findings in the literature that smartphone voice recordings are adequate for acoustic voice analysis (Lin et al., 2012; Uloza et al., 2015). The present study found the impact of a smartphone microphone on acoustic voice measures to be limited in magnitude and unlikely to affect clinical interpretation. Similarly, while it is well recognized that a soundproof setting is ideal for controlling ambient noise effects in the acoustic analysis of voice, such a setting is not always feasible. The results of this study indicate that a quiet, non-soundproof setting is adequate for voice sample collection. While care must always be taken to gather samples in a consistent fashion, the ability to provide such data on a within-subject basis serves to mitigate this influence."}
{"note": "Furthermore, the prospectively captured samples were used as part of the validation of the proprietary analysis algorithm. The proprietary algorithm in its current state can reliably analyze sustained vowel samples on par with Praat. The analysis of continuous speech is more complex, with variability across the frequency, amplitude, and temporal (e.g., rate of speech, pauses, voice breaks, etc.) domains. Reliable mobile analysis is possible, based on the present study, but the existing algorithm requires modifications before its reliability matches that of existing options such as Praat."}
{"note": "In conclusion, the results of this study indicate that robust acoustic voice analysis is feasible on current smartphones with samples collected under non-soundproof conditions. These results provide an opportunity to decrease existing obstacles to acoustic voice analysis as part of a routine clinical voice assessment. The objective of using mobile technology to perform acoustic analysis is to increase accessibility and remove barriers to use. The results of this study provide data, based on systematic acquisition methods, that support future use of the current proprietary algorithm. In doing so, the current findings indicate that the present smartphone application, as seen in Figure 17, could be used reliably by patients, healthcare providers, and researchers for acoustic voice analysis."}