repo stringlengths 7 90 | file_url stringlengths 81 315 | file_path stringlengths 4 228 | content stringlengths 0 32.8k | language stringclasses 1
value | license stringclasses 7
values | commit_sha stringlengths 40 40 | retrieved_at stringdate 2026-01-04 14:38:15 2026-01-05 02:33:18 | truncated bool 2
classes |
|---|---|---|---|---|---|---|---|---|
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/adjective_freq.py | features/audio_features/helpers/pyAudioLex/adjective_freq.py | '''
@package: pyAudioLex
@author: Drew Morris
@module: adjective_freq
Frequency of a POS tag is computed by dividing the total number of words
with that tag by the total number of words spoken by the subject in the
recording.
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
def adjective_freq(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
pos = pos_tag(tokens)
adjectives = []
for [token, tag] in pos:
part = map_tag('en-ptb', 'universal', tag)
if part == "ADJ":
adjectives.append(token)
if len(tokens) == 0:
return float(0)
else:
return float(len(adjectives)) / float(len(tokens))
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/wpm.py | features/audio_features/helpers/pyAudioLex/wpm.py | '''
@package: pyAudioLex
@author: Drew Morris
@module: wpm
Used to calculate words per minute.
'''
def wpm(s, tokens, duration):
r = float(duration / 60)
return len(tokens) / r | python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/question_ratio.py | features/audio_features/helpers/pyAudioLex/question_ratio.py | '''
@package: pyAudioLex
@author: Drew Morris
@module: question_ratio
Patients are more likely to forget details in the middle of conversation,
to not understand the questions, or to forget the context of the question.
In those cases, they tend to ask the interviewer to repeat the question or
they get confused, talk to themselves, and ask further questions about the
details. The question words such as 'which,' 'what,' etc. are tagged
automatically in each conversation. The full list of question tags that
were used here is shown in Table 2. The question ratio of a subject is
computed by dividing the total number of question words by the number
of utterances spoken by the subject.
'''
from nltk.tokenize import RegexpTokenizer, word_tokenize
def question_ratio(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
tokenizer = RegexpTokenizer('Who|What|When|Where|Why|How|\?')
qtokens = tokenizer.tokenize(s)
if len(tokens) == 0:
return float(0)
else:
return float(len(qtokens)) / float(len(tokens)) | python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/conjunction_freq.py | features/audio_features/helpers/pyAudioLex/conjunction_freq.py | '''
@package: pyAudioLex
@author: Drew Morris
@module: conjunction_freq
Frequency of a POS tag is computed by dividing the total number of words
with that tag by the total number of words spoken by the subject in the
recording.
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
def conjunction_freq(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
pos = pos_tag(tokens)
conjunctions = []
for [token, tag] in pos:
part = map_tag('en-ptb', 'universal', tag)
if part == "CONJ":
conjunctions.append(token)
if len(tokens) == 0:
return float(0)
else:
return float(len(conjunctions)) / float(len(tokens))
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/standardized_word_entropy.py | features/audio_features/helpers/pyAudioLex/standardized_word_entropy.py | '''
@package: pyAudioLex
@author: Drew Morris
@module: standardized_word_entropy
One of the earliest parts of the brain to be damaged by Alzheimer's
disease is the part of the brain that deals with language ability [5].
We hypothesize that this may cause a degradation in the variety of words
and word combinations that a patient uses. Standardized word entropy,
i.e., word entropy divided by the log of the total word count, is used
to model this phenomenon. Because the aim is to compute the variety of word
choice, stemming is done, and only the stems of the words are considered.
'''
import math
from nltk import FreqDist
from nltk.tokenize import word_tokenize
def entropy(tokens):
freqdist = FreqDist(tokens)
probs = [freqdist.freq(l) for l in freqdist]
return -sum(p * math.log(p, 2) for p in probs)
def standardized_word_entropy(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
if len(tokens) == 0:
return float(0)
else:
if math.log(len(tokens)) == 0:
return float(0)
else:
return entropy(tokens) / math.log(len(tokens))
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/filler_ratio.py | features/audio_features/helpers/pyAudioLex/filler_ratio.py | '''
@package: pyAudioLex
@author: Drew Morris
@module: filler_ratio
Filler sounds such as 'ahm' and 'ehm' are used by people in spoken language
when they think about what to say next. We hypothesize that they may be used
more frequently by the patients because of slow thinking and memory recall
processes. Patients tend to forget what they are talking about and to use
fillers more often than the control subjects. The filler ratio is computed
by dividing the total number of filler words by the total number of
utterances spoken by the subject.
'''
from nltk.tokenize import RegexpTokenizer, word_tokenize
def filler_ratio(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
tokenizer = RegexpTokenizer('uh|ugh|um|like|you know')
qtokens = tokenizer.tokenize(s.lower())
if len(tokens) == 0:
return float(0)
else:
return float(len(qtokens)) / float(len(tokens))
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/audio_.py | features/audio_features/helpers/pyAudioLex/audio_.py | '''
@package: pyAudioLex
@author: Drew Morris
@module: audio
Used to process all audio features based heavily on pyAudioAnalysis
'''
from pyAudioAnalysis import audioBasicIO
from pyAudioAnalysis import audioFeatureExtraction
import wave
import contextlib
# get duration
def get_duration(filepath):
try:
wavefile = wave.open(filepath, 'r')
# see how long the file is
with contextlib.closing(wavefile) as f:
frames = f.getnframes()
rate = f.getframerate()
duration = frames / float(rate)
return duration
except:
return 0.0
# process audio
def audio_featurize(wav):
[Fs, x] = audioBasicIO.readAudioFile(wav)
x = audioBasicIO.stereo2mono(x) # convert to mono
F = audioFeatureExtraction.stFeatureExtraction(x, Fs, 0.050*Fs, 0.025*Fs)[0]
print(F)
results = {}
results['ZCR'] = F[0].tolist()
results['energy'] = F[1].tolist()
results['entropy'] = F[2].tolist()
results['spectral_centroid'] = F[3].tolist()
results['spectral_spread'] = F[4].tolist()
results['spectral_entropy'] = F[5].tolist()
results['spectral_flux'] = F[6].tolist()
results['spectral_rolloff'] = F[7].tolist()
results['MFCC_1']=F[8].tolist()
results['MFCC_2']=F[9].tolist()
results['MFCC_3']=F[10].tolist()
results['MFCC_4']=F[11].tolist()
results['MFCC_5']=F[12].tolist()
results['MFCC_6']=F[13].tolist()
results['MFCC_7']=F[14].tolist()
results['MFCC_8']=F[15].tolist()
results['MFCC_9']=F[16].tolist()
results['MFCC_10']=F[17].tolist()
results['MFCC_11']=F[18].tolist()
results['MFCC_12']=F[19].tolist()
results['MFCC_13']=F[20].tolist()
results['chroma_vector_1']=F[21].tolist()
results['chroma_vector_2']=F[22].tolist()
results['chroma_vector_3']=F[23].tolist()
results['chroma_vector_4']=F[24].tolist()
results['chroma_vector_5']=F[25].tolist()
results['chroma_vector_6']=F[26].tolist()
results['chroma_vector_7']=F[27].tolist()
results['chroma_vector_8']=F[28].tolist()
results['chroma_vector_9']=F[29].tolist()
results['chroma_vector_10']=F[30].tolist()
results['chroma_vector_11']=F[31].tolist()
results['chroma_vector_12']=F[32].tolist()
results['chroma_deviation'] = F[33].tolist()
return results
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/pronoun_freq.py | features/audio_features/helpers/pyAudioLex/pronoun_freq.py | '''
@package: pyAudioLex
@author: Drew Morris
@module: pronoun_freq
Frequency of a POS tag is computed by dividing the total number of words
with that tag by the total number of words spoken by the subject in the
recording.
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
def pronoun_freq(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
pos = pos_tag(tokens)
pronouns = []
for [token, tag] in pos:
part = map_tag('en-ptb', 'universal', tag)
if part == "PRON":
pronouns.append(token)
if len(tokens) == 0:
return float(0)
else:
return float(len(pronouns)) / float(len(tokens))
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/brunets_index.py | features/audio_features/helpers/pyAudioLex/brunets_index.py | '''
@package: pyAudioLex
@author: Drew Morris
@module: brunets_index
Brunet's index (W) quantifies lexical richness [20]. It is
calculated as W = N^V^-0.165, where N is the total text length and V is the
total vocabulary. Lower values of W correspond to richer texts. As with
standardized word entropy, stemming is done on words and only the stems
are considered.
'''
import math
from nltk.tokenize import word_tokenize
def brunets_index(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
N = float(len(tokens))
V = float(len(set(tokens)))
if N == 0 or V == 0:
return float(0)
else:
return math.pow(N, math.pow(V, -0.165))
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/adverb_freq.py | features/audio_features/helpers/pyAudioLex/adverb_freq.py | '''
@package: pyAudioLex
@author: Drew Morris
@module: adverb_freq
Frequency of a POS tag is computed by dividing the total number of words
with that tag by the total number of words spoken by the subject in the
recording.
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
def adverb_freq(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
pos = pos_tag(tokens)
adverbs = []
for [token, tag] in pos:
part = map_tag('en-ptb', 'universal', tag)
if part == "ADV":
adverbs.append(token)
if len(tokens) == 0:
return float(0)
else:
return float(len(adverbs)) / float(len(tokens))
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/pronoun_to_noun_ratio.py | features/audio_features/helpers/pyAudioLex/pronoun_to_noun_ratio.py | '''
@package: pyAudioLex
@author: Drew Morris
@module: pronoun_to_noun_ratio
Pronoun-to-noun ratio is the ratio of the total number of pronouns to
the total number of nouns.
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
def pronoun_to_noun_ratio(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
pos = pos_tag(tokens)
pronouns = []
nouns = []
for [token, tag] in pos:
part = map_tag('en-ptb', 'universal', tag)
if part == "PRON":
pronouns.append(token)
for [token, tag] in pos:
part = map_tag('en-ptb', 'universal', tag)
if part == "NOUN":
nouns.append(token)
if len(nouns) == 0:
return float(0)
else:
return float(len(pronouns)) / float(len(nouns))
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/type_token_ratio.py | features/audio_features/helpers/pyAudioLex/type_token_ratio.py | '''
@package: pyAudioLex
@author: Drew Morris
@module: type_token_ratio
A pattern that we noticed in the recordings of the Alzheimer's
patients is the frequency of repetitions in conversation. Patients tend
to forget what they have said and to repeat it elsewhere in the
conversation. The metric that we used to measure this phenomenon is
type-token ratio [22]. Type-token ratio is defined as the ratio of
the number of unique words to the total number of words. In order to
better assess the repetitions, only the stems of the words are considered
in calculations.
'''
from nltk.tokenize import word_tokenize
from nltk import FreqDist
def type_token_ratio(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
uniques = []
for token, count in FreqDist(tokens).items():
if count == 1:
uniques.append(token)
if len(tokens) == 0:
return float(0)
else:
return float(len(uniques)) / float(len(tokens))
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/tests/run_tests.py | features/audio_features/helpers/pyAudioLex/tests/run_tests.py | import sys
sys.path.append('..')
import wave
import contextlib
import glob
import json
import pyAudioLex
from colorama import Fore, Back, Style
default_format = '.3f'
# question_ratio
question_ratio = pyAudioLex.question_ratio('What time is it? What day is it?')
print('question_ratio', format(question_ratio, default_format))
assert format(question_ratio, default_format) == '0.400'
assert format(pyAudioLex.question_ratio(''), default_format) == '0.000'
# filler_ratio
filler_ratio = pyAudioLex.filler_ratio('Uh I am, like, hello, you know, and um or umm.')
print('filler_ratio', format(filler_ratio, default_format))
assert format(filler_ratio, default_format) == '0.312'
assert format(pyAudioLex.filler_ratio(''), default_format) == '0.000'
# verb_freq
verb_freq = pyAudioLex.verb_freq('They refuse to permit us to obtain the refuse permit.')
print('verb_freq', format(verb_freq, default_format))
assert format(verb_freq, default_format) == '0.273'
assert format(pyAudioLex.verb_freq(''), default_format) == '0.000'
# noun_freq
noun_freq = pyAudioLex.noun_freq('They refuse to permit us to obtain the refuse permit.')
print('noun_freq', format(noun_freq, default_format))
assert format(noun_freq, default_format) == '0.182'
assert format(pyAudioLex.noun_freq(''), default_format) == '0.000'
# pronoun_freq
pronoun_freq = pyAudioLex.pronoun_freq('They refuse to permit us to obtain the refuse permit.')
print('pronoun_freq', format(pronoun_freq, default_format))
assert format(pronoun_freq, default_format) == '0.182'
assert format(pyAudioLex.pronoun_freq(''), default_format) == '0.000'
# adverb_freq
adverb_freq = pyAudioLex.adverb_freq('They refuse to permit us to obtain the refuse permit.')
print('adverb_freq', format(adverb_freq, default_format))
assert format(adverb_freq, default_format) == '0.000'
assert format(pyAudioLex.adverb_freq(''), default_format) == '0.000'
# adjective_freq
adjective_freq = pyAudioLex.adjective_freq('They refuse to permit us to obtain the refuse permit.')
print('adjective_freq', format(adjective_freq, default_format))
assert format(adjective_freq, default_format) == '0.000'
assert format(pyAudioLex.adjective_freq(''), default_format) == '0.000'
# particle_freq
particle_freq = pyAudioLex.particle_freq('They refuse to permit us to obtain the refuse permit.')
print('particle_freq', format(particle_freq, default_format))
assert format(particle_freq, default_format) == '0.182'
assert format(pyAudioLex.particle_freq(''), default_format) == '0.000'
# conjunction_freq
conjunction_freq = pyAudioLex.conjunction_freq('They refuse to permit us to obtain the refuse permit.')
print('conjunction_freq', format(conjunction_freq, default_format))
assert format(conjunction_freq, default_format) == '0.000'
assert format(pyAudioLex.conjunction_freq(''), default_format) == '0.000'
# pronoun_to_noun_ratio
pronoun_to_noun_ratio = pyAudioLex.pronoun_to_noun_ratio('They refuse to permit us to obtain the refuse permit.')
print('pronoun_to_noun_ratio', format(pronoun_to_noun_ratio, default_format))
assert format(pronoun_to_noun_ratio, default_format) == '1.000'
assert format(pyAudioLex.pronoun_to_noun_ratio(''), default_format) == '0.000'
# standardized_word_entropy
standardized_word_entropy = pyAudioLex.standardized_word_entropy('male female male female')
print('standardized_word_entropy', format(standardized_word_entropy, default_format))
assert format(standardized_word_entropy, default_format) == '0.721'
assert format(pyAudioLex.standardized_word_entropy(''), default_format) == '0.000'
# number_ratio
number_ratio = pyAudioLex.number_ratio('I found seven apples by a couple of trees. I found a dozen eggs by those 5 chickens.')
print('number_ratio', format(number_ratio, default_format))
assert format(number_ratio, default_format) == '0.200'
assert format(pyAudioLex.number_ratio(''), default_format) == '0.000'
# brunets_index
brunets_index = pyAudioLex.brunets_index('Bravely bold Sir Robin, rode forth from Camelot.')
print('brunets_index', format(brunets_index, default_format))
assert format(brunets_index, default_format) == '4.830'
assert format(pyAudioLex.brunets_index(''), default_format) == '0.000'
# honores_statistic
honores_statistic = pyAudioLex.honores_statistic('Bravely bold Sir Robin, rode forth from Camelot. Afterwards, Sir Robin went to the castle.')
print('honores_statistic', format(honores_statistic, default_format))
assert format(honores_statistic, default_format) == '1104.165'
assert format(pyAudioLex.honores_statistic(''), default_format) == '0.000'
# type_token_ratio
type_token_ratio = pyAudioLex.type_token_ratio('Bravely bold Sir Robin, rode forth from Camelot. Afterwards, Sir Robin went to the castle.')
print('type_token_ratio', format(type_token_ratio, default_format))
assert format(type_token_ratio, default_format) == '0.579'
assert format(pyAudioLex.type_token_ratio(''), default_format) == '0.000'
# ---------------------
# test the audio
print 'Processing test audio sample...'
recording_id = 'NLX-10'
samplejson = './test/process_inq/' + recording_id + '.json'
samplewav = './test/process_inq/' + recording_id + '.wav'
# get json
jsonfile = open(samplejson, 'r')
data = json.loads(jsonfile.read())
transcript = data['transcript'].replace('[','').replace('?]','')
# get wav
wavefile = wave.open(samplewav, 'r')
# see how long the file is
with contextlib.closing(wavefile) as f:
frames = f.getnframes()
rate = f.getframerate()
duration = frames / float(rate)
features = pyAudioLex.process(transcript, duration, samplewav)
brunets_index = features['linguistic']['brunets_index']
print('asserting linguistic feature', format(brunets_index, default_format))
assert format(brunets_index, default_format) == '11.844'
zcr = features['audio']['ZCR'][2]
print('asserting audio feature', format(zcr, default_format))
assert format(zcr, default_format) == '0.021'
print 'Done!'
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/grammar/get_grammar.py | features/audio_features/helpers/pyAudioLex/text_features/grammar/get_grammar.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: get_grammar
This script takes in a text sample and outputs 85140 grammar features.
Specifically, it extracts grammar from permutations of various parts of
speech in series.
For example, for the sentence 'I ate ham' it would be saved as
[Pronoun, Verb, Noun] for the first position.
The output is calculated in terms of the frequencies of these parts of speech,
from highest probability to lowest probability.
This is important for many applications, as grammar is context-free and
often reflects the state-of-mind of the speaker.
'''
from itertools import permutations
import nltk
from nltk import load, word_tokenize
def get_grammar(importtext):
#now have super long string of text can do operations
#get all POS fromo Penn Treebank (POS tagger)
tagdict = load('help/tagsets/upenn_tagset.pickle')
nltk_pos_list=tagdict.keys()
#get all permutations of this list
perm=permutations(nltk_pos_list,3)
#make these permutations in a list
listobj=list()
for i in list(perm):
listobj.append(list(i))
#split by sentences? or by word? (will do by word here)
text=word_tokenize(importtext)
#tokens now
tokens=nltk.pos_tag(text)
#initialize new list for pos
pos_list=list()
#parse through entire document and tokenize every 3 words until end
for i in range(len(tokens)-3):
pos=[tokens[i][1],tokens[i+1][1],tokens[i+2][1]]
pos_list.append(pos)
#count each part of speech event and total event count
counts=list()
totalcounts=0
for i in range(len(listobj)):
count=pos_list.count(listobj[i])
totalcounts=totalcounts+count
counts.append(count)
#now create probabilities / frequencies from total count
freqs=list()
for i in range(len(counts)):
freq=counts[i]/totalcounts
freqs.append(freq)
#now you can append all the permutation labels with freqs
for i in range(len(listobj)):
listobj[i].append(freqs[i])
#now you can sort lowest to highest frequency (this is commented out to keep order consistent)
# listobj.sort(key=lambda x: (x[3]),reverse=True)
return listobj
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/in_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/in_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: in_freq
in = preposition or conjunction, suborbinating
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def in_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return c['IN']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/nns_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/nns_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: nns_freq
#nns = noun, common, plural
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def nns_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return c['NNS']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/ls_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/ls_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: ls_freq
#ls = list item marker
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def ls_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return c['LS']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/jj_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/jj_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: jj_freq
#jj = adjective or numeral, ordinal
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def jj_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return c['JJ']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/vbp_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/vbp_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: vbp_freq
#vbp = verb, present tense, not 3rd person singular
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def vbp_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return ['VBP']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/wdt_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/wdt_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: wdt_freq
#wdt = WH-determiner
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def wdt_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return ['WDT']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/dt_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/dt_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: dt_freq
#dt = determiner frequency
increased use of determiners a signal for schizophrenia.
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def dt_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return c['DT']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/wp_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/wp_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: wrb_freq
#wrb = wh-adverb
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def wrb_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return ['WRB']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/rbs_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/rbs_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: rp_freq
rp = particle
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def rp_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return ['RP']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/ex_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/ex_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: ex_freq
#ex = existential there
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def ex_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return c['EX']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/vb_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/vb_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: vb_freq
#vb = verb, base form
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def vb_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return ['VB']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/rbr_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/rbr_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: rbr_freq
#rbr = adverb, comparative
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def rbr_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return ['RBR']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/jjs_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/jjs_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: jjs_freq
#jjs = adjective, superlative
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def jjs_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return c['JJS']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/to_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/to_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: to_freq
#to = 'to' as a preposition or infinitive marker
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def to_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return ['TO']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/uh_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/uh_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: uh_freq
#uh = interjection frequency
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def uh_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return ['UH']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/genitive_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/genitive_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: genetive_freq
#pos = genitive marker ('s)
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def genitive_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return ['POS']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/vbg_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/vbg_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: vbg_freq
#vbg = verb, prsent participle or gerund
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def vbg_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return ['VBG']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/cc_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/cc_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: cc_freq
In grammar, a conjunction (abbreviated conj or cnj) is a part of speech
that connects words, phrases, or clauses that are called the conjuncts
of the conjoining construction. The term discourse marker is mostly used
for conjunctions joining sentences. This definition may overlap with that
of other parts of speech, so what constitutes a "conjunction" must be defined
for each language. In general, a conjunction is an invariable grammatical
particle and it may or may not stand between the items in a conjunction.
Coordinating conjunctions, also called coordinators, are conjunctions that join,
or coordinate, two or more items (such as words, main clauses, or sentences) of
equal syntactic importance. In English, the mnemonic acronym FANBOYS can be used
to remember the coordinators for, and, nor, but, or, yet, and so.[4] These are
not the only coordinating conjunctions; various others are used, including[5]:
ch. 9[6]:p. 171 "and nor" (British), "but nor" (British), "or nor" (British),
"neither" ("They don't gamble; neither do they smoke"), "no more"
("They don't gamble; no more do they smoke"), and "only"
("I would go, only I don't have time"). Types of coordinating conjunctions
include cumulative conjunctions, adversative conjunctions, alternative conjunctions,
and illative conjunctions.[7]
Here are some examples of coordinating conjunctions in English and what they do:
For – presents rationale ("They do not gamble or smoke, for they are ascetics.")
And – presents non-contrasting item(s) or idea(s) ("They gamble, and they smoke.")
Nor – presents a non-contrasting negative idea ("They do not gamble, nor do they smoke.")
But – presents a contrast or exception ("They gamble, but they don't smoke.")
Or – presents an alternative item or idea ("Every day they gamble, or they smoke.")
Yet – presents a contrast or exception ("They gamble, yet they don't smoke.")
So – presents a consequence ("He gambled well last night, so he smoked a cigar to celebrate.")
'''
#POS not listed in the doc
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def cc_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return c['CC']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/jjr_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/jjr_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: jjr_freq
#jjr = adjective, comparative
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def jjr_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return c['JJR']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/vbd_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/vbd_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: vbd_freq
#vbd = verb, past tense
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def vbd_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return ['VBD']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/vbz_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/vbz_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: vbz_freq
#vbz = verb, present tense, 3rd person singular
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def vbz_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return ['VBZ']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/nn_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/nn_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: nn_freq
#nn = noun, common, singular or mass
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def nn_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return c['NN']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/vbn_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/vbn_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: vbn_freq
#vbn = verb, past participle
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def vbn_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return ['VBN']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/cd_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/cd_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: cd_freq
#cd = numeral, cardinal - also a number frequency.
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def cd_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return c['CD']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/nnp_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/nnp_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: nnp_freq
#nnp = noun, proper, singular
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def nnp_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return c['NNP']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/prp2_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/prp2_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: prp2_freq
prp$ - pronoun, possessive
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def prp2_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return ['PRP$']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/pdt_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/pdt_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: pdt_freq
#pdt = pre-determiner
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def pdt_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return c['PDT']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/rb_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/rb_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: rb_freq
#rb = adverb
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def rb_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return ['RB']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/parts of speech/prp_freq.py | features/audio_features/helpers/pyAudioLex/text_features/parts of speech/prp_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: prp_freq
#prp = pronoun, personal
'''
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
from collections import Counter
def prp_freq(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
c=Counter(token for word, token in tokens)
return c['PRP']/len(text)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/words/freq_dist.py | features/audio_features/helpers/pyAudioLex/text_features/words/freq_dist.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: freq_dist
outputs most frequent to least frequent words
'''
from nltk.tokenize import word_tokenize
from nltk import FreqDist
def freq_dist(importtext):
text=word_tokenize(importtext)
fdist1=FreqDist(text)
distribution=fdist1.most_common(len(fdist1))
return distribution
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/words/polarity.py | features/audio_features/helpers/pyAudioLex/text_features/words/polarity.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: polarity
Take in a text sample and output the average, standard deviation, and variance
polarity. (+) indicates happy and (-) indicates sad, 0 is neutral.
'''
import nltk
from nltk import word_tokenize
from textblob import TextBlob
import numpy as np
def polarity(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
#sentiment polarity of the session
polarity=TextBlob(importtext).sentiment[0]
#sentiment subjectivity of the session
sentiment=TextBlob(importtext).sentiment[1]
#average difference polarity every 3 words
polaritylist=list()
for i in range(0,len(tokens),3):
if i <= len(tokens)-3:
words=text[i]+' '+text[i+1]+' '+text[i+2]
polaritylist.append(TextBlob(words).sentiment[0])
else:
pass
avgpolarity=np.mean(polaritylist)
#std polarity every 3 words
stdpolarity=np.std(polaritylist)
#variance polarity every 3 words
varpolarity=np.var(polaritylist)
return [float(avgpolarity), float(stdpolarity), float(varpolarity)]
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/words/datewords_freq.py | features/audio_features/helpers/pyAudioLex/text_features/words/datewords_freq.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: datwords_freq
Take in a text sample and output frequency of date-related words.
'''
import nltk
from nltk import word_tokenize
def datewords_freq(importtext):
text=word_tokenize(importtext)
datewords=['time','monday','tuesday','wednesday','thursday','friday','saturday','sunday','january','february','march','april','may','june','july','august','september','november','december','year','day','hour','today','month',"o'clock","pm","am"]
datewords2=list()
for i in range(len(datewords)):
datewords2.append(datewords[i]+'s')
datewords=datewords+datewords2
print(datewords)
datecount=0
for i in range(len(text)):
if text[i].lower() in datewords:
datecount=datecount+1
datewords=datecount
datewordfreq=datecount/len(text)
return datewordfreq
print(datewords_freq('I love you jess on vonly'))
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/words/word_endings.py | features/audio_features/helpers/pyAudioLex/text_features/words/word_endings.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: word_endings
Given a word ending (e.g. '-ed'), output the words with that ending and the associated count.
'''
from nltk import word_tokenize
import re
def word_endings(importtext,ending):
text=word_tokenize(importtext)
#number of words ending in 'ed'
words=[w for w in text if re.search(ending+'$', w)]
return [len(words),words]
#test
#print(word_endings('In a blunt warning to the remaining ISIS fighters, Army Command Sgt. Maj. John Wayne Troxell said the shrinking band of militants could either surrender to the U.S. military or face death. “ISIS needs to understand that the Joint Force is on orders to annihilate them,” he wrote in a forceful message on Facebook. “So they have two options, should they decide to come up against the United States, our allies and partners: surrender or die!”','s'))
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/words/emotion_freqs.py | features/audio_features/helpers/pyAudioLex/text_features/words/emotion_freqs.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: emotion_freqs
Take in a text sample and output a range of emotional frequencies.
The script does this for looking for specific hotwords related to the 7 main emotions:
fear, anger, sadness, joy, disgust, surprise, trust, and anticipation.
Note we will train these hotwords on actual emotions into the future and make this
hotword detection more accurate.
'''
from nltk import word_tokenize
def emotion_freqs(importtext):
tokens=word_tokenize(importtext)
#emotions - fear, anger, sadness, joy, disgust, suprise, trust, anticipation
fearwords=['scared','afraid','avoid','not','no','anxiety','road','spider','snake','heights','die','falling','death','fast','despair','agonize','bother','worry','endure','sustain','tolerate','creeps','jitters','nervous','nervousness','concerned','worry']
angerwords=['angry','mad','injustice','annoyed','school','work','predictable','upset','frustrated','sick','tired','fuck','shoot','shit','darn','sucks','bad','ugly']
sadwords=['sad','depressed','cry','bad','disappointed','distress','uneasy','upset','regret','dismal','black','hopeless']
joywords=['happy','glad','swell','pleasant','well','good','joy','sweet','grateful','ecstatic','euphoric','encouraged','smile','laugh','content','satisfied','delighted']
disgustwords=['wrong','disgusting','bad','taste', 'aversion','horror','repulsed','hate','allergy','dislike','displeasure']
surprisewords=['surprised','appetite','fondness','like','relish','shine','surprise','unexpected','random','new','plastic','cool']
trustwords=['useful','trust','listen','insight','believe','seek','see','feel','touch','mom','brother','friend','girlfriend','father','dad','uncle','family']
anticipationwords=['excited','looking','forward','to','birthday','anniversary','christmas','new years','halloween','party','expectation']
fear=0
anger=0
sad=0
joy=0
disgust=0
surprise=0
trust=0
anticipation=0
for i in range(len(tokens)):
if tokens[i].lower() in fearwords:
fear=fear+1
if tokens[i].lower() in angerwords:
anger=anger+1
if tokens[i].lower() in sadwords:
sad=sad+1
if tokens[i].lower() in joywords:
joy=joy+1
if tokens[i].lower() in disgustwords:
disgust=digust+1
if tokens[i].lower() in surprisewords:
surprise=surprise+1
if tokens[i].lower() in trustwords:
trust=trust+1
if tokens[i].lower() in anticipationwords:
anticipation=anticipation+1
fearfreq=float(fear/len(tokens))
angerfreq=float(anger/len(tokens))
sadfreq=float(sad/len(tokens))
joyfreq=float(joy/len(tokens))
disgustfreq=float(disgust/len(tokens))
surprisefreq=float(surprise/len(tokens))
trustfreq=float(trust/len(tokens))
anticipationfreq=float(anticipation/len(tokens))
return [fearfreq,angerfreq,sadfreq,joyfreq,disgustfreq,surprisefreq,trustfreq,anticipationfreq]
#test below
#print(emotion_freqs('this is a test of sad emotional detection freqs'))
#[0.0, 0.0, 0.1111111111111111, 0.0, 0.0, 0.0, 0.0, 0.0]
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/words/word_repeats.py | features/audio_features/helpers/pyAudioLex/text_features/words/word_repeats.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: word_repeats
Word repeats on average every 10 words (important for psychosis).
Typical repeated word output:
the
the
the
the
to
to
to
to
on
on
,
,
If words are typically not articles like 'the', it could indicate a thought disorder.
'''
from nltk import word_tokenize
def word_repeats(importtext):
tokens=word_tokenize(importtext)
tenwords=list()
tenwords2=list()
repeatnum=0
repeatedwords=list()
#make number of sentences
for i in range(0,len(tokens),10):
tenwords.append(i)
for j in range(0,len(tenwords)):
if j not in [len(tenwords)-2,len(tenwords)-1]:
tenwords2.append(tokens[tenwords[j]:tenwords[j+1]])
else:
pass
#now parse for word repeats sentence-over-sentence
for k in range(0,len(tenwords2)):
if k<len(tenwords2)-1:
for l in range(10):
if tenwords2[k][l] in tenwords2[k+1]:
repeatnum=repeatnum+1
repeatedwords.append(tenwords2[k][l])
if tenwords2[k+1][l] in tenwords2[k]:
repeatnum=repeatnum+1
repeatedwords.append(tenwords2[k+1][l])
else:
pass
print
#calculate the number of sentences and repeat word avg per sentence
sentencenum=len(tenwords)
repeatavg=repeatnum/sentencenum
#repeated word freqdist
return [repeatedwords, sentencenum, repeatavg]
#test
#print(word_repeats('In a blunt warning to the remaining ISIS fighters, Army Command Sgt. Maj. John Wayne Troxell said the shrinking band of militants could either surrender to the U.S. military or face death. “ISIS needs to understand that the Joint Force is on orders to annihilate them,” he wrote in a forceful message on Facebook. “So they have two options, should they decide to come up against the United States, our allies and partners: surrender or die!”'))
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/words/num_sentences.py | features/audio_features/helpers/pyAudioLex/text_features/words/num_sentences.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: num_sentences
Total number of sentences, as calculated by punctuation marks:
Periods (.),
Interjections (!),
Questions (?).
'''
def num_sentences(importtext):
#actual number of periods
periods=importtext.count('.')
#count number of questions
questions=importtext.count('?')
#count number of interjections
interjections=importtext.count('!')
#actual number of sentences
sentencenum=periods+questions+interjections
return [sentencenum,periods,questions,interjections]
#print(num_sentences('In a blunt warning to the remaining ISIS fighters, Army Command Sgt. Maj. John Wayne Troxell said the shrinking band of militants could either surrender to the U.S. military or face death.“ISIS needs to understand that the Joint Force is on orders to annihilate them,” he wrote in a forceful message on Facebook. “So they have two options, should they decide to come up against the United States, our allies and partners: surrender or die!'))
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/words/subjectivity.py | features/audio_features/helpers/pyAudioLex/text_features/words/subjectivity.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: subjectivity
Take in a text sample and output the average, standard deviation, and variance
subjectivity.
'''
import nltk
from nltk import word_tokenize
from textblob import TextBlob
import numpy as np
def subjectivity(importtext):
text=word_tokenize(importtext)
tokens=nltk.pos_tag(text)
#sentiment subjectivity of the session
sentiment=TextBlob(importtext).sentiment[1]
subjectivitylist=list()
for i in range(0,len(tokens),3):
if i <= len(tokens)-3:
words=text[i]+' '+text[i+1]+' '+text[i+2]
subjectivitylist.append(TextBlob(words).sentiment[1])
else:
pass
#average difference subjectivity every 3 words
avgsubjectivity=np.mean(subjectivitylist)
#std subjectivity every 3 words
stdsubjectivity=np.std(subjectivitylist)
#var subjectivity every 3 words
varsubjectivity=np.var(subjectivitylist)
return [float(avgsubjectivity), float(stdsubjectivity), float(varsubjectivity)]
g=subjectivity('hello I suck so much right now')
print(g)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/text_features/words/word_stats.py | features/audio_features/helpers/pyAudioLex/text_features/words/word_stats.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: word_stats
Take in a text sample and output the average word length,
the maximum word length, the minimum word length,
the variance of the vocabulary, and the standard deviation of the vocabulary.
All of this is done by counting the length of individual words from tokens.
'''
from nltk import word_tokenize
import numpy as np
def word_stats(importtext):
text=word_tokenize(importtext)
#average word length
awords=list()
for i in range(len(text)):
awords.append(len(text[i]))
awordlength=np.mean(awords)
#all words greater than 5 in length
fivewords= [w for w in text if len(w) > 5]
fivewordnum=len(fivewords)
#maximum word length
vmax=np.amax(awords)
#minimum word length
vmin=np.amin(awords)
#variance of the vocabulary
vvar=np.var(awords)
#stdev of vocabulary
vstd=np.std(awords)
return [float(awordlength),float(fivewordnum), float(vmax),float(vmin),float(vvar),float(vstd)]
#print(word_stats('hello this is test of the script.'))
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/audioFeatures/audio_features.py | features/audio_features/helpers/pyAudioLex/audioFeatures/audio_features.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: audio_features
#These are all audio features extracted with librosa library.
See the documentation here: https://librosa.github.io/librosa/
All these features are represented as the mean, standard deviation, variance,
median, min, and max.
Note that another library exists for time-series features (e.g. at 100 ms timescale).
'''
import librosa
import numpy as np
import os
def statlist(veclist):
newlist=list()
#fingerprint statistical features
#append each with mean, std, var, median, min, and max
if len(veclist)>100:
newlist=[float(np.mean(veclist)),float(np.std(veclist)),float(np.var(veclist)),
float(np.median(veclist)),float(np.amin(veclist)),float(np.amax(veclist))]
else:
for i in range(len(veclist)):
newlist.append([float(np.mean(veclist[i])),float(np.std(veclist[i])),float(np.var(veclist[i])),
float(np.median(veclist[i])),float(np.amin(veclist[i])),float(np.amax(veclist[i]))])
return newlist
def audio_features(filename):
hop_length = 512
n_fft=2048
#load file
y, sr = librosa.load(filename)
duration=float(librosa.core.get_duration(y))
#extract features from librosa
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)
y_harmonic,y_percussive=librosa.effects.hpss(y)
mfcc = librosa.feature.mfcc(y=y, sr=sr, hop_length=hop_length, n_mfcc=13)
mfcc_delta = librosa.feature.delta(mfcc)
beat_mfcc_delta = librosa.util.sync(np.vstack([mfcc, mfcc_delta]), beat_frames)
chromagram = librosa.feature.chroma_cqt(y=y_harmonic, sr=sr)
beat_chroma = librosa.util.sync(chromagram,
beat_frames,
aggregate=np.median)
beat_features = np.vstack([beat_chroma, beat_mfcc_delta])
zero_crossings = librosa.zero_crossings(y)
zero_crossing_time = librosa.feature.zero_crossing_rate(y)
spectral_centroid = librosa.feature.spectral_centroid(y)
spectral_bandwidth = librosa.feature.spectral_bandwidth(y)
spectral_contrast = librosa.feature.spectral_contrast(y)
spectral_rolloff = librosa.feature.spectral_rolloff(y)
rmse=librosa.feature.rmse(y)
poly_features=librosa.feature.poly_features(y)
chroma_stft = librosa.feature.chroma_stft(y)
chroma_cens = librosa.feature.chroma_cens(y)
tonnetz=librosa.feature.tonnetz(y)
mfcc_all=statlist(mfcc)
mfccd_all=statlist(mfcc_delta)
bmfccd_all=statlist(beat_mfcc_delta)
cg_all=statlist(chromagram)
bc_all=statlist(beat_chroma)
bf_all=statlist(beat_features)
zc_all=statlist(zero_crossings)
sc_all=statlist(spectral_centroid)
sb_all=statlist(spectral_bandwidth)
sc_all=statlist(spectral_contrast)
sr_all=statlist(spectral_rolloff)
rmse_all=statlist(rmse)
pf_all=statlist(poly_features)
cstft_all=statlist(chroma_stft)
ccens_all=statlist(chroma_cens)
tonnetz_all=statlist(tonnetz)
return [duration,float(tempo),beat_frames.tolist(),beat_times.tolist(),mfcc_all,
mfccd_all,bmfccd_all,cg_all,bc_all,bf_all,zc_all,sc_all,sb_all,
sc_all,sr_all,rmse_all,pf_all,cstft_all,ccens_all,tonnetz_all]
#TEST
#import os
#os.chdir('/Users/jimschwoebel/Desktop')
#output=audio_features('test.wav')
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/audioFeatures/audio_time_features.py | features/audio_features/helpers/pyAudioLex/audioFeatures/audio_time_features.py | '''
@package: pyAudioLex
@author: Jim Schwoebel
@module: audio_time_features
#Note that another library exists for time-series features
(e.g. at 100 ms timescale). Uses audio_features library.
See the documentation here: https://librosa.github.io/librosa/
All these features are represented as the mean, standard deviation, variance,
median, min, and max.
'''
import librosa
import numpy as np
from pydub import AudioSegment
import os
from . import audio_features
def exportfile(newAudio,time1,time2,filename,i):
#Exports to a wav file in the current path.
newAudio2 = newAudio[time1:time2]
g=os.listdir()
if filename[0:-4]+'_'+str(i)+'.wav' in g:
filename2=str(i)+'_segment'+'.wav'
print('making %s'%(filename2))
newAudio2.export(filename2,format="wav")
else:
filename2=str(i)+'.wav'
print('making %s'%(filename2))
newAudio2.export(filename2, format="wav")
return filename2
def audio_time_features(filename,timesplit):
#recommend >0.50 seconds for timesplit (timesplit > 0.50)
hop_length = 512
n_fft=2048
y, sr = librosa.load(filename)
duration=float(librosa.core.get_duration(y))
#Now splice an audio signal into individual elements of 100 ms and extract
#all these features per 100 ms
segnum=round(duration/timesplit)
deltat=duration/segnum
timesegment=list()
time=0
for i in range(segnum):
#milliseconds
timesegment.append(time)
time=time+deltat*1000
newAudio = AudioSegment.from_wav(filename)
filelist=list()
for i in range(len(timesegment)-1):
filename=exportfile(newAudio,timesegment[i],timesegment[i+1],filename,i)
filelist.append(filename)
featureslist=list()
#save 100 ms segments in current folder (delete them after)
for j in range(len(filelist)):
try:
features=audio_features.audio_features(filelist[i])
featureslist.append(features)
os.remove(filelist[j])
except:
print('error splicing')
featureslist.append('silence')
os.remove(filelist[j])
#outputfeatures
return [duration, segnum, featureslist]
#test - recommended settings @ 0.50 seconds
##os.chdir('/Users/jimschwoebel/Desktop/audiotest')
##output=audio_time_features('test.wav',0.500)
#timessplit=secs
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/audioFeatures/__init__.py | features/audio_features/helpers/pyAudioLex/audioFeatures/__init__.py | python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false | |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pyAudioLex/audioFeatures/standard_feature_array.py | features/audio_features/helpers/pyAudioLex/audioFeatures/standard_feature_array.py | import os
import librosa
import numpy as np
from pydub import AudioSegment
def featurize(wavfile):
#initialize features
hop_length = 512
n_fft=2048
#load file
y, sr = librosa.load(wavfile)
#extract mfcc coefficients
mfcc = librosa.feature.mfcc(y=y, sr=sr, hop_length=hop_length, n_mfcc=13)
mfcc_delta = librosa.feature.delta(mfcc)
#extract mean, standard deviation, min, and max value in mfcc frame, do this across all mfccs
mfcc_features=np.array([np.mean(mfcc[0]),np.std(mfcc[0]),np.amin(mfcc[0]),np.amax(mfcc[0]),
np.mean(mfcc[1]),np.std(mfcc[1]),np.amin(mfcc[1]),np.amax(mfcc[1]),
np.mean(mfcc[2]),np.std(mfcc[2]),np.amin(mfcc[2]),np.amax(mfcc[2]),
np.mean(mfcc[3]),np.std(mfcc[3]),np.amin(mfcc[3]),np.amax(mfcc[3]),
np.mean(mfcc[4]),np.std(mfcc[4]),np.amin(mfcc[4]),np.amax(mfcc[4]),
np.mean(mfcc[5]),np.std(mfcc[5]),np.amin(mfcc[5]),np.amax(mfcc[5]),
np.mean(mfcc[6]),np.std(mfcc[6]),np.amin(mfcc[6]),np.amax(mfcc[6]),
np.mean(mfcc[7]),np.std(mfcc[7]),np.amin(mfcc[7]),np.amax(mfcc[7]),
np.mean(mfcc[8]),np.std(mfcc[8]),np.amin(mfcc[8]),np.amax(mfcc[8]),
np.mean(mfcc[9]),np.std(mfcc[9]),np.amin(mfcc[9]),np.amax(mfcc[9]),
np.mean(mfcc[10]),np.std(mfcc[10]),np.amin(mfcc[10]),np.amax(mfcc[10]),
np.mean(mfcc[11]),np.std(mfcc[11]),np.amin(mfcc[11]),np.amax(mfcc[11]),
np.mean(mfcc[12]),np.std(mfcc[12]),np.amin(mfcc[12]),np.amax(mfcc[12]),
np.mean(mfcc_delta[0]),np.std(mfcc_delta[0]),np.amin(mfcc_delta[0]),np.amax(mfcc_delta[0]),
np.mean(mfcc_delta[1]),np.std(mfcc_delta[1]),np.amin(mfcc_delta[1]),np.amax(mfcc_delta[1]),
np.mean(mfcc_delta[2]),np.std(mfcc_delta[2]),np.amin(mfcc_delta[2]),np.amax(mfcc_delta[2]),
np.mean(mfcc_delta[3]),np.std(mfcc_delta[3]),np.amin(mfcc_delta[3]),np.amax(mfcc_delta[3]),
np.mean(mfcc_delta[4]),np.std(mfcc_delta[4]),np.amin(mfcc_delta[4]),np.amax(mfcc_delta[4]),
np.mean(mfcc_delta[5]),np.std(mfcc_delta[5]),np.amin(mfcc_delta[5]),np.amax(mfcc_delta[5]),
np.mean(mfcc_delta[6]),np.std(mfcc_delta[6]),np.amin(mfcc_delta[6]),np.amax(mfcc_delta[6]),
np.mean(mfcc_delta[7]),np.std(mfcc_delta[7]),np.amin(mfcc_delta[7]),np.amax(mfcc_delta[7]),
np.mean(mfcc_delta[8]),np.std(mfcc_delta[8]),np.amin(mfcc_delta[8]),np.amax(mfcc_delta[8]),
np.mean(mfcc_delta[9]),np.std(mfcc_delta[9]),np.amin(mfcc_delta[9]),np.amax(mfcc_delta[9]),
np.mean(mfcc_delta[10]),np.std(mfcc_delta[10]),np.amin(mfcc_delta[10]),np.amax(mfcc_delta[10]),
np.mean(mfcc_delta[11]),np.std(mfcc_delta[11]),np.amin(mfcc_delta[11]),np.amax(mfcc_delta[11]),
np.mean(mfcc_delta[12]),np.std(mfcc_delta[12]),np.amin(mfcc_delta[12]),np.amax(mfcc_delta[12])])
return mfcc_features
def exportfile(newAudio,time1,time2,filename,i):
#Exports to a wav file in the current path.
newAudio2 = newAudio[time1:time2]
g=os.listdir()
if filename[0:-4]+'_'+str(i)+'.wav' in g:
filename2=str(i)+'_segment'+'.wav'
print('making %s'%(filename2))
newAudio2.export(filename2,format="wav")
else:
filename2=str(i)+'.wav'
print('making %s'%(filename2))
newAudio2.export(filename2, format="wav")
return filename2
def audio_time_features(filename):
#recommend >0.50 seconds for timesplit
timesplit=0.50
hop_length = 512
n_fft=2048
y, sr = librosa.load(filename)
duration=float(librosa.core.get_duration(y))
#Now splice an audio signal into individual elements of 100 ms and extract
#all these features per 100 ms
segnum=round(duration/timesplit)
deltat=duration/segnum
timesegment=list()
time=0
for i in range(segnum):
#milliseconds
timesegment.append(time)
time=time+deltat*1000
newAudio = AudioSegment.from_wav(filename)
filelist=list()
featureslist=np.zeros(104)
for i in range(len(timesegment)-1):
filename=exportfile(newAudio,timesegment[i],timesegment[i+1],filename,i)
filelist.append(filename)
#save 100 ms segments in current folder (delete them after)
for j in range(len(filelist)):
try:
features=featurize(filelist[i])
featureslist=featureslist+features
os.remove(filelist[j])
except:
print('error splicing')
os.remove(filelist[j])
#now scale the featureslist array by the length to get mean in each category
featureslist=featureslist/segnum
return featureslist
def standard_feature_array(wavfile):
# add some error handling in case < 7 frames, if less than 7 frames output a zero array
try:
features=np.append(featurize(wavfile),audio_time_features(wavfile))
except:
features=np.zeros(208)
return features
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/pysptk/pysptk_features.py | features/audio_features/helpers/pysptk/pysptk_features.py | import seaborn, pysptk, matplotlib
import numpy as np
from scipy.io import wavfile
# get statistical features in numpy
def stats(matrix):
mean=np.mean(matrix)
std=np.std(matrix)
maxv=np.amax(matrix)
minv=np.amin(matrix)
median=np.median(matrix)
output=np.array([mean,std,maxv,minv,median])
return output
# get labels for later
def stats_labels(label, sample_list):
mean=label+'_mean'
std=label+'_std'
maxv=label+'_maxv'
minv=label+'_minv'
median=label+'_median'
sample_list.append(mean)
sample_list.append(std)
sample_list.append(maxv)
sample_list.append(minv)
sample_list.append(median)
return sample_list
def pysptk_featurize(audiofile):
labels=list()
features=list()
fs, x = wavfile.read(audiofile)
f0_swipe = pysptk.swipe(x.astype(np.float64), fs=fs, hopsize=80, min=60, max=200, otype="f0")
features=features+stats(f0_swipe)
labels=stats_labels('f0_swipe',labels)
f0_rapt = pysptk.rapt(x.astype(np.float32), fs=fs, hopsize=80, min=60, max=200, otype="f0")
features=features+stats(f0_rapt)
labels=stats_labels('f0_rapt',labels)
mgc = pysptk.mgcep(xw, 20, 0.0, 0.0)
features=features+stats(mgc)
labels=stats_labels('mel-spectrum envelope',labels)
return features, labels
features, labels = pysptk_featurize('test.wav')
print(features)
print(labels) | python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/sa/features/signal.py | features/audio_features/helpers/sa/features/signal.py | import numpy as np
import peakutils as pu
def get_F_0( signal, rate, time_step = 0.0, min_pitch = 75, max_pitch = 600,
max_num_cands = 15, silence_thres = .03, voicing_thres = .45,
octave_cost = .01, octave_jump_cost = .35,
voiced_unvoiced_cost = .14, accurate = False, pulse = False ):
"""
Computes median Fundamental Frequency ( :math:`F_0` ).
The fundamental frequency ( :math:`F_0` ) of a signal is the lowest
frequency, or the longest wavelength of a periodic waveform. In the context
of this algorithm, :math:`F_0` is calculated by segmenting a signal into
frames, then for each frame the most likely candidate is chosen from the
lowest possible frequencies to be :math:`F_0`. From all of these values,
the median value is returned. More specifically, the algorithm filters out
frequencies higher than the Nyquist Frequency from the signal, then
segments the signal into frames of at least 3 periods of the minimum
pitch. For each frame, it then calculates the normalized autocorrelation
( :math:`r_a` ), or the correlation of the signal to a delayed copy of
itself. :math:`r_a` is calculated according to Boersma's paper
( referenced below ), which is an improvement of previous methods.
:math:`r_a` is estimated by dividing the autocorrelation of the windowed
signal by the autocorrelation of the window. After :math:`r_a` is
calculated the maxima values of :math:`r_a` are found. These points
correspond to the lag domain, or points in the delayed signal, where the
correlation value has peaked. The higher peaks indicate a stronger
correlation. These points in the lag domain suggest places of wave
repetition and are the candidates for :math:`F_0`. The best candidate for
:math:`F_0` of each frame is picked by a cost function, a function that
compares the cost of transitioning from the best :math:`F_0` of the
previous frame to all possible :math:`F_0's` of the current frame. Once the
path of :math:`F_0's` of least cost has been determined, the median
:math:`F_0` of all voiced frames is returned.
This algorithm is adapted from:
http://www.fon.hum.uva.nl/david/ba_shs/2010/Boersma_Proceedings_1993.pdf
and from:
https://github.com/praat/praat/blob/master/fon/Sound_to_Pitch.cpp
.. note::
It has been shown that depressed and suicidal men speak with a reduced
fundamental frequency range, ( described in:
http://ameriquests.org/index.php/vurj/article/download/2783/1181 ) and
patients responding well to depression treatment show an increase in
their fundamental frequency variability ( described in :
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3022333/ ). Because
acoustical properties of speech are the earliest and most consistent
indicators of mood disorders, early detection of fundamental frequency
changes could significantly improve recovery time for disorders with
psychomotor symptoms.
Args:
signal ( numpy.ndarray ): This is the signal :math:`F_0` will be calculated from.
rate ( int ): This is the number of samples taken per second.
time_step ( float ): ( optional, default value: 0.0 ) The measurement, in seconds, of time passing between each frame. The smaller the time_step, the more overlap that will occur. If 0 is supplied the degree of oversampling will be equal to four.
min_pitch ( float ): ( optional, default value: 75 ) This is the minimum value to be returned as pitch, which cannot be less than or equal to zero.
max_pitch ( float ): ( optional, default value: 600 ) This is the maximum value to be returned as pitch, which cannot be greater than the Nyquist Frequency.
max_num_cands ( int ): ( optional, default value: 15 ) This is the maximum number of candidates to be considered for each frame, the unvoiced candidate ( i.e. :math:`F_0` equal to zero ) is always considered.
silence_thres ( float ): ( optional, default value: 0.03 ) Frames that do not contain amplitudes above this threshold ( relative to the global maximum amplitude ), are probably silent.
voicing_thres ( float ): ( optional, default value: 0.45 ) This is the strength of the unvoiced candidate, relative to the maximum possible :math:`r_a`. To increase the number of unvoiced decisions, increase this value.
octave_cost ( float ): ( optional, default value: 0.01 per octave ) This is the degree of favouring of high-frequency candidates, relative to the maximum possible :math:`r_a`. This is necessary because in the case of a perfectly periodic signal, all undertones of :math:`F_0` are equally strong candidates as :math:`F_0` itself. To more strongly favour recruitment of high-frequency candidates, increase this value.
octave_jump_cost ( float ): ( optional, default value: 0.35 ) This is degree of disfavouring of pitch changes, relative to the maximum possible :math:`r_a`. To decrease the number of large frequency jumps, increase this value.
voiced_unvoiced_cost ( float ): ( optional, default value: 0.14 ) This is the degree of disfavouring of voiced/unvoiced transitions, relative to the maximum possible :math:`r_a`. To decrease the number of voiced/unvoiced transitions, increase this value.
accurate ( bool ): ( optional, default value: False ) If False, the window is a Hanning window with a length of :math:`\\frac{ 3.0} {min\_pitch}`. If True, the window is a Gaussian window with a length of :math:`\\frac{6.0}{min\_pitch}`, i.e. twice the length.
pulse ( bool ): ( optional, default value: False ) If False, the function returns a list containing only the median :math:`F_0`. If True, the function returns a list with all values necessary to calculate pulses. This list contains the median :math:`F_0`, the frequencies for each frame in a list, a list of tuples containing the beginning time of the frame, and the ending time of the frame, and the signal filtered by the Nyquist Frequency. The indicies in the second and third list correspond to each other.
Returns:
list: Index 0 contains the median :math:`F_0` in hz. If pulse is set
equal to True, indicies 1, 2, and 3 will contain: a list of all voiced
periods in order, a list of tuples of the beginning and ending time
of a voiced interval, with each index in the list corresponding to the
previous list, and a numpy.ndarray of the signal filtered by the
Nyquist Frequency. If pulse is set equal to False, or left to the
default value, then the list will only contain the median :math:`F_0`.
Raises:
ValueError: min_pitch has to be greater than zero.
ValueError: octave_cost isn't in [ 0, 1 ].
ValueError: silence_thres isn't in [ 0, 1 ].
ValueError: voicing_thres isn't in [ 0, 1 ].
ValueError: max_pitch can't be larger than Nyquist Frequency.
Example:
The example below demonstrates what different outputs this function
gives, using a synthesized signal.
>>> import numpy as np
>>> from matplotlib import pyplot as plt
>>> domain = np.linspace( 0, 6, 300000 )
>>> rate = 50000
>>> y = lambda x: np.sin( 2 * np.pi * 140 * x )
>>> signal = y( domain )
>>> get_F_0( signal, rate )
[ 139.70588235294116 ]
>>> get_F_0( signal, rate, voicing_threshold = .99, accurate = True )
[ 139.70588235294116 ]
>>> w, x, y, z = get_F_0( signal, rate, pulse = True )
>>> print( w )
139.70588235294116
>>> print( x[ :5 ] )
[ 0.00715789 0.00715789 0.00715789 0.00715789 0.00715789 ]
>>> print( y[ :5 ] )
[ ( 0.002500008333361111, 0.037500125000416669 ),
( 0.012500041666805555, 0.047500158333861113 ),
( 0.022500075000249999, 0.057500191667305557 ),
( 0.032500108333694447, 0.067500225000749994 ),
( 0.042500141667138891, 0.077500258334194452 ) ]
>>> print( z[ : 5 ] )
[ 0. 0.01759207 0.0351787 0.05275443 0.07031384 ]
The example below demonstrates the algorithms ability to adjust for
signals with dynamic frequencies, by comparing a plot of a synthesized
signal with an increasing frequency, and the calculated frequencies for
that signal.
>>> domain = np.linspace( 1, 2, 10000 )
>>> rate = 10000
>>> y = lambda x : np.sin( x ** 8 )
>>> signal = y( domain )
>>> median_F_0, periods, time_vals, modified_sig = get_F_0( signal,
rate, pulse = True )
>>> plt.subplot( 211 )
>>> plt.plot( domain, signal )
>>> plt.title( "Synthesized Signal" )
>>> plt.ylabel( "Amplitude" )
>>> plt.subplot( 212 )
>>> plt.plot( np.linspace( 1, 2, len( periods ) ), 1.0 / np.array(
periods ) )
>>> plt.title( "Frequencies of Signal" )
>>> plt.xlabel( "Samples" )
>>> plt.ylabel( "Frequency" )
>>> plt.suptitle( "Comparison of Synthesized Signal and it's Calculated Frequencies" )
>>> plt.show()
.. figure:: figures/F_0_synthesized_sig.png
:align: center
"""
if min_pitch <= 0:
raise ValueError( "min_pitch has to be greater than zero." )
if max_num_cands < max_pitch / min_pitch:
max_num_cands = int( max_pitch / min_pitch )
initial_len = len( signal )
total_time = initial_len / float( rate )
tot_time_arr = np.linspace( 0, total_time, initial_len )
max_place_poss = 1.0 / min_pitch
min_place_poss = 1.0 / max_pitch
#to silence formants
min_place_poss2 = 0.5 / max_pitch
if accurate: pds_per_window = 6.0
else: pds_per_window = 3.0
#degree of oversampling is 4
if time_step <= 0: time_step = ( pds_per_window / 4.0 ) / min_pitch
w_len = pds_per_window / min_pitch
#correcting for time_step
octave_jump_cost *= .01 / time_step
voiced_unvoiced_cost *= .01 / time_step
Nyquist_Frequency = rate / 2.0
upper_bound = .95 * Nyquist_Frequency
zeros_pad = 2 ** ( int( np.log2( initial_len ) ) + 1 ) - initial_len
signal = np.hstack( ( signal, np.zeros( zeros_pad ) ) )
fft_signal = np.fft.fft( signal )
fft_signal[ int( upper_bound ) : -int( upper_bound ) ] = 0
sig = np.fft.ifft( fft_signal )
sig = sig[ :initial_len ].real
#checking to make sure values are valid
if Nyquist_Frequency < max_pitch:
raise ValueError( "max_pitch can't be larger than Nyquist Frequency." )
if octave_cost < 0 or octave_cost > 1:
raise ValueError( "octave_cost isn't in [ 0, 1 ]" )
if voicing_thres< 0 or voicing_thres > 1:
raise ValueError( "voicing_thres isn't in [ 0, 1 ]" )
if silence_thres < 0 or silence_thres > 1:
raise ValueError( "silence_thres isn't in [ 0, 1 ]" )
#finding number of samples per frame and time_step
frame_len = int( w_len * rate + .5 )
time_len = int( time_step * rate + .5 )
#initializing list of candidates for F_0, and their strengths
best_cands, strengths, time_vals = [], [], []
#finding the global peak the way Praat does
global_peak = max( abs( sig - sig.mean() ) )
print(type(global_peak),'\n')
e = np.e
inf = np.inf
log = np.log2
start_i = 0
while start_i < len( sig ) - frame_len :
end_i = start_i + frame_len
segment = sig[ start_i : end_i ]
if accurate:
t = np.linspace( 0, w_len, len( segment ) )
numerator = e ** ( -12.0 * ( t / w_len - .5 ) ** 2.0 ) - e ** -12.0
denominator = 1.0 - e ** -12.0
window = numerator / denominator
interpolation_depth = 0.25
else:
window = np.hanning( len( segment ) )
interpolation_depth = 0.50
#shave off ends of time intervals to account for overlapping
start_time = tot_time_arr[ start_i + int( time_len / 4.0 ) ]
stop_time = tot_time_arr[ end_i - int( time_len / 4.0 ) ]
time_vals.append( ( start_time, stop_time ) )
start_i += time_len
long_pd_i = int( rate / min_pitch )
half_pd_i = int( long_pd_i / 2.0 + 1 )
long_pd_cushion = segment[ half_pd_i : - half_pd_i ]
#finding local peak and local mean the way Praat does
#local mean is found by looking a longest period to either side of the
#center of the frame, and using only the values within this interval to
#calculate the local mean, and similarly local peak is found by looking
#a half of the longest period to either side of the center of the
#frame, ( after the frame has windowed ) and choosing the absolute
#maximum in this interval
local_mean = long_pd_cushion.mean()
segment = segment - local_mean
segment *= window
half_pd_cushion = segment[ long_pd_i : -long_pd_i ]
local_peak = max( abs( half_pd_cushion ) )
if local_peak == 0:
#shortcut -> complete silence and only candidate is silent candidate
best_cands.append( [ inf ] )
strengths.append( [ voicing_thres + 2 ] )
else:
#calculating autocorrelation, based off steps 3.2-3.10
intensity = local_peak / float( global_peak )
N = len( segment )
nFFT = 2 ** int( log( ( 1.0 + interpolation_depth ) * N ) + 1 )
window = np.hstack( ( window, np.zeros( nFFT - N ) ) )
segment = np.hstack( ( segment, np.zeros( nFFT - N ) ) )
x_fft = np.fft.fft( segment )
r_a = np.real( np.fft.fft( x_fft * np.conjugate( x_fft ) ) )
r_a = r_a[ : int( N / pds_per_window ) ]
x_fft = np.fft.fft( window )
r_w = np.real( np.fft.fft( x_fft * np.conjugate( x_fft ) ) )
r_w = r_w[ : int( N / pds_per_window ) ]
r_x = r_a / r_w
r_x /= r_x[ 0 ]
#creating an array of the points in time corresponding to sampled
#autocorrelation of the signal ( r_x )
time_array = np.linspace( 0 , w_len / pds_per_window, len( r_x ) )
peaks = pu.indexes( r_x , thres = 0 )
max_values, max_places = r_x[ peaks ], time_array[ peaks ]
#only consider places that are voiced over a certain threshold
max_places = max_places[ max_values > 0.5 * voicing_thres ]
max_values = max_values[ max_values > 0.5 * voicing_thres ]
for i in range( len( max_values ) ):
#reflecting values > 1 through 1.
if max_values[ i ] > 1.0 :
max_values[ i ] = 1.0 / max_values[ i ]
#calculating the relative strength value
rel_val = [ val - octave_cost * log( place * min_pitch ) for
val, place in zip( max_values, max_places ) ]
if len( max_values ) > 0.0 :
#finding the max_num_cands-1 maximizers, and maximums, then
#calculating their strengths ( eq. 23 and 24 ) and accounting for
#silent candidate
max_places = [ max_places[ i ] for i in np.argsort( rel_val )[
-max_num_cands + 1 : ] ]
max_values = [ max_values[ i ] for i in np.argsort( rel_val )[
-max_num_cands + 1 : ] ]
max_places = np.array( max_places )
max_values = np.array( max_values )
rel_val = list(np.sort( rel_val )[ -max_num_cands + 1 : ] )
#adding the silent candidate's strength to strengths
rel_val.append( voicing_thres + max( 0, 2 - ( intensity /
( silence_thres / ( 1 + voicing_thres ) ) ) ) )
#inf is our silent candidate
max_places = np.hstack( ( max_places, inf ) )
best_cands.append( list( max_places ) )
strengths.append( rel_val )
else:
#if there are no available maximums, only account for silent
#candidate
best_cands.append( [ inf ] )
strengths.append( [ voicing_thres + max( 0, 2 - intensity /
( silence_thres / ( 1 + voicing_thres ) ) ) ] )
#Calculates smallest costing path through list of candidates ( forwards ),
#and returns path.
best_total_cost, best_total_path = -inf, []
#for each initial candidate find the path of least cost, then of those
#paths, choose the one with the least cost.
for cand in range( len( best_cands[ 0 ] ) ):
start_val = best_cands[ 0 ][ cand ]
total_path = [ start_val ]
level = 1
prev_delta = strengths[ 0 ][ cand ]
maximum = -inf
while level < len( best_cands ) :
prev_val = total_path[ -1 ]
best_val = inf
for j in range( len( best_cands[ level ] ) ):
cur_val = best_cands[ level ][ j ]
cur_delta = strengths[ level ][ j ]
cost = 0
cur_unvoiced = cur_val == inf or cur_val < min_place_poss2
prev_unvoiced = prev_val == inf or prev_val < min_place_poss2
if cur_unvoiced:
#both voiceless
if prev_unvoiced:
cost = 0
#voiced-to-unvoiced transition
else:
cost = voiced_unvoiced_cost
else:
#unvoiced-to-voiced transition
if prev_unvoiced:
cost = voiced_unvoiced_cost
#both are voiced
else:
cost = octave_jump_cost * abs( log( cur_val /
prev_val ) )
#The cost for any given candidate is given by the transition
#cost, minus the strength of the given candidate
value = prev_delta - cost + cur_delta
if value > maximum: maximum, best_val = value, cur_val
prev_delta = maximum
total_path.append( best_val )
level += 1
if maximum > best_total_cost:
best_total_cost, best_total_path = maximum, total_path
f_0_forth = np.array( best_total_path )
#Calculates smallest costing path through list of candidates ( backwards ),
#and returns path. Going through the path backwards introduces frequencies
#previously marked as unvoiced, or increases undertones, to decrease
#frequency jumps
best_total_cost, best_total_path2 = -inf, []
#Starting at the end, for each initial candidate find the path of least
#cost, then of those paths, choose the one with the least cost.
for cand in range( len( best_cands[ -1 ] ) ):
start_val = best_cands[ -1 ][ cand ]
total_path = [ start_val ]
level = len( best_cands ) - 2
prev_delta = strengths[ -1 ][ cand ]
maximum = -inf
while level > -1 :
prev_val = total_path[ -1 ]
best_val = inf
for j in range( len( best_cands[ level ] ) ):
cur_val = best_cands[ level ][ j ]
cur_delta = strengths[ level ][ j ]
cost = 0
cur_unvoiced = cur_val == inf or cur_val < min_place_poss2
prev_unvoiced = prev_val == inf or prev_val < min_place_poss2
if cur_unvoiced:
#both voiceless
if prev_unvoiced:
cost = 0
#voiced-to-unvoiced transition
else:
cost = voiced_unvoiced_cost
else:
#unvoiced-to-voiced transition
if prev_unvoiced:
cost = voiced_unvoiced_cost
#both are voiced
else:
cost = octave_jump_cost * abs( log( cur_val /
prev_val ) )
#The cost for any given candidate is given by the transition
#cost, minus the strength of the given candidate
value = prev_delta - cost + cur_delta
if value > maximum: maximum, best_val = value, cur_val
prev_delta = maximum
total_path.append( best_val )
level -= 1
if maximum > best_total_cost:
best_total_cost, best_total_path2 = maximum, total_path
f_0_back = np.array( best_total_path2 )
#reversing f_0_backward so the initial value corresponds to first frequency
f_0_back = f_0_back[ -1 : : -1 ]
#choose the maximum frequency from each path for the total path
f_0 = np.array( [ min( i, j ) for i, j in zip( f_0_forth, f_0_back ) ] )
if pulse:
#removing all unvoiced time intervals from list
removed = 0
for i in range( len( f_0 ) ):
if f_0[ i ] > max_place_poss or f_0[ i] < min_place_poss:
time_vals.remove( time_vals[ i - removed ] )
removed += 1
for i in range( len( f_0 ) ):
#if f_0 is voiceless assign occurance of peak to inf -> when divided
#by one this will give us a frequency of 0, corresponding to a unvoiced
#frame
if f_0[ i ] > max_place_poss or f_0[ i ] < min_place_poss :
f_0[ i ] = inf
f_0 = f_0[ f_0 < inf ]
if pulse:
return [ np.median( 1.0 / f_0 ), list( f_0 ), time_vals, signal ]
if len( f_0 ) == 0:
return [ 0 ]
else:
return [ np.median( 1.0 / f_0 ) ]
def get_HNR( signal, rate, time_step = 0, min_pitch = 75,
silence_threshold = .1, periods_per_window = 4.5 ):
"""
Computes mean Harmonics-to-Noise ratio ( HNR ).
The Harmonics-to-Noise ratio ( HNR ) is the ratio
of the energy of a periodic signal, to the energy of the noise in the
signal, expressed in dB. This value is often used as a measure of
hoarseness in a person's voice. By way of illustration, if 99% of the
energy of the signal is in the periodic part and 1% of the energy is in
noise, then the HNR is :math:`10 \cdot log_{10}( \\frac{99}{1} ) = 20`.
A HNR of 0 dB means there is equal energy in harmonics and in noise. The
first step for HNR determination of a signal, in the context of this
algorithm, is to set the maximum frequency allowable to the signal's
Nyquist Frequency. Then the signal is segmented into frames of length
:math:`\\frac{periods\_per\_window}{min\_pitch}`. Then for each frame, it
calculates the normalized autocorrelation ( :math:`r_a` ), or the
correlation of the signal to a delayed copy of itself. :math:`r_a` is
calculated according to Boersma's paper ( referenced below ). The highest
peak is picked from :math:`r_a`. If the height of this peak is larger than
the strength of the silent candidate, then the HNR for this frame is
calculated from that peak. The height of the peak corresponds to the energy
of the periodic part of the signal. Once the HNR value has been calculated
for all voiced frames, the mean is taken from these values and returned.
This algorithm is adapted from:
http://www.fon.hum.uva.nl/david/ba_shs/2010/Boersma_Proceedings_1993.pdf
and from:
https://github.com/praat/praat/blob/master/fon/Sound_to_Harmonicity.cpp
.. note::
The Harmonics-to-Noise ratio of a person's voice is strongly negatively
correlated to depression severity ( described in:
https://ll.mit.edu/mission/cybersec/publications/publication-files/full_papers/2012_09_09_MalyskaN_Interspeech_FP.pdf )
and can be used as an early indicator of depression, and suicide risk.
After this indicator has been realized, preventative medicine can be
implemented, improving recovery time or even preventing further
symptoms.
Args:
signal ( numpy.ndarray ): This is the signal the HNR will be calculated from.
rate ( int ): This is the number of samples taken per second.
time_step ( float ): ( optional, default value: 0.0 ) This is the measurement, in seconds, of time passing between each frame. The smaller the time_step, the more overlap that will occur. If 0 is supplied, the degree of oversampling will be equal to four.
min_pitch ( float ): ( optional, default value: 75 ) This is the minimum value to be returned as pitch, which cannot be less than or equal to zero
silence_threshold ( float ): ( optional, default value: 0.1 ) Frames that do not contain amplitudes above this threshold ( relative to the global maximum amplitude ), are considered silent.
periods_per_window ( float ): ( optional, default value: 4.5 ) 4.5 is best for speech. The more periods contained per frame, the more the algorithm becomes sensitive to dynamic changes in the signal.
Returns:
float: The mean HNR of the signal expressed in dB.
Raises:
ValueError: min_pitch has to be greater than zero.
ValueError: silence_threshold isn't in [ 0, 1 ].
Example:
The example below adjusts parameters of the function, using the same
synthesized signal with added noise, to demonstrate the stability of
the function.
>>> import numpy as np
>>> from matplotlib import pyplot as plt
>>> domain = np.linspace( 0, 6, 300000 )
>>> rate = 50000
>>> y = lambda x:( 1 + .3 * np.sin( 2 * np.pi * 140 * x ) ) * np.sin(
2 * np.pi * 140 * x )
>>> signal = y( domain ) + .2 * np.random.random( 300000 )
>>> get_HNR( signal, rate )
21.885338007330802
>>> get_HNR( signal, rate, periods_per_window = 6 )
21.866307805597849
>>> get_HNR( signal, rate, time_step = .04, periods_per_window = 6 )
21.878451649148804
We'd expect an increase in noise to reduce HNR and similar energies
in noise and harmonics to produce a HNR that approaches zero. This is
demonstrated below.
>>> signals = [ y( domain ) + i / 10.0 * np.random.random( 300000 ) for
i in range( 1, 11 ) ]
>>> HNRx10 = [ get_HNR( sig, rate ) for sig in signals ]
>>> plt.plot( np.linspace( .1, 1, 10 ), HNRx10 )
>>> plt.xlabel( "Amount of Added Noise" )
>>> plt.ylabel( "HNR" )
>>> plt.title( "HNR Values of Signals with Added Noise" )
>>> plt.show()
.. figure:: figures/HNR_values_added_noise.png
:align: center
"""
#checking to make sure values are valid
if min_pitch <= 0:
raise ValueError( "min_pitch has to be greater than zero." )
if silence_threshold < 0 or silence_threshold > 1:
raise ValueError( "silence_threshold isn't in [ 0, 1 ]." )
#degree of overlap is four
if time_step <= 0: time_step = ( periods_per_window / 4.0 ) / min_pitch
Nyquist_Frequency = rate / 2.0
max_pitch = Nyquist_Frequency
global_peak = max( abs( signal - signal.mean() ) )
window_len = periods_per_window / float( min_pitch )
#finding number of samples per frame and time_step
frame_len = int( window_len * rate )
t_len = int( time_step * rate )
#segmenting signal, there has to be at least one frame
num_frames = max( 1, int( len( signal ) / t_len + .5 ) )
seg_signal = [ signal[ int( i * t_len ) : int( i * t_len ) + frame_len ]
for i in range( num_frames + 1 ) ]
#initializing list of candidates for HNR
best_cands = []
for index in range( len( seg_signal ) ):
segment = seg_signal[ index ]
#ignoring any potential empty segment
if len( segment) > 0:
window_len = len( segment ) / float( rate )
#calculating autocorrelation, based off steps 3.2-3.10
segment = segment - segment.mean()
local_peak = max( abs( segment ) )
if local_peak == 0 :
best_cands.append( .5 )
else:
intensity = local_peak / global_peak
window = np.hanning( len( segment ) )
segment *= window
N = len( segment )
nsampFFT = 2 ** int( np.log2( N ) + 1 )
window = np.hstack( ( window, np.zeros( nsampFFT - N ) ) )
segment = np.hstack( ( segment, np.zeros( nsampFFT - N ) ) )
x_fft = np.fft.fft( segment )
r_a = np.real( np.fft.fft( x_fft * np.conjugate( x_fft ) ) )
r_a = r_a[ : N ]
r_a = np.nan_to_num( r_a )
x_fft = np.fft.fft( window )
r_w = np.real( np.fft.fft( x_fft * np.conjugate( x_fft ) ) )
r_w = r_w[ : N ]
r_w = np.nan_to_num( r_w )
r_x = r_a / r_w
r_x /= r_x[ 0 ]
#creating an array of the points in time corresponding to the
#sampled autocorrelation of the signal ( r_x )
time_array = np.linspace( 0, window_len, len( r_x ) )
i = pu.indexes( r_x )
max_values, max_places = r_x[ i ], time_array[ i ]
max_place_poss = 1.0 / min_pitch
min_place_poss = 1.0 / max_pitch
max_values = max_values[ max_places >= min_place_poss ]
max_places = max_places[ max_places >= min_place_poss ]
max_values = max_values[ max_places <= max_place_poss ]
max_places = max_places[ max_places <= max_place_poss ]
for i in range( len( max_values ) ):
#reflecting values > 1 through 1.
if max_values[ i ] > 1.0 :
max_values[ i ] = 1.0 / max_values[ i ]
#eq. 23 and 24 with octave_cost, and voicing_threshold set to zero
if len( max_values ) > 0:
strengths = [ max( max_values ), max( 0, 2 - ( intensity /
( silence_threshold ) ) ) ]
#if the maximum strength is the unvoiced candidate, then .5
#corresponds to HNR of 0
if np.argmax( strengths ):
best_cands.append( 0.5 )
else:
best_cands.append( strengths[ 0 ] )
else:
best_cands.append( 0.5 )
best_cands = np.array( best_cands )
best_cands = best_cands[ best_cands > 0.5 ]
if len(best_cands) == 0:
return 0
#eq. 4
best_cands = 10.0 * np.log10( best_cands / ( 1.0 - best_cands ) )
best_candidate = np.mean( best_cands )
return best_candidate
def get_Pulses( signal, rate, min_pitch = 75, max_pitch = 600,
include_max = False, include_min = True ):
"""
Computes glottal pulses of a signal.
This algorithm relies on the voiced/unvoiced decisions and fundamental
frequencies, calculated for each voiced frame by get_F_0. For every voiced
interval, a list of points is created by finding the initial point
:math:`t_1`, which is the absolute extremum ( or the maximum/minimum,
depending on your include_max and include_min parameters ) of the amplitude
of the sound in the interval
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | true |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/myprosody/setup.py | features/audio_features/helpers/myprosody/setup.py | from setuptools import setup
long_description="""*** Version-10 release: two new functions were added ***
The two new functions deploy different Machine Learning algorithms to estimate the speakers' spoken
language proficiency levels (only the prosody aspect not semantically).
Prosody is the study of the tune and rhythm of speech and how these features contribute to meaning.
Prosody is the study of those aspects of speech that typically apply to a level above that of the individual
phoneme and very often to sequences of words (in prosodic phrases). Features above the level of the phoneme
(or "segment") are referred to as suprasegmentals.
A phonetic study of prosody is a study of the suprasegmental features of speech. At the phonetic level,
prosody is characterised by:
1. vocal pitch (fundamental frequency)
2. acoustic intensity
3. rhythm (phoneme and syllable duration)
MyProsody is a Python library for measuring these acoustic features of speech (simultaneous speech, high entropy)
compared to ones of native speech. The acoustic features of native speech patterns have been observed and
established by employing Machine Learning algorithms. An acoustic model (algorithm) breaks recorded utterances
(48 kHz & 32 bit sampling rate and bit depth respectively) and detects syllable boundaries, fundamental frequency
contours, and formants. Its built-in functions recognize/measures:
Average_syll_pause_duration
No._long_pause
Speaking-time
No._of_words_in_minutes
Articulation_rate
No._words_in_minutes
formants_index
f0_index ((f0 is for fundamental frequency)
f0_quantile_25_index
f0_quantile_50_index
f0_quantile_75_index
f0_std
f0_max
f0_min
No._detected_vowel
perc%._correct_vowel
(f2/f1)_mean (1st and 2nd formant frequencies)
(f2/f1)_std
no._of_words
no._of_pauses
intonation_index
(voiced_syll_count)/(no_of_pause)
TOEFL_Scale_Score
Score_Shannon_index
speaking_rate
gender recognition
speech mood (semantic analysis)
pronunciation posterior score
articulation-rate
speech rate
filler words
f0 statistics
-------------
NEW
--------------
level (CEFR level)
prosody-aspects (comparison, native level)
The library was developed based upon the idea introduced by Klaus Zechner et al
*Automatic scoring of non-native spontaneous speech in tests of spoken English* Speech Communicaion vol
51-2009, Nivja DeJong and Ton Wempe [1], Paul Boersma and David Weenink [2], Carlo Gussenhoven [3],
S.M Witt and S.J. Young [4] and Yannick Jadoul [5].
Peaks in intensity (dB) that are preceded and followed by dips in intensity are considered as potential
syllable cores.
MyProsody is unique in its aim to provide a complete quantitative and analytical way to study acoustic
features of a speech. Moreover, those features could be analysed further by employing Python's
functionality to provide more fascinating insights into speech patterns.
This library is for Linguists, scientists, developers, speech and language therapy clinics and researchers.
Please note that MyProsody Analysis is currently in initial state though in active development. While the
amount of functionality that is currently present is not huge, more will be added over the next few months.
Installation
=============
Myprosody can be installed like any other Python library, using (a recent version of) the Python package
manager pip, on Linux, macOS, and Windows:
pip install myprosody
or, to update your installed version to the latest release:
pip install -u myprosody
NOTE:
=============
After installing Myprosody, download the folder called:
myprosody
from https://github.com/Shahabks/myprosody
and save on your computer. The folder includes the audio files folder where you will save your audio files
for analysis.
Audio files must be in *.wav format, recorded at 48 kHz sample frame and 24-32 bits of resolution.
To check how the myprosody functions behave, please check
EXAMPLES.pdf
on https://github.com/Shahabks/myprosody
Myprosody was developed by MYOLUTIONS Lab in Japan. It is part of New Generation of Voice Recognition and Acoustic & Language modelling
Project in MYSOLUTIONS Lab. That is planned to enrich the functionality of Myprosody by adding more advanced
functions."""
setup(name='myprosody',
version='10',
description='NEW VERSION: the prosodic features of speech (simultaneous speech) compared to the features of native speech +++ spoken language proficiency level estimator',
long_description=long_description,
url='https://github.com/Shahabks/myprosody',
author='Shahab Sabahi',
author_email='sabahi.s@mysol-gc.jp',
license='MIT',
classifiers=[
'Intended Audience :: Developers',
'Intended Audience :: Science/Research',
'Programming Language :: Python',
'Programming Language :: Python :: 3.7',
],
keywords='praat speech signal processing phonetics',
install_requires=[
'numpy>=1.15.2',
'praat-parselmouth>=0.3.2',
'pandas>=0.23.4',
'scipy>=1.1.0',
'pickleshare>=0.7.5',
'scikit-learn>=0.20.2',
],
packages=['myprosody'],
zip_safe=False)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/myprosody/myprosody.py | features/audio_features/helpers/myprosody/myprosody.py | import parselmouth
from parselmouth.praat import call, run_file
import glob
import pandas as pd
import numpy as np
import scipy
from scipy.stats import binom
from scipy.stats import ks_2samp
from scipy.stats import ttest_ind
import os
def myspsyl(m,p):
sound=p+"/"+"dataset"+"/"+"audioFiles"+"/"+m+".wav"
sourcerun=p+"/"+"dataset"+"/"+"essen"+"/"+"myspsolution.praat"
path=p+"/"+"dataset"+"/"+"audioFiles"+"/"
try:
objects= run_file(sourcerun, -20, 2, 0.3, "yes",sound,path, 80, 400, 0.01, capture_output=True)
print (objects[0]) # This will print the info from the sound object, and objects[0] is a parselmouth.Sound object
z1=str( objects[1]) # This will print the info from the textgrid object, and objects[1] is a parselmouth.Data object with a TextGrid inside
z2=z1.strip().split()
z3=int(z2[0]) # will be the integer number 10
z4=float(z2[3]) # will be the floating point number 8.3
print ("number_ of_syllables=",z3)
except:
z3=0
print ("Try again the sound of the audio was not clear")
return;
def mysppaus(m,p):
sound=p+"/"+"dataset"+"/"+"audioFiles"+"/"+m+".wav"
sourcerun=p+"/"+"dataset"+"/"+"essen"+"/"+"myspsolution.praat"
path=p+"/"+"dataset"+"/"+"audioFiles"+"/"
try:
objects= run_file(sourcerun, -20, 2, 0.3, "yes",sound,path, 80, 400, 0.01, capture_output=True)
print (objects[0]) # This will print the info from the sound object, and objects[0] is a parselmouth.Sound object
z1=str( objects[1]) # This will print the info from the textgrid object, and objects[1] is a parselmouth.Data object with a TextGrid inside
z2=z1.strip().split()
z3=int(z2[1]) # will be the integer number 10
z4=float(z2[3]) # will be the floating point number 8.3
print ("number_of_pauses=",z3)
except:
z3=0
print ("Try again the sound of the audio was not clear")
return;
def myspsr(m,p):
sound=p+"/"+"dataset"+"/"+"audioFiles"+"/"+m+".wav"
sourcerun=p+"/"+"dataset"+"/"+"essen"+"/"+"myspsolution.praat"
path=p+"/"+"dataset"+"/"+"audioFiles"+"/"
try:
objects= run_file(sourcerun, -20, 2, 0.3, "yes",sound,path, 80, 400, 0.01, capture_output=True)
print (objects[0]) # This will print the info from the sound object, and objects[0] is a parselmouth.Sound object
z1=str( objects[1]) # This will print the info from the textgrid object, and objects[1] is a parselmouth.Data object with a TextGrid inside
z2=z1.strip().split()
z3=int(z2[2]) # will be the integer number 10
z4=float(z2[3]) # will be the floating point number 8.3
print ("rate_of_speech=",z3,"# syllables/sec original duration")
except:
z3=0
print ("Try again the sound of the audio was not clear")
return;
def myspatc(m,p):
sound=p+"/"+"dataset"+"/"+"audioFiles"+"/"+m+".wav"
sourcerun=p+"/"+"dataset"+"/"+"essen"+"/"+"myspsolution.praat"
path=p+"/"+"dataset"+"/"+"audioFiles"+"/"
try:
objects= run_file(sourcerun, -20, 2, 0.3, "yes",sound,path, 80, 400, 0.01, capture_output=True)
print (objects[0]) # This will print the info from the sound object, and objects[0] is a parselmouth.Sound object
z1=str( objects[1]) # This will print the info from the textgrid object, and objects[1] is a parselmouth.Data object with a TextGrid inside
z2=z1.strip().split()
z3=int(z2[3]) # will be the integer number 10
z4=float(z2[3]) # will be the floating point number 8.3
print ("articulation_rate=",z3,"# syllables/sec speaking duration")
except:
z3=0
print ("Try again the sound of the audio was not clear")
return;
def myspst(m,p):
sound=p+"/"+"dataset"+"/"+"audioFiles"+"/"+m+".wav"
sourcerun=p+"/"+"dataset"+"/"+"essen"+"/"+"myspsolution.praat"
path=p+"/"+"dataset"+"/"+"audioFiles"+"/"
try:
objects= run_file(sourcerun, -20, 2, 0.3, "yes",sound,path, 80, 400, 0.01, capture_output=True)
print (objects[0]) # This will print the info from the sound object, and objects[0] is a parselmouth.Sound object
z1=str( objects[1]) # This will print the info from the textgrid object, and objects[1] is a parselmouth.Data object with a TextGrid inside
z2=z1.strip().split()
z3=int(z2[3]) # will be the integer number 10
z4=float(z2[4]) # will be the floating point number 8.3
print ("speaking_duration=",z4,"# sec only speaking duration without pauses")
except:
z4=0
print ("Try again the sound of the audio was not clear")
return;
def myspod(m,p):
sound=p+"/"+"dataset"+"/"+"audioFiles"+"/"+m+".wav"
sourcerun=p+"/"+"dataset"+"/"+"essen"+"/"+"myspsolution.praat"
path=p+"/"+"dataset"+"/"+"audioFiles"+"/"
try:
objects= run_file(sourcerun, -20, 2, 0.3, "yes",sound,path, 80, 400, 0.01, capture_output=True)
print (objects[0]) # This will print the info from the sound object, and objects[0] is a parselmouth.Sound object
z1=str( objects[1]) # This will print the info from the textgrid object, and objects[1] is a parselmouth.Data object with a TextGrid inside
z2=z1.strip().split()
z3=int(z2[3]) # will be the integer number 10
z4=float(z2[5]) # will be the floating point number 8.3
print ("original_duration=",z4,"# sec total speaking duration with pauses")
except:
z4=0
print ("Try again the sound of the audio was not clear")
return;
def myspbala(m,p):
sound=p+"/"+"dataset"+"/"+"audioFiles"+"/"+m+".wav"
sourcerun=p+"/"+"dataset"+"/"+"essen"+"/"+"myspsolution.praat"
path=p+"/"+"dataset"+"/"+"audioFiles"+"/"
try:
objects= run_file(sourcerun, -20, 2, 0.3, "yes",sound,path, 80, 400, 0.01, capture_output=True)
print (objects[0]) # This will print the info from the sound object, and objects[0] is a parselmouth.Sound object
z1=str( objects[1]) # This will print the info from the textgrid object, and objects[1] is a parselmouth.Data object with a TextGrid inside
z2=z1.strip().split()
z3=int(z2[3]) # will be the integer number 10
z4=float(z2[6]) # will be the floating point number 8.3
print ("balance=",z4,"# ratio (speaking duration)/(original duration)")
except:
z4=0
print ("Try again the sound of the audio was not clear")
return;
def myspf0mean(m,p):
sound=p+"/"+"dataset"+"/"+"audioFiles"+"/"+m+".wav"
sourcerun=p+"/"+"dataset"+"/"+"essen"+"/"+"myspsolution.praat"
path=p+"/"+"dataset"+"/"+"audioFiles"+"/"
try:
objects= run_file(sourcerun, -20, 2, 0.3, "yes",sound,path, 80, 400, 0.01, capture_output=True)
print (objects[0]) # This will print the info from the sound object, and objects[0] is a parselmouth.Sound object
z1=str( objects[1]) # This will print the info from the textgrid object, and objects[1] is a parselmouth.Data object with a TextGrid inside
z2=z1.strip().split()
z3=int(z2[3]) # will be the integer number 10
z4=float(z2[7]) # will be the floating point number 8.3
print ("f0_mean=",z4,"# Hz global mean of fundamental frequency distribution")
except:
z4=0
print ("Try again the sound of the audio was not clear")
return;
def myspf0sd(m,p):
sound=p+"/"+"dataset"+"/"+"audioFiles"+"/"+m+".wav"
sourcerun=p+"/"+"dataset"+"/"+"essen"+"/"+"myspsolution.praat"
path=p+"/"+"dataset"+"/"+"audioFiles"+"/"
try:
objects= run_file(sourcerun, -20, 2, 0.3, "yes",sound,path, 80, 400, 0.01, capture_output=True)
print (objects[0]) # This will print the info from the sound object, and objects[0] is a parselmouth.Sound object
z1=str( objects[1]) # This will print the info from the textgrid object, and objects[1] is a parselmouth.Data object with a TextGrid inside
z2=z1.strip().split()
z3=int(z2[3]) # will be the integer number 10
z4=float(z2[8]) # will be the floating point number 8.3
print ("f0_SD=",z4,"# Hz global standard deviation of fundamental frequency distribution")
except:
z4=0
print ("Try again the sound of the audio was not clear")
return;
def myspf0med(m,p):
sound=p+"/"+"dataset"+"/"+"audioFiles"+"/"+m+".wav"
sourcerun=p+"/"+"dataset"+"/"+"essen"+"/"+"myspsolution.praat"
path=p+"/"+"dataset"+"/"+"audioFiles"+"/"
try:
objects= run_file(sourcerun, -20, 2, 0.3, "yes",sound,path, 80, 400, 0.01, capture_output=True)
print (objects[0]) # This will print the info from the sound object, and objects[0] is a parselmouth.Sound object
z1=str( objects[1]) # This will print the info from the textgrid object, and objects[1] is a parselmouth.Data object with a TextGrid inside
z2=z1.strip().split()
z3=int(z2[3]) # will be the integer number 10
z4=float(z2[9]) # will be the floating point number 8.3
print ("f0_MD=",z4,"# Hz global median of fundamental frequency distribution")
except:
z4=0
print ("Try again the sound of the audio was not clear")
return;
def myspf0min(m,p):
sound=p+"/"+"dataset"+"/"+"audioFiles"+"/"+m+".wav"
sourcerun=p+"/"+"dataset"+"/"+"essen"+"/"+"myspsolution.praat"
path=p+"/"+"dataset"+"/"+"audioFiles"+"/"
try:
objects= run_file(sourcerun, -20, 2, 0.3, "yes",sound,path, 80, 400, 0.01, capture_output=True)
print (objects[0]) # This will print the info from the sound object, and objects[0] is a parselmouth.Sound object
z1=str( objects[1]) # This will print the info from the textgrid object, and objects[1] is a parselmouth.Data object with a TextGrid inside
z2=z1.strip().split()
z3=int(z2[10]) # will be the integer number 10
z4=float(z2[10]) # will be the floating point number 8.3
print ("f0_min=",z3,"# Hz global minimum of fundamental frequency distribution")
except:
z3=0
print ("Try again the sound of the audio was not clear")
return;
def myspf0max(m,p):
sound=p+"/"+"dataset"+"/"+"audioFiles"+"/"+m+".wav"
sourcerun=p+"/"+"dataset"+"/"+"essen"+"/"+"myspsolution.praat"
path=p+"/"+"dataset"+"/"+"audioFiles"+"/"
try:
objects= run_file(sourcerun, -20, 2, 0.3, "yes",sound,path, 80, 400, 0.01, capture_output=True)
print (objects[0]) # This will print the info from the sound object, and objects[0] is a parselmouth.Sound object
z1=str( objects[1]) # This will print the info from the textgrid object, and objects[1] is a parselmouth.Data object with a TextGrid inside
z2=z1.strip().split()
z3=int(z2[11]) # will be the integer number 10
z4=float(z2[11]) # will be the floating point number 8.3
print ("f0_max=",z3,"# Hz global maximum of fundamental frequency distribution")
except:
z3=0
print ("Try again the sound of the audio was not clear")
return;
def myspf0q25(m,p):
sound=p+"/"+"dataset"+"/"+"audioFiles"+"/"+m+".wav"
sourcerun=p+"/"+"dataset"+"/"+"essen"+"/"+"myspsolution.praat"
path=p+"/"+"dataset"+"/"+"audioFiles"+"/"
try:
objects= run_file(sourcerun, -20, 2, 0.3, "yes",sound,path, 80, 400, 0.01, capture_output=True)
print (objects[0]) # This will print the info from the sound object, and objects[0] is a parselmouth.Sound object
z1=str( objects[1]) # This will print the info from the textgrid object, and objects[1] is a parselmouth.Data object with a TextGrid inside
z2=z1.strip().split()
z3=int(z2[12]) # will be the integer number 10
z4=float(z2[11]) # will be the floating point number 8.3
print ("f0_quan25=",z3,"# Hz global 25th quantile of fundamental frequency distribution")
except:
z3=0
print ("Try again the sound of the audio was not clear")
return;
def myspf0q75(m,p):
sound=p+"/"+"dataset"+"/"+"audioFiles"+"/"+m+".wav"
sourcerun=p+"/"+"dataset"+"/"+"essen"+"/"+"myspsolution.praat"
path=p+"/"+"dataset"+"/"+"audioFiles"+"/"
try:
objects= run_file(sourcerun, -20, 2, 0.3, "yes",sound,path, 80, 400, 0.01, capture_output=True)
print (objects[0]) # This will print the info from the sound object, and objects[0] is a parselmouth.Sound object
z1=str( objects[1]) # This will print the info from the textgrid object, and objects[1] is a parselmouth.Data object with a TextGrid inside
z2=z1.strip().split()
z3=int(z2[13]) # will be the integer number 10
z4=float(z2[11]) # will be the floating point number 8.3
print ("f0_quan75=",z3,"# Hz global 75th quantile of fundamental frequency distribution")
except:
z3=0
print ("Try again the sound of the audio was not clear")
return;
def mysptotal(m,p):
sound=p+"/"+"dataset"+"/"+"audioFiles"+"/"+m+".wav"
sourcerun=p+"/"+"dataset"+"/"+"essen"+"/"+"myspsolution.praat"
path=p+"/"+"dataset"+"/"+"audioFiles"+"/"
try:
objects= run_file(sourcerun, -20, 2, 0.3, "yes",sound,path, 80, 400, 0.01, capture_output=True)
print (objects[0]) # This will print the info from the sound object, and objects[0] is a parselmouth.Sound object
z1=str( objects[1]) # This will print the info from the textgrid object, and objects[1] is a parselmouth.Data object with a TextGrid inside
z2=z1.strip().split()
z3=np.array(z2)
z4=np.array(z3)[np.newaxis]
z5=z4.T
dataset=pd.DataFrame({"number_ of_syllables":z5[0,:],"number_of_pauses":z5[1,:],"rate_of_speech":z5[2,:],"articulation_rate":z5[3,:],"speaking_duration":z5[4,:],
"original_duration":z5[5,:],"balance":z5[6,:],"f0_mean":z5[7,:],"f0_std":z5[8,:],"f0_median":z5[9,:],"f0_min":z5[10,:],"f0_max":z5[11,:],
"f0_quantile25":z5[12,:],"f0_quan75":z5[13,:]})
print (dataset.T)
except:
print ("Try again the sound of the audio was not clear")
return;
def mysppron(m,p):
sound=p+"/"+"dataset"+"/"+"audioFiles"+"/"+m+".wav"
sourcerun=p+"/"+"dataset"+"/"+"essen"+"/"+"myspsolution.praat"
path=p+"/"+"dataset"+"/"+"audioFiles"+"/"
try:
objects= run_file(sourcerun, -20, 2, 0.3, "yes",sound,path, 80, 400, 0.01, capture_output=True)
print (objects[0]) # This will print the info from the sound object, and objects[0] is a parselmouth.Sound object
z1=str( objects[1]) # This will print the info from the textgrid object, and objects[1] is a parselmouth.Data object with a TextGrid inside
z2=z1.strip().split()
z3=int(z2[13]) # will be the integer number 10
z4=float(z2[14]) # will be the floating point number 8.3
db= binom.rvs(n=10,p=z4,size=10000)
a=np.array(db)
b=np.mean(a)*100/10
print ("Pronunciation_posteriori_probability_score_percentage= :%.2f" % (b))
except:
print ("Try again the sound of the audio was not clear")
return;
def myspgend(m,p):
sound=p+"/"+"dataset"+"/"+"audioFiles"+"/"+m+".wav"
sourcerun=p+"/"+"dataset"+"/"+"essen"+"/"+"myspsolution.praat"
path=p+"/"+"dataset"+"/"+"audioFiles"+"/"
try:
objects= run_file(sourcerun, -20, 2, 0.3, "yes",sound,path, 80, 400, 0.01, capture_output=True)
print (objects[0]) # This will print the info from the sound object, and objects[0] is a parselmouth.Sound object
z1=str( objects[1]) # This will print the info from the textgrid object, and objects[1] is a parselmouth.Data object with a TextGrid inside
z2=z1.strip().split()
z3=float(z2[8]) # will be the integer number 10
z4=float(z2[7]) # will be the floating point number 8.3
if z4<=114:
g=101
j=3.4
elif z4>114 and z4<=135:
g=128
j=4.35
elif z4>135 and z4<=163:
g=142
j=4.85
elif z4>163 and z4<=197:
g=182
j=2.7
elif z4>197 and z4<=226:
g=213
j=4.5
elif z4>226:
g=239
j=5.3
else:
print("Voice not recognized")
exit()
def teset(a,b,c,d):
d1=np.random.wald(a, 1, 1000)
d2=np.random.wald(b,1,1000)
d3=ks_2samp(d1, d2)
c1=np.random.normal(a,c,1000)
c2=np.random.normal(b,d,1000)
c3=ttest_ind(c1,c2)
y=([d3[0],d3[1],abs(c3[0]),c3[1]])
return y
nn=0
mm=teset(g,j,z4,z3)
while (mm[3]>0.05 and mm[0]>0.04 or nn<5):
mm=teset(g,j,z4,z3)
nn=nn+1
nnn=nn
if mm[3]<=0.09:
mmm=mm[3]
else:
mmm=0.35
if z4>97 and z4<=114:
print("a Male, mood of speech: Showing no emotion, normal, p-value/sample size= :%.2f" % (mmm), (nnn))
elif z4>114 and z4<=135:
print("a Male, mood of speech: Reading, p-value/sample size= :%.2f" % (mmm), (nnn))
elif z4>135 and z4<=163:
print("a Male, mood of speech: speaking passionately, p-value/sample size= :%.2f" % (mmm), (nnn))
elif z4>163 and z4<=197:
print("a female, mood of speech: Showing no emotion, normal, p-value/sample size= :%.2f" % (mmm), (nnn))
elif z4>197 and z4<=226:
print("a female, mood of speech: Reading, p-value/sample size= :%.2f" % (mmm), (nnn))
elif z4>226 and z4<=245:
print("a female, mood of speech: speaking passionately, p-value/sample size= :%.2f" % (mmm), (nnn))
else:
print("Voice not recognized")
except:
print ("Try again the sound of the audio was not clear")
def myprosody(m,p):
sound=p+"/"+"dataset"+"/"+"audioFiles"+"/"+m+".wav"
sourcerun=p+"/"+"dataset"+"/"+"essen"+"/"+"MLTRNL.praat"
path=p+"/"+"dataset"+"/"+"audioFiles"+"/"
outo=p+"/"+"dataset"+"/"+"datanewchi22.csv"
outst=p+"/"+"dataset"+"/"+"datanewchi44.csv"
outsy=p+"/"+"dataset"+"/"+"datanewchi33.csv"
pa2=p+"/"+"dataset"+"/"+"stats.csv"
pa7=p+"/"+"dataset"+"/"+"datanewchi44.csv"
result_array = np.empty((0, 100))
files = glob.glob(path)
result_array = np.empty((0, 27))
try:
objects= run_file(sourcerun, -20, 2, 0.3, "yes",sound,path, 80, 400, 0.01, capture_output=True)
z1=( objects[1]) # This will print the info from the textgrid object, and objects[1] is a parselmouth.Data object with a TextGrid inside
z3=z1.strip().split()
z2=np.array([z3])
result_array=np.append(result_array,[z3], axis=0)
#print(z3)
np.savetxt(outo,result_array, fmt='%s',delimiter=',')
#Data and features analysis
df = pd.read_csv(outo,
names = ['avepauseduratin','avelongpause','speakingtot','avenumberofwords','articulationrate','inpro','f1norm','mr','q25',
'q50','q75','std','fmax','fmin','vowelinx1','vowelinx2','formantmean','formantstd','nuofwrds','npause','ins',
'fillerratio','xx','xxx','totsco','xxban','speakingrate'],na_values='?')
scoreMLdataset=df.drop(['xxx','xxban'], axis=1)
scoreMLdataset.to_csv(outst, header=False,index = False)
newMLdataset=df.drop(['avenumberofwords','f1norm','inpro','q25','q75','vowelinx1','nuofwrds','npause','xx','totsco','xxban','speakingrate','fillerratio'], axis=1)
newMLdataset.to_csv(outsy, header=False,index = False)
namess=nms = ['avepauseduratin','avelongpause','speakingtot','articulationrate','mr',
'q50','std','fmax','fmin','vowelinx2','formantmean','formantstd','ins',
'xxx']
df1 = pd.read_csv(outsy, names = namess)
nsns=['average_syll_pause_duration','No._long_pause','speaking_time','ave_No._of_words_in_minutes','articulation_rate','No._words_in_minutes','formants_index','f0_index','f0_quantile_25_index',
'f0_quantile_50_index','f0_quantile_75_index','f0_std','f0_max','f0_min','No._detected_vowel','perc%._correct_vowel','(f2/f1)_mean','(f2/f1)_std',
'no._of_words','no._of_pauses','intonation_index',
'(voiced_syll_count)/(no_of_pause)','TOEFL_Scale_Score','Score_Shannon_index','speaking_rate']
dataframe = pd.read_csv(pa2)
df55 = pd.read_csv(outst,names=nsns)
dataframe=dataframe.values
array = df55.values
print("Compared to native speech, here are the prosodic features of your speech:")
for i in range(25):
sl0=dataframe[4:7:1,i+1]
score = array[0,i]
he=scipy.stats.percentileofscore(sl0, score, kind='strict')
if he==0:
he=25
dfout = "%s:\t %f (%s)" % (nsns[i],he,"% percentile ")
print(dfout)
elif he>=25 and he<=75:
dfout = "%s:\t %f (%s)" % (nsns[i],he,"% percentile ")
print(dfout)
else:
dfout = "%s:\t (%s)" % (nsns[i],":Out of Range")
print(dfout)
except:
print ("Try again the sound of the audio was not clear")
def mysplev(m,p):
import sys
def my_except_hook(exctype, value, traceback):
print('There has been an error in the system')
sys.excepthook = my_except_hook
import warnings
if not sys.warnoptions:
warnings.simplefilter("ignore")
sound=p+"/"+"dataset"+"/"+"audioFiles"+"/"+m+".wav"
sourcerun=p+"/"+"dataset"+"/"+"essen"+"/"+"MLTRNL.praat"
path=p+"/"+"dataset"+"/"+"audioFiles"+"/"
pa1=p+"/"+"dataset"+"/"+"datanewchi23.csv"
pa7=p+"/"+"dataset"+"/"+"datanewchi45.csv"
pa5=p+"/"+"dataset"+"/"+"datanewchi34.csv"
result_array = np.empty((0, 100))
ph = sound
files = glob.glob(ph)
result_array = np.empty((0, 27))
try:
for soundi in files:
objects= run_file(sourcerun, -20, 2, 0.3, "yes", soundi, path, 80, 400, 0.01, capture_output=True)
#print (objects[0]) # This will print the info from the sound object, and objects[0] is a parselmouth.Sound object
z1=( objects[1]) # This will print the info from the textgrid object, and objects[1] is a parselmouth.Data object with a TextGrid inside
z3=z1.strip().split()
z2=np.array([z3])
result_array=np.append(result_array,[z3], axis=0)
np.savetxt(pa1,result_array, fmt='%s',delimiter=',')
#Data and features analysis
df = pd.read_csv(pa1, names = ['avepauseduratin','avelongpause','speakingtot','avenumberofwords','articulationrate','inpro','f1norm','mr','q25',
'q50','q75','std','fmax','fmin','vowelinx1','vowelinx2','formantmean','formantstd','nuofwrds','npause','ins',
'fillerratio','xx','xxx','totsco','xxban','speakingrate'],na_values='?')
scoreMLdataset=df.drop(['xxx','xxban'], axis=1)
scoreMLdataset.to_csv(pa7, header=False,index = False)
newMLdataset=df.drop(['avenumberofwords','f1norm','inpro','q25','q75','vowelinx1','nuofwrds','npause','xx','totsco','xxban','speakingrate','fillerratio'], axis=1)
newMLdataset.to_csv(pa5, header=False,index = False)
namess=nms = ['avepauseduratin','avelongpause','speakingtot','articulationrate','mr',
'q50','std','fmax','fmin','vowelinx2','formantmean','formantstd','ins',
'xxx']
df1 = pd.read_csv(pa5,
names = namess)
df33=df1.drop(['xxx'], axis=1)
array = df33.values
array=np.log(array)
x = array[:,0:13]
def myspp(bp,bg):
sound=bg+"/"+"dataset"+"/"+"audioFiles"+"/"+bp+".wav"
sourcerun=bg+"/"+"dataset"+"/"+"essen"+"/"+"myspsolution.praat"
path=bg+"/"+"dataset"+"/"+"audioFiles"+"/"
objects= run_file(sourcerun, -20, 2, 0.3, "yes",sound,path, 80, 400, 0.01, capture_output=True)
print (objects[0]) # This will print the info from the sound object, and objects[0] is a parselmouth.Sound object
z1=str( objects[1]) # This will print the info from the textgrid object, and objects[1] is a parselmouth.Data object with a TextGrid inside
z2=z1.strip().split()
z3=int(z2[13]) # will be the integer number 10
z4=float(z2[14]) # will be the floating point number 8.3
db= binom.rvs(n=10,p=z4,size=10000)
a=np.array(db)
b=np.mean(a)*100/10
return b
bp=m
bg=p
bi=myspp(bp,bg)
if bi<85:
input("Try again, unnatural-sounding speech detected. No further result. Press any key to exit.")
exit()
filename=p+"/"+"dataset"+"/"+"essen"+"/"+"CART_model.sav"
model = pickle.load(open(filename, 'rb'))
predictions = model.predict(x)
print("58% accuracy ",predictions)
filename=p+"/"+"dataset"+"/"+"essen"+"/"+"KNN_model.sav"
model = pickle.load(open(filename, 'rb'))
predictions = model.predict(x)
print("65% accuracy ",predictions)
filename=p+"/"+"dataset"+"/"+"essen"+"/"+"LDA_model.sav"
model = pickle.load(open(filename, 'rb'))
predictions = model.predict(x)
print("70% accuracy ",predictions)
filename=p+"/"+"dataset"+"/"+"essen"+"/"+"LR_model.sav"
model = pickle.load(open(filename, 'rb'))
predictions = model.predict(x)
print("67% accuracy ",predictions)
filename=p+"/"+"dataset"+"/"+"essen"+"/"+"NB_model.sav"
model = pickle.load(open(filename, 'rb'))
predictions = model.predict(x)
print("64% accuracy ",predictions)
filename=p+"/"+"dataset"+"/"+"essen"+"/"+"SVN_model.sav"
model = pickle.load(open(filename, 'rb'))
predictions = model.predict(x)
print("63% accuracy ",predictions)
except:
print ("Try again the sound of the audio was not clear")
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/surfboard/setup.py | features/audio_features/helpers/surfboard/setup.py | from setuptools import setup
setup(
name="surfboard",
version="0.2.0",
description="Novoic's audio feature extraction library https://novoic.com",
url="http://github.com/novoic/surfboard",
author="Raphael Lenain",
author_email="raphael@novoic.com",
license="GPL-3.0",
packages=["surfboard"],
keywords=[
"feature-extraction",
"audio",
"machine-learning",
"audio-processing",
"python",
"speech-processing",
"healthcare",
"signal-processing",
"alzheimers-disease",
"parkinsons-disease",
],
download_url="https://github.com/novoic/surfboard/archive/v0.2.0.tar.gz",
install_requires=[
"librosa>=0.7.2",
"numba==0.48.0", # Needed until Librosa deploys fix to mute warnings.
"pysptk>=0.1.18",
"PeakUtils>=1.3.3",
"pyloudnorm==0.1.0",
"pandas>=1.0.1",
"tqdm>=4.42.1",
"pyyaml>=5.3",
"Cython>=0.29.15",
"pytest>=5.4.1",
"SoundFile>=0.10.3.post1",
],
scripts=['bin/surfboard'],
zip_safe=False,
classifiers=[
'Development Status :: 4 - Beta', # Chose either "3 - Alpha", "4 - Beta" or "5 - Production/Stable" as the current state of your package
'Intended Audience :: Developers', # Define that your audience are developers
'Topic :: Software Development :: Build Tools',
'License :: OSI Approved :: GNU General Public License v3 (GPLv3)', # Again, pick a license
'Programming Language :: Python :: 3', #Specify which pyhton versions that you want to support
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
],
)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/surfboard/docs/conf.py | features/audio_features/helpers/surfboard/docs/conf.py | # Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
from unittest.mock import MagicMock
sys.path.insert(0, os.path.abspath('.'))
# Mock module to bypass pip install
class Mock(MagicMock):
@classmethod
def __getattr__(cls, name):
return MagicMock()
MOCK_MODULES = [
'librosa', 'librosa.display', 'librosa.core',
]
sys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)
master_doc = 'index'
# Needed to not sort alphabetically.
autodoc_member_order = 'bysource'
# -- Project information -----------------------------------------------------
project = 'Surfboard'
copyright = '2020, Raphael Lenain'
author = 'Raphael Lenain'
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.coverage', 'sphinx.ext.napoleon'
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/surfboard/surfboard/feature_extraction_multiprocessing.py | features/audio_features/helpers/surfboard/surfboard/feature_extraction_multiprocessing.py | #!/usr/bin/env python
"""This file contains functions to compute features with multiprocessing."""
from multiprocessing import Pool
from functools import partial
import pandas as pd
from tqdm import tqdm
from .feature_extraction import (
extract_features_from_waveform,
)
from .sound import Waveform
def load_waveform_from_path(sample_rate, path):
"""Helper function to access constructor with Pool
Args:
sample_rate (int): The sample rate to load the Waveform object
path (str): The path to the audio file to load
Returns:
Waveform: The loaded Waveform object
"""
return Waveform(path=path, sample_rate=sample_rate)
def load_waveforms_from_paths(paths, sample_rate, num_proc=1):
"""Loads waveforms from paths using multiprocessing
Args:
paths (list of str): A list of paths to audio files
sample_rate (int): The sample rate to load the audio files
num_proc (int >= 1): The number of parallel processes to run
Returns:
list of Waveform: List of loaded Waveform objects
"""
assert (num_proc > 0 and isinstance(num_proc, int)), 'The number of parallel \
processes should be a >= 1 integer.'
load_helper = partial(load_waveform_from_path, sample_rate)
with Pool(num_proc) as pool:
waveforms_iter = tqdm(
pool.imap(load_helper, paths), total=len(paths), desc='Loading waveforms...'
)
# Converting to list runs the iterator.
output_waveforms = list(waveforms_iter)
return output_waveforms
def extract_features_from_path(components_list, statistics_list, sample_rate, path):
"""Function which loads a waveform, computes the components and statistics and returns them,
without the need to store the waveforms in memory. This is to prevent accumulating too
much memory.
Args:
components_list (list of str/dict): This is a list of the methods which
should be applied to all the waveform objects in waveforms. If a dict,
this also contains arguments to the sound.Waveform methods.
statistics_list (list of str): This is a list of the methods which
should be applied to all the "time-dependent" features computed
from the waveforms.
sample_rate (int > 0): sampling rate to load the waveforms
path (str): path to audio file to extract features from
Returns:
dict: Dictionary mapping feature names to values.
"""
try:
wave = Waveform(path=path, sample_rate=sample_rate)
feats = extract_features_from_waveform(components_list, statistics_list, wave)
return feats
except Exception as extraction_exception:
print(f'Found exception "{extraction_exception}". Skipping {path}')
return {}
except:
print(f'Unknown error. Skipping {path}')
return {}
def extract_features_from_paths(paths, components_list, statistics_list=None, sample_rate=44100, num_proc=1):
"""Function which loads waveforms, computes the features and statistics and returns them,
without the need to store the waveforms in memory. This is to prevent accumulating too
much memory.
Args:
paths (list of str): .wav to compute
components_list (list of str or dict): This is a list of the methods which
should be applied to all the waveform objects in waveforms. If a dict,
this also contains arguments to the sound.Waveform methods.
statistics_list (list of str): This is a list of the methods which
should be applied to all the "time-dependent" features computed
from the waveforms.
sample_rate (int > 0): sampling rate to load the waveforms
Returns:
pandas DataFrame: pandas dataframe where every row corresponds
to features extracted for one of the waveforms and columns
represent individual features.
"""
extractor_helper = partial(
extract_features_from_path, components_list, statistics_list, sample_rate
)
with Pool(num_proc) as pool:
output_feats_iter = tqdm(
pool.imap(extractor_helper, paths), total=len(paths),
desc='Extracting features from paths...'
)
# Converting to list runs the iterator.
output_feats = list(output_feats_iter)
output_df = pd.DataFrame(output_feats)
# Ensure the output DataFrame has the same length as input paths. That way, we can
# guarantee that the names correspond to the correct rows.
assert len(output_df) == len(paths), "Output DataFrame does not have same length as \
input list of paths."
return output_df
def extract_features(waveforms, components_list, statistics_list=None, num_proc=1):
"""This is an important function. Given a list of Waveform objects, a list of
Waveform methods in the form of strings and a list of Barrel methods in the
form of strings, compute the time-independent features resulting. This function
does multiprocessing.
Args:
waveforms (list of Waveform): This is a list of waveform objects
components_list (list of str or dict): This is a list of the methods which
should be applied to all the waveform objects in waveforms. If a dict,
this also contains arguments to the sound.Waveform methods.
statistics_list (list of str): This is a list of the methods which
should be applied to all the "time-dependent" features computed
from the waveforms.
num_proc (int >= 1): The number of parallel processes to run
Returns:
pandas DataFrame: pandas dataframe where every row corresponds
to features extracted for one of the waveforms and columns
represent individual features.
"""
extractor_helper = partial(
extract_features_from_waveform, components_list, statistics_list
)
with Pool(num_proc) as pool:
output_feats_iter = tqdm(
pool.imap(extractor_helper, waveforms), total=len(waveforms),
desc='Extracting features...'
)
# Converting to list runs the iterator.
output_feats = list(output_feats_iter)
return pd.DataFrame(output_feats)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/surfboard/surfboard/sound.py | features/audio_features/helpers/surfboard/surfboard/sound.py | #!/usr/bin/env python
"""This file contains the central Waveform class of the surfboard package, and all the corresponding methods"""
import librosa
from scipy.signal import cwt, morlet
import numpy as np
from . import (
jitters,
shimmers,
formants,
spectrum,
hnr,
dfa,
)
from .misc_components import (
get_shannon_entropy,
get_shannon_entropy_slidingwindow,
get_loudness,
get_loudness_slidingwindow,
get_kurtosis_slidingwindow,
get_ppe,
get_f0,
get_crest_factor,
get_log_energy,
get_log_energy_slidingwindow,
get_bark_spectrogram,
)
from .utils import (
numseconds_to_numsamples,
lpc_to_lsf,
parse_component,
)
class Waveform:
"""
The central class of the package. This class instantiates with a path to a sound file and
a sample rate to load it or a signal and a sample rate. We can then use methods of this class to
compute various components.
"""
def __init__(self, path=None, signal=None, sample_rate=44100):
"""
Instantiate an object of this class. This loads the audio into a (T,) np.array: self.waveform.
Args:
path (str): Path to a sound file (eg .wav or .mp3).
sample_rate (int): Sample rate used to load the sound file.
OR:
signal (np.array, [T, ]): Waveform signal.
sample_rate (int): Sample rate of the waveform.
"""
if signal is None:
assert isinstance(path, str), "The path argument to the constructor must be a string."
if path is None:
assert isinstance(signal, np.ndarray), "The signal argument to the constructor must be a np.array."
assert len(signal.shape) == 1, "The signal argument to the constructor must be a 1D [T, ] array."
if (signal is not None) and (path is not None):
raise ValueError("Cannot give both a path to a sound file and a signal. Take your pick!")
assert isinstance(sample_rate, int), "The sample_rate argument to the constructor must be an integer."
if path is not None:
self._waveform = librosa.core.load(path, sr=sample_rate)[0]
else:
self._waveform = signal
assert self.waveform.shape[0] > 1, "Your waveform must have more than one element."
self._sample_rate = sample_rate
# Make the variables instantiated in __init__ as properties: this hides them behind a "shadow attribute".
@property
def waveform(self):
"""Properties written in this way prevent users to assign to self.waveform"""
return self._waveform
@property
def sample_rate(self):
"""Properties written in this way prevent users to assign to self.sample_rate"""
return self._sample_rate
def compute_components(self, component_list):
"""
Compute components from self.waveform and self.sample_rate using a list of strings
which identify which components to compute. You can pass in arguments to the
components (e.g. frame_length_seconds) by passing in the components as dictionaries.
For example: {'mfcc': {'n_mfcc': 26}}. See README.md for more details.
Args:
component_list (list of str or dict): The methods to be computed.
If elements are str, then the method uses default arguments.
If dict, the arguments are passed to the methods.
Returns:
dict: Dictionary mapping component names to computed components.
"""
components = {}
for component in component_list:
component_name, arguments = parse_component(component)
try:
method_to_call = getattr(self, component_name)
if arguments is not None:
result = method_to_call(**arguments)
else:
result = method_to_call()
# Set result as None so as not to skip an entire Waveform object.
except AttributeError:
raise NotImplementedError(f'The component {component_name} does not exist.')
except:
result = float("nan")
components[component_name] = result
return components
def mfcc(self, n_mfcc=13, n_fft_seconds=0.04, hop_length_seconds=0.01):
"""
Given a number of MFCCs, use the librosa.feature.mfcc method to compute the correct
number of MFCCs on self.waveform and returns the array.
Args:
n_mfcc (int): number of MFCCs to compute
n_fft_seconds (float): length of the FFT window in seconds.
hop_length_seconds (float): how much the window shifts for every timestep,
in seconds.
Returns:
np.array, [n_mfcc, T / hop_length]: MFCCs.
"""
n_fft = numseconds_to_numsamples(n_fft_seconds, self.sample_rate)
hop_length = numseconds_to_numsamples(hop_length_seconds, self.sample_rate)
return librosa.feature.mfcc(
self.waveform, sr=self.sample_rate, n_mfcc=n_mfcc, n_fft=n_fft, hop_length=hop_length,
)
def log_melspec(self, n_mels=128, n_fft_seconds=0.04, hop_length_seconds=0.01):
"""Given a number of filter banks, this uses the librosa.feature.melspectrogram method to
compute the log melspectrogram of self.waveform.
Args:
n_mels (int): Number of filter banks per time step in the log melspectrogram.
n_fft_seconds (float): Length of the FFT window in seconds.
hop_length_seconds (float): How much the window shifts for every timestep, in seconds.
Returns:
np.array, [n_mels, T_mels]: Log mel spectrogram.
"""
n_fft = numseconds_to_numsamples(n_fft_seconds, self.sample_rate)
hop_length = numseconds_to_numsamples(hop_length_seconds, self.sample_rate)
melspec = librosa.feature.melspectrogram(
self.waveform, sr=self.sample_rate, n_mels=n_mels, n_fft=n_fft, hop_length=hop_length,
)
return librosa.power_to_db(melspec, ref=np.max)
def magnitude_spectrum(self, n_fft_seconds=0.04, hop_length_seconds=0.01):
"""Compute the STFT of self.waveform. This is used for further spectral analysis.
Args:
n_fft_seconds (float): Length of the FFT window in seconds.
hop_length_seconds (float): How much the window shifts for every timestep, in seconds.
Returns:
np.array, [n_fft / 2 + 1, T / hop_length]: The magnitude spectrogram
"""
n_fft = numseconds_to_numsamples(n_fft_seconds, self.sample_rate)
hop_length = numseconds_to_numsamples(hop_length_seconds, self.sample_rate)
mag_spectrum, _ = librosa.core.spectrum._spectrogram(self.waveform, n_fft=n_fft, hop_length=hop_length)
return mag_spectrum
def bark_spectrogram(self, n_fft_seconds=0.04, hop_length_seconds=0.01):
"""Compute the magnitude spectrum of self.waveform and arrange the frequency bins
in the Bark scale. See https://en.wikipedia.org/wiki/Bark_scale
Args:
n_fft_seconds (float): Length of the FFT window in seconds.
hop_length_seconds (float): How much the window shifts for every timestep, in seconds.
Returns:
np.array, [n_bark_bands, T / hop_length]: The Bark spectrogram
"""
return get_bark_spectrogram(
self.waveform, self.sample_rate, n_fft_seconds, hop_length_seconds,
)
def morlet_cwt(self, widths=None):
"""Compute the Morlet Continuous Wavelet Transform of self.waveform. Note that this
method returns a large matrix. Shown relevant in Vasquez-Correa et Al, 2016.
Args:
wavelet (str): Wavelet to use. Currently only support "morlet".
widhts (None or list): If None, uses default of 32 evenly spaced widths
as [i * sample_rate / 500 for i in range(1, 33)]
Returns:
np.array, [len(widths), T]: The continuous wavelet transform
"""
# This comes from having [32, 64, ..., 1024] at 16 kHz.
if widths is None:
widths = [int(i * self.sample_rate / 500) for i in range(1, 33)]
# Take absolute value because the output of cwt is complex for morlet wavelet.
return np.abs(cwt(self.waveform, morlet, widths))
def chroma_stft(self, n_fft_seconds=0.04, hop_length_seconds=0.01, n_chroma=12):
"""See librosa.feature documentation for more details on this component. This computes
a chromagram from a waveform.
Args:
n_fft_seconds (float): Length of the FFT window in seconds.
hop_length_seconds (float): How much the window shifts for every timestep,
in seconds.
n_chroma (int): Number of chroma bins to compute.
Returns:
np.array, [n_chroma, T / hop_length]: The chromagram
"""
n_fft = numseconds_to_numsamples(n_fft_seconds, self.sample_rate)
hop_length = numseconds_to_numsamples(hop_length_seconds, self.sample_rate)
assert isinstance(n_chroma, int) and n_chroma > 0, "n_chroma must be a >0 integer."
return librosa.feature.chroma_stft(
y=self.waveform, sr=self.sample_rate, n_fft=n_fft, hop_length=hop_length, n_chroma=n_chroma,
)
def chroma_cqt(self, hop_length_seconds=0.01, n_chroma=12):
"""See librosa.feature documentation for more details on this component. This computes
a constant-Q chromagram from a waveform.
Args:
hop_length_seconds (float): How much the window shifts for every timestep,
in seconds.
n_chroma (int): Number of chroma bins to compute.
Returns:
np.array, [n_chroma, T / hop_length]: Constant-Q transform mode
"""
hop_length = numseconds_to_numsamples(hop_length_seconds, self.sample_rate)
assert isinstance(n_chroma, int) and n_chroma > 0, "n_chroma must be a >0 integer."
return librosa.feature.chroma_cqt(
y=self.waveform, sr=self.sample_rate, hop_length=hop_length, n_chroma=n_chroma,
)
def chroma_cens(self, hop_length_seconds=0.01, n_chroma=12):
"""See librosa.feature documentation for more details on this component. This computes
the CENS chroma variant from a waveform.
Args:
hop_length_seconds (float): How much the window shifts for every timestep,
in seconds.
n_chroma (int): Number of chroma bins to compute.
Returns:
np.array, [n_chroma, T / hop_length]: CENS-chromagram
"""
hop_length = numseconds_to_numsamples(hop_length_seconds, self.sample_rate)
assert isinstance(n_chroma, int) and n_chroma > 0, "n_chroma must be a >0 integer."
return librosa.feature.chroma_cens(
y=self.waveform, sr=self.sample_rate, hop_length=hop_length, n_chroma=n_chroma,
)
def spectral_slope(self, n_fft_seconds=0.04, hop_length_seconds=0.01):
"""Compute the magnitude spectrum, and compute the spectral slope from that. This is a
basic approximation of the spectrum by a linear regression line. There is one coefficient
per timestep.
Args:
n_fft_seconds (float): Length of the FFT window in seconds.
hop_length_seconds (float): How much the window shifts for every timestep,
in seconds.
Returns:
np.array, [1, T / hop_length]: Linear regression slope, for every timestep.
"""
magnitude_spectrum = self.magnitude_spectrum(n_fft_seconds, hop_length_seconds)
return spectrum.get_spectral_slope(magnitude_spectrum, self.sample_rate)
def spectral_flux(self, n_fft_seconds=0.04, hop_length_seconds=0.01):
"""Compute the magnitude spectrum, and compute the spectral flux from that. This is a
basic metric, measuring the rate of change of the spectrum.
Args:
n_fft_seconds (float): Length of the FFT window in seconds.
hop_length_seconds (float): How much the window shifts for every timestep,
in seconds.
Returns:
np.array, [1, T / hop_length]: The spectral flux array.
"""
magnitude_spectrum = self.magnitude_spectrum(n_fft_seconds, hop_length_seconds)
return spectrum.get_spectral_flux(magnitude_spectrum, self.sample_rate)
def spectral_entropy(self, n_fft_seconds=0.04, hop_length_seconds=0.01):
"""Compute the magnitude spectrum, and compute the spectral entropy from that. To compute
that, simply normalize each frame of the spectrum, so that they are a probability
distribution, then compute the entropy from that.
Args:
n_fft_seconds (float): Length of the FFT window in seconds.
hop_length_seconds (float): How much the window shifts for every timestep,
in seconds.
Returns:
np.array, [1, T / hop_length]: The entropy of each normalized frame.
"""
# [n_frequency_bins, T / hop_length]
magnitude_spectrum = self.magnitude_spectrum(n_fft_seconds, hop_length_seconds)
# Normalize at each time frame. Compute the mean over the first dimension (i.e. sum of
# each column).
col_sums = magnitude_spectrum.sum(axis=0)
normalized_spectrum = magnitude_spectrum / col_sums[np.newaxis, :]
return (- normalized_spectrum * np.log(normalized_spectrum + 1e-9)).sum(0)[np.newaxis, :]
def spectral_centroid(self, n_fft_seconds=0.04, hop_length_seconds=0.01):
"""Compute spectral centroid from magnitude spectrum. "First moment".
Args:
n_fft_seconds (float): Length of the FFT window in seconds.
hop_length_seconds (float): How much the window shifts for every timestep,
in seconds.
Returns:
np.array, [1, T / hop_length]: Spectral centroid of the magnitude spectrum (first moment).
"""
magnitude_spectrum = self.magnitude_spectrum(n_fft_seconds, hop_length_seconds)
return spectrum.get_spectral_centroid(magnitude_spectrum, self.sample_rate)
def spectral_spread(self, n_fft_seconds=0.04, hop_length_seconds=0.01):
"""Compute spectral spread (also spectral variance) from magnitude spectrum.
Args:
n_fft_seconds (float): Length of the FFT window in seconds.
hop_length_seconds (float): How much the window shifts for every timestep,
in seconds.
Returns:
np.array, [1, T / hop_length: Spectral skewness of the magnitude spectrum (second moment).
"""
magnitude_spectrum = self.magnitude_spectrum(n_fft_seconds, hop_length_seconds)
return spectrum.get_spectral_spread(magnitude_spectrum, self.sample_rate)
def spectral_skewness(self, n_fft_seconds=0.04, hop_length_seconds=0.01):
"""Compute spectral skewness from magnitude spectrum.
Args:
n_fft_seconds (float): Length of the FFT window in seconds.
hop_length_seconds (float): How much the window shifts for every timestep,
in seconds.
Returns:
np.array, [1, T / hop_length: Spectral skewness of the magnitude spectrum (third moment).
"""
magnitude_spectrum = self.magnitude_spectrum(n_fft_seconds, hop_length_seconds)
return spectrum.get_spectral_skewness(magnitude_spectrum, self.sample_rate)
def spectral_kurtosis(self, n_fft_seconds=0.04, hop_length_seconds=0.01):
"""Compute spectral kurtosis from magnitude spectrum.
Args:
n_fft_seconds (float): Length of the FFT window in seconds.
hop_length_seconds (float): How much the window shifts for every timestep,
in seconds.
Returns:
np.array, [1, T / hop_length]: Spectral kurtosis of the magnitude spectrum (fourth moment).
"""
magnitude_spectrum = self.magnitude_spectrum(n_fft_seconds, hop_length_seconds)
return spectrum.get_spectral_kurtosis(magnitude_spectrum, self.sample_rate)
def spectral_flatness(self, n_fft_seconds=0.04, hop_length_seconds=0.01):
"""Given an FFT window size and a hop length, uses the librosa feature package to compute the spectral
flatness of self.waveform. This component is a measure to quantify how "noise-like" a sound is. The closer
to 1, the closer the sound is to white noise.
Args:
n_fft_seconds (float): Length of the FFT window in seconds.
hop_length_seconds (float): How much the window shifts for every timestep,
in seconds.
Returns:
np.array, [1, T/hop_length]: Spectral flatness vector computed over windows.
"""
n_fft = numseconds_to_numsamples(n_fft_seconds, self.sample_rate)
hop_length = numseconds_to_numsamples(hop_length_seconds, self.sample_rate)
return librosa.feature.spectral_flatness(
self.waveform, n_fft=n_fft, hop_length=hop_length
)
def spectral_rolloff(self, roll_percent=0.85, n_fft_seconds=0.04, hop_length_seconds=0.01):
"""Given an FFT window size and a hop length, uses the librosa component package to compute the spectral
roll-off of self.waveform. It is the point below which most energy of a signal is contained and is
useful in distinguishing sounds with different energy distributions.
Args:
roll_percent (float): The roll-off percentage:
https://essentia.upf.edu/reference/streaming_RollOff.html
n_fft_seconds (float): Length of the FFT window in seconds.
hop_length_seconds (float): How much the window shifts for every timestep,
in seconds.
Returns:
np.array, [1, T/hop_length]: Spectral rolloff vector computed over windows.
"""
n_fft = numseconds_to_numsamples(n_fft_seconds, self.sample_rate)
hop_length = numseconds_to_numsamples(hop_length_seconds, self.sample_rate)
return librosa.feature.spectral_rolloff(
self.waveform, sr=self.sample_rate, n_fft=n_fft, hop_length=hop_length,
roll_percent=roll_percent,
)
def loudness(self):
"""Compute the loudness of self.waveform using the pyloudnorm package.
See https://github.com/csteinmetz1/pyloudnorm for more details on potential
arguments to the functions below.
Returns:
float: The loudness of self.waveform
"""
return get_loudness(self.waveform, self.sample_rate)
def loudness_slidingwindow(self, frame_length_seconds=1, hop_length_seconds=0.25):
"""Compute the loudness of self.waveform over time. See self.loudness for
more details.
Args:
frame_length_seconds (float): Length of the sliding window in seconds.
hop_length_seconds (float): How much the sliding window moves by
Returns:
[1, T / hop_length]: The loudness on frames of self.waveform
"""
try:
return get_loudness_slidingwindow(
self.waveform, self.sample_rate, frame_length_seconds, hop_length_seconds
)
except ValueError:
raise ValueError(
"Frames for loudness computation are too short. Consider decreasing the frame length."
)
def shannon_entropy(self):
"""Compute the Shannon entropy of self.waveform,
as per https://ijssst.info/Vol-16/No-4/data/8258a127.pdf
Returns:
float: Shannon entropy of the waveform.
"""
return get_shannon_entropy(self.waveform)
def shannon_entropy_slidingwindow(self, frame_length_seconds=0.04, hop_length_seconds=0.01):
"""Compute the Shannon entropy of subblocks of a waveform into a newly created time series,
as per https://ijssst.info/Vol-16/No-4/data/8258a127.pdf
Args:
frame_length_seconds (float): Length of the sliding window, in seconds.
hop_length_seconds (float): How much the window shifts for every timestep,
in seconds.
Returns:
np.array, [1, T / hop_length]: Shannon entropy for each frame
"""
return get_shannon_entropy_slidingwindow(
self.waveform, self.sample_rate, frame_length_seconds, hop_length_seconds
)
def zerocrossing(self):
"""Compute the zero crossing rate on self.waveform and return it as per
https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0162128&type=printable
Note: can also compute zero crossing rate as a time series -- see librosa.feature.zero_crossing_rate,
and self.get_zcr_sequence.
Returns:
dictionary: Keys "num_zerocrossings" and "rate" mapping to: zerocrossing["num_zerocrossings"]:
number of zero crossings in self.waveform zerocrossing["rate"]: number of zero crossings
divided by number of samples.
"""
num_zerocrossings = librosa.core.zero_crossings(self.waveform).sum()
rate = num_zerocrossings / self.waveform.shape[0]
return {"num_zerocrossings": num_zerocrossings, "zerocrossing_rate": rate}
def zerocrossing_slidingwindow(self, frame_length_seconds=0.04, hop_length_seconds=0.01):
"""Compute the zero crossing rate sequence on self.waveform and return it. This is now a sequence where every entry is
computed on frame_length samples. There is a sliding window of length hop_length.
Args:
frame_length_seconds (float): Length of the sliding window, in seconds.
hop_length_seconds (float): How much the window shifts for every timestep,
in seconds.
Returns:
np.array, [1, T / hop_length]: Fraction of zero crossings for each frame.
"""
frame_length = numseconds_to_numsamples(frame_length_seconds, self.sample_rate)
hop_length = numseconds_to_numsamples(hop_length_seconds, self.sample_rate)
return librosa.feature.zero_crossing_rate(
self.waveform, frame_length=frame_length, hop_length=hop_length
)
def rms(self, frame_length_seconds=0.04, hop_length_seconds=0.01):
"""Get the root mean square value for each frame, with a specific frame length and hop length. This used
to be called RMSE, or root mean square energy in the jargon?
Args:
frame_length_seconds (float): Length of the sliding window, in seconds.
hop_length_seconds (float): How much the window shifts for every timestep,
in seconds.
Returns:
np.array, [1, T / hop_length]: RMS value for each frame.
"""
frame_length = numseconds_to_numsamples(frame_length_seconds, self.sample_rate)
hop_length = numseconds_to_numsamples(hop_length_seconds, self.sample_rate)
return librosa.feature.rms(self.waveform, frame_length=frame_length, hop_length=hop_length)
def intensity(self, frame_length_seconds=0.04, hop_length_seconds=0.01):
"""Get a value proportional to the intensity for each frame, with a specific frame length and hop length.
Note that the intensity is proportional to the RMS amplitude squared.
Args:
frame_length_seconds (float): Length of the sliding window, in seconds.
hop_length_seconds (float): How much the window shifts for every timestep,
in seconds.
Returns:
np.array, [1, T / hop_length]: Proportional intensity value for each frame.
"""
return self.rms(
frame_length_seconds=frame_length_seconds,
hop_length_seconds=hop_length_seconds
) ** 2
def crest_factor(self, frame_length_seconds=0.04, hop_length_seconds=0.01):
"""Get the crest factor of this waveform, on sliding windows. This value measures the local intensity
of peaks in a waveform. Implemented as per: https://en.wikipedia.org/wiki/Crest_factor
Args:
frame_length_seconds (float): Length of the sliding window, in seconds.
hop_length_seconds (float): How much the window shifts for every timestep,
in seconds.
Returns:
np.array, [1, T / hop_length]: Crest factor for each frame.
"""
rms_array = self.rms(
frame_length_seconds=frame_length_seconds, hop_length_seconds=hop_length_seconds
)
return get_crest_factor(
self.waveform, self.sample_rate, rms_array, frame_length_seconds=frame_length_seconds,
hop_length_seconds=hop_length_seconds,
)
def f0_contour(self, hop_length_seconds=0.01, method='swipe', f0_min=60, f0_max=300):
"""Compute the F0 contour using PYSPTK: https://github.com/r9y9/pysptk/.
Args:
hop_length_seconds (float): Hop size argument in pysptk. Corresponds to hopsize
in the window sliding of the computation of f0. This is in seconds and gets
converted.
method (str): One of 'swipe' or 'rapt'. Define which method to use for f0
calculation. See https://github.com/r9y9/pysptk
f0_min (float): minimum acceptable f0.
f0_max (float): maximum acceptable f0.
Returns:
np.array, [1, t1]: F0 contour of self.waveform. Contains unvoiced
frames.
"""
return get_f0(
self.waveform, self.sample_rate, hop_length_seconds=hop_length_seconds, method=method,
f0_min=f0_min, f0_max=f0_max,
)["contour"]
def f0_statistics(self, hop_length_seconds=0.01, method='swipe'):
"""Compute the F0 mean and standard deviation of self.waveform. Note that we cannot
simply rely on using statistics applied to the f0_contour since we do not want to
include the zeros in the mean and standard deviation calculations.
Args:
hop_length_seconds (float): Hop size argument in pysptk. Corresponds to hopsize
in the window sliding of the computation of f0. This is in seconds and gets
converted.
method (str): One of 'swipe' or 'rapt'. Define which method to use for f0
calculation. See https://github.com/r9y9/pysptk
Returns:
dict: Dictionary mapping: "mean": f0 mean of self.waveform.
"std": f0 standard deviation of self.waveform.
"""
f0_dict = get_f0(
self.waveform, self.sample_rate, hop_length_seconds=hop_length_seconds, method=method
)
f0_mean, f0_std = f0_dict["mean"], f0_dict["std"]
return {'f0_mean': f0_mean, 'f0_std': f0_std}
def ppe(self):
"""Compute pitch period entropy. This is an adaptation of the following Matlab code:
https://github.com/Mak-Sim/Troparion/blob/5126f434b96e0c1a4a41fa99dd9148f3c959cfac/Perturbation_analysis/pitch_period_entropy.m
Note that computing the PPE relies on the existence of voiced portions in the F0 trajectory.
Returns:
float: The pitch period entropy, as per http://www.maxlittle.net/students/thesis_tsanas.pdf
"""
f0_dict = get_f0(self.waveform, self.sample_rate)
if not np.isnan(f0_dict["mean"]):
f_min = f0_dict["mean"] / np.sqrt(2)
else:
raise ValueError("F0 mean is NaN. Check that the waveform has voiced portions.")
if len(f0_dict["values"]) > 0:
rat_f0 = f0_dict["values"] / f_min
else:
raise ValueError("F0 does not contain any voiced portions")
return get_ppe(rat_f0)
def jitters(self, p_floor=0.0001, p_ceil=0.02, max_p_factor=1.3):
"""Compute the jitters mathematically, according to certain conditions
given by p_floor, p_ceil and max_p_factor. See jitters.py for more details.
Args:
p_floor (float): Minimum acceptable period.
p_ceil (float): Maximum acceptable period.
max_p_factor (float): value to use for the period factor principle
Returns:
dict: dictionary mapping strings to floats, with keys "localJitter",
"localabsoluteJitter", "rapJitter", "ppq5Jitter", "ddpJitter"
"""
jitters_dict = jitters.get_jitters(
self.f0_contour()[0], p_floor=p_floor,
p_ceil=p_ceil, max_p_factor=max_p_factor,
)
return jitters_dict
def shimmers(self, max_a_factor=1.6, p_floor=0.0001, p_ceil=0.02, max_p_factor=1.3):
"""Compute the shimmers mathematically, according to certain conditions
given by max_a_factor, p_floor, p_ceil and max_p_factor.
See shimmers.py for more details.
Args:
max_a_factor (float): Value to use for amplitude factor principle
p_floor (float): Minimum acceptable period.
p_ceil (float): Maximum acceptable period.
max_p_factor (float): value to use for the period factor principle
Returns:
dict: Dictionary mapping strings to floats, with keys "localShimmer",
"localdbShimmer", "apq3Shimmer", "apq5Shimmer", "apq11Shimmer"
"""
shimmers_dict = shimmers.get_shimmers(
self.waveform, self.sample_rate, self.f0_contour()[0], max_a_factor=max_a_factor,
p_floor=p_floor, p_ceil=p_ceil, max_p_factor=max_p_factor,
)
return shimmers_dict
def hnr(self):
"""See https://www.ncbi.nlm.nih.gov/pubmed/12512635 for more thorough description
of why HNR is important in the scope of healthcare.
Returns:
float: The harmonics to noise ratio computed on self.waveform.
"""
return hnr.get_harmonics_to_noise_ratio(self.waveform, self.sample_rate)
def dfa(self, window_lengths=[64, 128, 256, 512, 1024, 2048, 4096]):
"""See Tsanas et al, 2011:
Novel speech signal processing algorithms for high-accuracy classification of Parkinson‟s disease
Detrended Fluctuation Analysis
Args:
window_lengths (list of int > 0): List of L to use in DFA computation.
See dfa.py for more details.
Returns:
float: The detrended fluctuation analysis alpha value.
"""
return dfa.get_dfa(self.waveform, window_lengths)
def lpc(self, order=4, return_np_array=False):
"""This uses the librosa backend to get the Linear Prediction Coefficients via Burg's
method. See librosa.core.lpc for more details.
Args:
order (int > 0): Order of the linear filter
return_np_array (bool): If False, returns a dictionary. Otherwise a
numpy array.
Returns:
dict or np.array, [order + 1, ]: Dictionary mapping 'LPC_{i}' to the i'th lpc coefficient,
for i = 0...order. Or: LP prediction error coefficients (np array case)
"""
lpcs = librosa.core.lpc(self.waveform, order=order)
if return_np_array:
return lpcs
return {f'LPC_{i}': lpc for i, lpc in enumerate(lpcs)}
def lsf(self, order=4, return_np_array=False):
"""Compute the LPC coefficients, then convert them to LSP frequencies. The conversion is
done using https://github.com/cokelaer/spectrum/blob/master/src/spectrum/linear_prediction.py
Args:
order (int > 0): Order of the linear filter for LPC calculation
return_np_array (bool): If False, returns a dictionary. Otherwise a
numpy array.
Returns:
dict or np.array, [order, ]: Dictionary mapping 'LPC_{i}' to the
i'th lpc coefficient, for i = 0...order. Or LSP frequencies (np array case).
"""
lpc_poly = self.lpc(order, return_np_array=True)
lsfs = np.array(lpc_to_lsf(lpc_poly))
if return_np_array:
return lsfs
return {f'LSF_{i}': lsf for i, lsf in enumerate(lsfs)}
def formants(self):
"""Estimate the first four formant frequencies using LPC (see formants.py)
Returns:
dict: Dictionary mapping {'f1', 'f2', 'f3', 'f4'} to
corresponding {first, second, third, fourth} formant frequency.
"""
formants_dict = formants.get_formants(self.waveform, self.sample_rate)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | true |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/surfboard/surfboard/statistics.py | features/audio_features/helpers/surfboard/surfboard/statistics.py | """This file contains the class which computes statistics from numpy arrays to turn components into features."""
from scipy.stats import (
kurtosis,
skew,
)
import numpy as np
class Barrel:
"""This class is used to instantiate components computed in the surfboard package.
It helps us compute statistics on these components.
"""
def __init__(self, component):
"""Instantiate a barrel with a component. Note that we require the component to be
an np array. The first dimension represents the number of output features.
and the second dimension represents time.
Args:
component (np.array, [n_feats, T]):
"""
assert isinstance(component, np.ndarray), 'Barrels are instantiated with np arrays.'
assert len(component.shape) == 2, f'Barrels must have shape [n_feats, T]. component is {component.shape}'
self.type = type(component)
self.component = component
def __call__(self):
return self.component
def compute_statistics(self, statistic_list):
"""Compute statistics on self.component using a list of strings which identify
which statistics to compute.
Args:
statistic_list (list of str): list of strings representing Barrel methods to
be called.
Returns:
dict: Dictionary mapping str to float.
"""
statistic_outputs = {}
for statistic in statistic_list:
method_to_call = getattr(self, statistic)
result = method_to_call()
# Case of only one component.
if len(result) == 1:
statistic_outputs[statistic] = float(result)
else:
for i, value in enumerate(result):
statistic_outputs[f"{statistic}_{i + 1}"] = value
return statistic_outputs
def get_first_derivative(self):
"""Compute the "first derivative" of self.component.
Remember that self.component is of the shape [n_feats, T].
Returns:
np.array, [n_feats, T - 1]: First empirical derivative.
"""
delta = self.component[:, 1:] - self.component[:, :-1]
return delta
def get_second_derivative(self):
"""Compute the "second derivative" of self.component.
Remember that self.component is of the shape [n_feats, T].
Returns:
np.array, [n_feats, T - 2]: second empirical derivative.
"""
delta = self.get_first_derivative()
delta2 = delta[:, 1:] - delta[:, :-1]
return delta2
def max(self):
"""Compute the max of self.component on the last dimensions.
Returns:
np.array, [n_feats, ]: The maximum of each individual dimension
in self.component
"""
return np.max(self.component, -1)
def min(self):
"""Compute the min of self.component on the last dimension.
Returns:
np.array, [n_feats, ]: The minimum of each individual dimension
in self.component
"""
return np.min(self.component, -1)
def mean(self):
"""Compute the mean of self.component on the last dimension (time).
Returns:
np.array, [n_feats, ]: The mean of each individual dimension
in self.component
"""
return np.mean(self.component, -1)
def first_derivative_mean(self):
"""Compute the mean of the first empirical derivative (delta coefficient)
on the last dimension (time).
Returns:
np.array, [n_feats, ]: The mean of the first delta coefficient
of each individual dimension in self.component
"""
delta = self.get_first_derivative()
return np.mean(delta, -1)
def second_derivative_mean(self):
"""Compute the mean of the second empirical derivative (2nd delta coefficient)
on the last dimension (time).
Returns:
np.array, [n_feats, ]: The mean of the second delta coefficient
of each individual dimension in self.component
"""
delta2 = self.get_second_derivative()
return np.mean(delta2, -1)
def std(self):
"""Compute the standard deviation of self.component on the last dimension
(time).
Returns:
np.array, [n_feats, ]: The standard deviation of each individual
dimension in self.component
"""
return np.std(self.component, -1)
def first_derivative_std(self):
"""Compute the std of the first empirical derivative (delta coefficient)
on the last dimension (time).
Returns:
np.array, [n_feats, ]: The std of the first delta coefficient
of each individual dimension in self.component
"""
delta = self.get_first_derivative()
return np.std(delta, -1)
def second_derivative_std(self):
"""Compute the std of the second empirical derivative (2nd delta coefficient)
on the last dimension (time).
Returns:
np.array, [n_feats, ]: The std of the second delta coefficient
of each individual dimension in self.component
"""
delta2 = self.get_second_derivative()
return np.std(delta2, -1)
def skewness(self):
"""Compute the skewness of self.component on the last dimension (time)
Returns:
np.array, [n_feats, ]: The skewness of each individual
dimension in self.component
"""
return skew(self.component, -1)
def first_derivative_skewness(self):
"""Compute the skewness of the first empirical derivative (delta coefficient)
on the last dimension (time).
Returns:
np.array, [n_feats, ]: The skewness of the first delta coefficient
of each individual dimension in self.component
"""
delta = self.get_first_derivative()
return skew(delta, -1)
def second_derivative_skewness(self):
"""Compute the skewness of the second empirical derivative (2nd delta coefficient)
on the last dimension (time).
Returns:
np.array, [n_feats, ]: The skewness of the second delta coefficient
of each individual dimension in self.component
"""
delta2 = self.get_second_derivative()
return skew(delta2, -1)
def kurtosis(self):
"""Compute the kurtosis of self.component on the last dimension (time)
Returns:
np.array, [n_feats, ]: The kurtosis of each individual
dimension in self.component
"""
return kurtosis(self.component, -1)
def first_derivative_kurtosis(self):
"""Compute the kurtosis of the first empirical derivative (delta coefficient)
on the last dimension (time).
Returns:
np.array, [n_feats, ]: The kurtosis of the first delta coefficient
of each individual dimension in self.component
"""
delta = self.get_first_derivative()
return kurtosis(delta, -1)
def second_derivative_kurtosis(self):
"""Compute the kurtosis of the second empirical derivative (2nd delta coefficient)
on the last dimension (time).
Returns:
np.array, [n_feats, ]: The kurtosis of the second delta coefficient
of each individual dimension in self.component
"""
delta2 = self.get_second_derivative()
return kurtosis(delta2, -1)
def first_quartile(self):
"""Compute the first quartile on the last dimension (time).
Returns:
np.array, [n_feats, ]: The first quartile of each individual dimension
in self.component
"""
return np.quantile(self.component, 0.25, axis=-1)
def second_quartile(self):
"""Compute the second quartile on the last dimension (time). Same
as the median.
Returns:
np.array, [n_feats, ]: The second quartile of each individual
dimension in self.component (same as the median)
"""
return np.quantile(self.component, 0.5, axis=-1)
def third_quartile(self):
"""Compute the third quartile on the last dimension (time)
Returns:
np.array, [n_feats, ]: The third quartile of each individual
dimension in self.component
"""
return np.quantile(self.component, 0.75, axis=-1)
def q2_q1_range(self):
"""Compute second and first quartiles. Return q2 - q1
Returns:
np.array, [n_feats, ]: The q2 - q1 range of each individual
dimension in self.component
"""
return self.second_quartile() - self.first_quartile()
def q3_q2_range(self):
"""Compute third and second quartiles. Return q3 - q2
Returns:
np.array, [n_feats, ]: The q3 - q2 range of each individual
dimension in self.component
"""
return self.third_quartile() - self.second_quartile()
def q3_q1_range(self):
"""Compute third and first quartiles. Return q3 - q1
Returns:
np.array, [n_feats, ]: The q3 - q1 range of each individual
dimension in self.component
"""
return self.third_quartile() - self.first_quartile()
def percentile_1(self):
"""Compute the 1% percentile.
Returns:
np.array, [n_feats, ]: The 1st percentile of each individual
dimension in self.component
"""
return np.quantile(self.component, 0.01, axis=-1)
def percentile_99(self):
"""Compute the 99% percentile.
Returns:
np.array, [n_feats, ]: The 99th percentile of each individual
dimension in self.component
"""
return np.quantile(self.component, 0.99, axis=-1)
def percentile_1_99_range(self):
"""Compute 99% percentile and 1% percentile. Return the range.
Returns:
np.array, [n_feats, ]: The 99th - 1st percentile range of each
individual dimension in self.component
"""
return self.percentile_99() - self.percentile_1()
def linear_regression_offset(self):
"""Consider each row of self.component as a time series over which we fit a line.
Return the offset of that fitted line.
Returns:
np.array, [n_feats, ]: The linear regression offset of each
individual dimension in self.component
"""
_, offset = np.polyfit(
np.arange(self.component.shape[-1]), self.component.T, deg=1
)
return offset
def linear_regression_slope(self):
"""Consider each row of self.component as a time series over which we fit a line.
Return the slope of that fitted line.
Returns:
np.array, [n_feats, ]: The linear regression slope of each
individual dimension in self.component
"""
slope, _ = np.polyfit(
np.arange(self.component.shape[-1]), self.component.T, deg=1
)
return slope
def linear_regression_mse(self):
"""Fit a line to the data. Compute the MSE.
Returns:
np.array, [n_feats, ]: The linear regression MSE of each
individual dimension in self.component
"""
slope, offset = np.polyfit(
np.arange(self.component.shape[-1]), self.component.T, deg=1
)
linear_approximation = offset[:, np.newaxis] + slope[:, np.newaxis] \
* np.array([np.arange(self.component.shape[-1]) for _ in range(self.component.shape[0])])
mse = ((linear_approximation - self.component) ** 2).mean(-1)
return mse
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/surfboard/surfboard/utils.py | features/audio_features/helpers/surfboard/surfboard/utils.py | #!/usr/bin/env python
"""This file contains a variety of helper functions for the surfboard package."""
import os
import numpy as np
from scipy.signal import (
deconvolve,
)
def metric_slidingwindow(frame_length, hop_length, truncate_end=False):
"""We use this decorator to decorate functions which take a sequence
as an input and return a metric (float). For example the sum of a sequence.
This decorator will enable us to quickly compute the metrics over a sliding
window. Note the existence of the implicit decorator below which allows us
to have arguments to the decorator.
Args:
frame_length (int): The length of the sliding window
hop_length (int): How much to slide the window every time
truncate_end (bool): whether to drop frames which are shorter than
frame_length (the end frames, typically)
Returns:
function: The function which computes the metric over sliding
windows.
"""
def implicit_decorator(func):
def wrapper(*args):
sequence = args[0]
output_list = []
for i in range(int(np.ceil(sequence.shape[0] / hop_length))):
subblock = sequence[i * hop_length: i * hop_length + frame_length]
if len(subblock) < frame_length and truncate_end:
continue
# Change the first argument (the sequence). Keep the rest of the arguments.
new_args = (subblock, *args[1:])
value = func(*new_args)
output_list.append(value)
out = np.expand_dims(np.array(output_list), 0)
return out
return wrapper
return implicit_decorator
def numseconds_to_numsamples(numseconds, sample_rate):
"""Convert a number of seconds a sample rate to the number of samples for n_fft,
frame_length and hop_length computation. Find the closest power of 2 for efficient
computations.
Args:
numseconds (float): number of seconds that we want to convert
sample_rate (int): how many samples per second
Return:
int: closest power of 2 to int(numseconds * sample_rate)
"""
candidate = int(numseconds * sample_rate)
log2 = np.log2(candidate)
out_value = int(2 ** np.round(log2))
assert out_value != 0, "The inputs given gave an output value of 0. This is not acceptable."
return out_value
def max_peak_amplitude(signal):
"""Returns the maximum absolute value of a signal.
Args:
np.array [T, ]: a waveform
Returns:
float: the maximum amplitude of this waveform, in absolute value
"""
return np.max(np.abs(signal))
def peak_amplitude_slidingwindow(signal, sample_rate, frame_length_seconds=0.04, hop_length_seconds=0.01):
"""Apply the metric_slidingwindow decorator to the the peak amplitude computation defined above,
effectively computing frequency from fft over sliding windows.
Args:
signal (np.array [T,]): waveform over which to compute.
sample_rate (int): number of samples per second in the waveform
frame_length_seconds (float): how many seconds in one frame. This
value is defined in seconds instead of number of samples.
hop_length_seconds (float): how many seconds frames shift each step.
This value is defined in seconds instead of number of samples.
Returns:
np.array, [1, T / hop_length]: peak amplitude on each window.
"""
frame_length = numseconds_to_numsamples(frame_length_seconds, sample_rate)
hop_length = numseconds_to_numsamples(hop_length_seconds, sample_rate)
@metric_slidingwindow(frame_length=frame_length, hop_length=hop_length)
def new_max_peak_amplitude(signal):
return max_peak_amplitude(signal)
return new_max_peak_amplitude(signal)
def shifted_sequence(sequence, num_sequences):
"""Given a sequence (say a list) and an integer, returns a zipped iterator
of sequence[:-num_sequences + 1], sequence[1:-num_sequences + 2], etc.
Args:
sequence (list or other iteratable): the sequence over which to iterate
in various orders
num_sequences (int): the number of sequences over which we iterate.
Also the number of elements which come out of the output at each call.
Returns:
iterator: zipped shifted sequences.
"""
return zip(
*(
[list(sequence[i: -num_sequences + 1 + i]) for i in range(
num_sequences - 1)] + [sequence[num_sequences - 1:]]
)
)
def lpc_to_lsf(lpc_polynomial):
"""This code is inspired by the following:
https://uk.mathworks.com/help/dsp/ref/lpctolsflspconversion.html
Args:
lpc_polynomial (list): length n + 1 list of lpc coefficients. Requirements
is that the polynomial is ordered so that lpc_polynomial[0] == 1
Returns:
list: length n list of line spectral frequencies.
"""
lpc_polynomial = np.array(lpc_polynomial)
assert lpc_polynomial[0] == 1, \
'First value in the polynomial must be 1. Considering normalizing.'
assert max(np.abs(np.roots(lpc_polynomial))) <= 1.0, \
'The polynomial must have all roots inside of the unit circle.'
lhs = np.concatenate((lpc_polynomial, np.array([0])))
rhs = lhs[-1::-1]
diff_filter = lhs - rhs
sum_filter = lhs + rhs
poly_1 = deconvolve(diff_filter, [1, 0, -1])[0] if (len(lpc_polynomial) - 1) % 2 else deconvolve(diff_filter, [1, -1])[0]
poly_2 = sum_filter if (len(lpc_polynomial) - 1) % 2 else deconvolve(sum_filter, [1, 1])[0]
roots_poly1 = np.roots(poly_1)
roots_poly2 = np.roots(poly_2)
angles_poly1 = np.angle(roots_poly1[1::2])
angles_poly2 = np.angle(roots_poly2[1::2])
return sorted(
np.concatenate((-angles_poly1, -angles_poly2))
)
def parse_component(component):
"""Parse the component coming from the .yaml file.
Args:
component (str or dict): Can be either a str, or a dictionary.
Comes from the .yaml config file. If it is a string,
simply return, since its the component name without arguments.
Otherwise, parse.
Returns:
tuple: tuple containing:
str: name of the method to be called from sound.Waveform
dict: arguments to be unpacked. None if no arguments to
compute.
"""
if isinstance(component, str):
return component, None
elif isinstance(component, dict):
component_name = list(component.keys())[0]
# component[component_name] is a dictionary of arguments.
arguments = component[component_name]
return component_name, arguments
else:
raise ValueError("Argument to the parse_component function must be str or dict.")
def example_audio_file(which_file):
"""Returns the path to one of sustained_a, sustained_o or sustained_e
included with the Surfboard package.
Args:
which_file (str): One of 'a', 'o' or 'e'
Returns:
str: The path to the chosen file.
"""
assert which_file in ['a', 'o', 'e'], 'Input must be one of: "a", "o", "e"'
return os.path.join(
os.path.dirname(os.path.realpath(__file__)),
f'../example_audio_files/sustained_{which_file}.wav'
)
class YamlFileException(Exception):
def __init__(self, message):
self.message = message
def __str__(self):
return self.message
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/surfboard/surfboard/feature_extraction.py | features/audio_features/helpers/surfboard/surfboard/feature_extraction.py | #!/usr/bin/env python
"""This file contains functions to compute features."""
import numpy as np
import pandas as pd
from tqdm import tqdm
from .sound import Waveform
from .statistics import Barrel
def load_waveforms_from_paths(paths, sample_rate):
"""Loads waveforms from paths using multiprocessing"""
progress_bar = tqdm(paths, desc='Loading waveforms...')
return [Waveform(path=p, sample_rate=sample_rate) for p in progress_bar]
def extract_features_from_paths(paths, components_list, statistics_list=None, sample_rate=44100):
"""Function which loads waveforms, computes the components and statistics and returns them,
without the need to store the waveforms in memory. This is to minimize the memory footprint
when running over multiple files.
Args:
paths (list of str): .wav to compute
components_list (list of str/dict): This is a list of the methods which
should be applied to all the waveform objects in waveforms. If a dict,
this also contains arguments to the sound.Waveform methods.
statistics_list (list of str): This is a list of the methods which
should be applied to all the time-dependent features computed
from the waveforms.
sample_rate (int > 0): sampling rate to load the waveforms
Returns:
pandas DataFrame: pandas dataframe where every row corresponds
to features extracted for one of the waveforms and columns
represent individual features.
"""
output_feats = []
paths = tqdm(paths, desc='Extracting features from paths...')
for path in paths:
wave = Waveform(path=path, sample_rate=sample_rate)
output_feats.append(
extract_features_from_waveform(
components_list, statistics_list, wave
)
)
return pd.DataFrame(output_feats)
def extract_features_from_waveform(components_list, statistics_list, waveform):
"""Given one waveform, a list of components and statistics, extract the
features from the waveform.
Args:
components_list (list of str or dict): This is a list of the methods which
should be applied to all the waveform objects in waveforms. If a dict,
this also contains arguments to the sound.Waveform methods.
statistics_list (list of str): This is a list of the methods which
should be applied to all the "time-dependent" components computed
from the waveforms.
waveform (Waveform): the waveform object to extract components from.
Returns:
dict: Dictionary mapping names to numerical components extracted
for this waveform.
"""
feats_this_waveform = {}
try:
# Compute components with surfboard.
components = waveform.compute_components(components_list)
# Loop over computed components to either prepare for output, or to apply statistics.
for component_name in components:
# Case of a dictionary: unpack dictionary and merge with existing set of components.
if isinstance(components[component_name], dict) and statistics_list is not None:
feats_this_waveform = {
**feats_this_waveform,
**components[component_name]
}
# Case of a float -- simply add that as a single value to the dictionary.
# Or: case of a np array when statistics list is None. In order to be able to obtain
# the numpy array from the pandas DataFrame, we must pass the np array as a list.
elif isinstance(components[component_name], float) or (
isinstance(components[component_name], np.ndarray) and statistics_list is None
):
feats_this_waveform[component_name] = components[component_name]
# Case of a np.array (the component is a time series). Apply Barrel.
elif isinstance(components[component_name], np.ndarray) and statistics_list is not None:
barrel = Barrel(components[component_name])
function_outputs = barrel.compute_statistics(statistics_list)
# Merge dictionaries...
feats_this_waveform = {
**feats_this_waveform,
**{"{}_{}".format(component_name, fun_name): v for fun_name, v in function_outputs.items()}
}
except Exception as extraction_exception:
print(f'Found exception "{extraction_exception}"... Skipping...')
return {}
except:
print('Unknow error. Skipping')
return {}
# Return an empty dict in the case of None.
feats_this_waveform = feats_this_waveform if feats_this_waveform is not None else {}
return feats_this_waveform
def extract_features(waveforms, components_list, statistics_list=None):
"""This is an important function. Given a list of Waveform objects, a list of
Waveform methods in the form of strings and a list of Barrel methods in the
form of strings, compute the time-independent features resulting. This function
does multiprocessing.
Args:
waveforms (list of Waveform): This is a list of waveform objects
components_list (list of str/dict): This is a list of the methods which
should be applied to all the waveform objects in waveforms. If a dict,
this also contains arguments to the sound.Waveform methods.
statistics_list (list of str): This is a list of the methods which
should be applied to all the time-dependent features computed
from the waveforms.
Returns:
pandas DataFrame: pandas dataframe where every row corresponds
to features extracted for one of the waveforms and columns
represent individual features.
"""
output_feats = []
waveforms = tqdm(waveforms, desc='Extracting features...')
for wave in waveforms:
output_feats.append(
extract_features_from_waveform(
components_list, statistics_list, wave
)
)
return pd.DataFrame(output_feats)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/surfboard/surfboard/dfa.py | features/audio_features/helpers/surfboard/surfboard/dfa.py | import numpy as np
def get_deviation_for_dfa(signal, window_length):
"""Given a signal, compute the trend value for one window length, as per
https://link.springer.com/article/10.1186/1475-925X-6-23
In order to get the overall DFA (detrended fluctuation analysis),
compute this for a variety of window lengths, then plot that on a
log-log graph, and get the slope.
Args:
signal (np.array, [T, ]): waveform
window_length (int > 0): L in the paper linked above. Length of windows for trend.
Returns:
float: average rmse for fitting lines on chunks of window lengths on the
cumulative sums of this signal.
"""
rmse = 0
# Step 1, integrate time series (cumulative sum)
Y = np.cumsum(signal)
# Step 2, separate into chunks of length window_length +/- 1.
chunks = np.array_split(Y, np.ceil(Y.shape[0] / window_length))
# Step 3, fit a line for all of these chunks.
for chunk in chunks:
slope, offset = np.polyfit(np.arange(chunk.shape[0]), chunk, 1)
rmse += np.sqrt(
(((offset + slope * np.arange(chunk.shape[0])) - chunk) ** 2).sum()
)
rmse /= len(chunks)
return rmse
def get_dfa(signal, window_lengths):
"""Given a signal, compute the DFA (detrended fluctuation analysis)
as per https://link.springer.com/article/10.1186/1475-925X-6-23
See paper equations (13) to (16) for more information.
"""
fl_list = []
for length in window_lengths:
fl_list.append(get_deviation_for_dfa(signal, length))
log_l = [np.log(length) for length in window_lengths]
log_fl = [np.log(fl) for fl in fl_list]
# Fit a line.
slope, _ = np.polyfit(log_l, log_fl, 1)
return slope
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/surfboard/surfboard/__init__.py | features/audio_features/helpers/surfboard/surfboard/__init__.py | python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false | |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/surfboard/surfboard/formants.py | features/audio_features/helpers/surfboard/surfboard/formants.py | """This code is inspired by the following repository:
https://github.com/manishmalik/Voice-Classification/blob/master/rootavish/formant.py
More importantly, by the following Matlab code:
https://uk.mathworks.com/help/signal/ug/formant-estimation-with-lpc-coefficients.html
The implementation is ours. These values have been validated against the corresponding methods in Praat and agree to a small error margin.
"""
import numpy as np
import math
from scipy.signal import lfilter
from librosa.core import lpc
from .utils import (
metric_slidingwindow,
numseconds_to_numsamples,
)
"""
Estimate formants using LPC.
"""
def estimate_formants_lpc(waveform, sample_rate, num_formants=5):
hamming_win = np.hamming(len(waveform))
# Apply window and high pass filter.
x_win = waveform * hamming_win
x_filt = lfilter([1], [1.0, 0.63], x_win)
# Get LPC. From mathworks link above, the general rule is that the
# order is two times the expected number of formants plus 2. We use
# 5 as a base because we discard the formant f0 and want f1...f4.
lpc_rep = lpc(x_filt, 2 + int(sample_rate / 1000))
# Calculate the frequencies.
roots = [r for r in np.roots(lpc_rep) if np.imag(r) >= 0]
angles = np.arctan2(np.imag(roots), np.real(roots))
return sorted(angles * (sample_rate / (2 * math.pi)))
def get_formants(waveform, sample_rate):
"""Estimate the first four formant frequencies using LPC
Args:
waveform (np.array, [T, ]): waveform over which to compute formants
sample_rate (int): sampling rate of waveform
Returns:
dict: Dictionary mapping {'f1', 'f2', 'f3', 'f4'} to
corresponding {first, second, third, fourth} formant frequency.
"""
myformants = estimate_formants_lpc(waveform, sample_rate)
formants_dict = {
"f1": myformants[2],
"f2": myformants[3],
"f3": myformants[4],
"f4": myformants[5],
}
return formants_dict
def get_unique_formant(waveform, sample_rate, formant):
"""Same as get_formants, but return in a np array.
"""
dict_out = get_formants(waveform, sample_rate)
return dict_out[formant]
def get_formants_slidingwindow(waveform, sample_rate, formant, frame_length_seconds=0.04, hop_length_seconds=0.01):
"""Apply the metric_slidingwindow decorator to the get_formants function above.
We slightly change the get_formants in order to return a [4, T / hop_length] array instead
of dictionaries.
Args:
waveform (np.array [T,]): waveform over which to compute.
sample_rate (int): number of samples per second in the waveform
frame_length_seconds (float): how many seconds in one frame. This
value is defined in seconds instead of number of samples.
hop_length_seconds (float): how many seconds frames shift each step.
This value is defined in seconds instead of number of samples.
Returns:
np.array, [4, T / hop_length]: f1, f2, f3, f4 formants on each window.
"""
frame_length = numseconds_to_numsamples(frame_length_seconds, sample_rate)
hop_length = numseconds_to_numsamples(hop_length_seconds, sample_rate)
@metric_slidingwindow(frame_length=frame_length, hop_length=hop_length)
def new_get_unique_formant(waveform, sample_rate, formant):
return get_unique_formant(waveform, sample_rate, formant)
return new_get_unique_formant(waveform, sample_rate, formant)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/surfboard/surfboard/spectrum.py | features/audio_features/helpers/surfboard/surfboard/spectrum.py | """Spectrum features. The code in this file is inspired by audiocontentanalysis.org
For more details, visit the pyACA package: https://github.com/alexanderlerch/pyACA
"""
import numpy as np
def summed_magnitude_spectrum(magnitude_spectrum, keepdims=True):
summed_magnitude_spectrum = magnitude_spectrum.sum(0, keepdims=keepdims)
summed_magnitude_spectrum[summed_magnitude_spectrum == 0] = 1
return summed_magnitude_spectrum
def get_spectral_centroid(magnitude_spectrum, sample_rate):
"""Given the magnitude spectrum and the sample rate of the
waveform from which it came, compute the spectral centroid.
Args:
magnitude_spectrum (np.array, [n_frequencies, T / hop_length]): the spectrogram
sample_rate (int): The sample rate of the waveform
Returns:
np.array [1, T / hop_length]: the spectral centroid sequence in Hz.
"""
time_dimension = magnitude_spectrum.shape[0]
summed = summed_magnitude_spectrum(magnitude_spectrum)
return np.squeeze(
np.dot(
np.arange(0, time_dimension), magnitude_spectrum
) / summed / (time_dimension - 1) * sample_rate / 2
)[np.newaxis, :]
def get_spectral_slope(magnitude_spectrum, sample_rate):
"""Given the magnitude spectrum and the sample rate of the
waveform from which it came, compute the spectral slope.
Args:
magnitude_spectrum (np.array, [n_frequencies, T / hop_length]): the spectrogram
sample_rate (int): The sample rate of the waveform
Returns:
np.array [1, T / hop_length]: the spectral slope sequence.
"""
time_dimension = magnitude_spectrum.shape[0]
spectrum_mean = magnitude_spectrum.mean(0, keepdims=True)
centralized_spectrum = magnitude_spectrum - spectrum_mean
index = np.arange(0, time_dimension) - time_dimension / 2
slope = np.dot(index, centralized_spectrum) / np.dot(index, index)
return slope[np.newaxis, :]
def get_spectral_flux(magnitude_spectrum, sample_rate):
"""Given the magnitude spectrum and the sample rate of the
waveform from which it came, compute the spectral flux.
Args:
magnitude_spectrum (np.array, [n_frequencies, T / hop_length]): the spectrogram
sample_rate (int): The sample rate of the waveform
Returns:
np.array [1, T / hop_length]: the spectral flux sequence.
"""
first_column = magnitude_spectrum[:, 0][:, np.newaxis]
# Replicate first column to set first delta coeff to 0.
new_magnitude_spectrum = np.concatenate(
(first_column, magnitude_spectrum), axis=-1
)
delta_coefficient = np.diff(new_magnitude_spectrum, 1, axis=1)
flux = np.sqrt(
(delta_coefficient ** 2).sum(0)
) / new_magnitude_spectrum.shape[0]
return flux[np.newaxis, :]
def get_spectral_spread(magnitude_spectrum, sample_rate):
"""Given the magnitude spectrum and the sample rate of the
waveform from which it came, compute the spectral spread.
Args:
magnitude_spectrum (np.array, [n_frequencies, T / hop_length]): the spectrogram
sample_rate (int): The sample rate of the waveform
Returns:
np.array [1, T / hop_length]: the spectral spread (Hz).
"""
summed = summed_magnitude_spectrum(magnitude_spectrum, keepdims=False)
n_frequencies, n_timesteps = magnitude_spectrum.shape
Hz_scaling_factor = 2 * (n_frequencies - 1) / sample_rate
spectral_centroids = get_spectral_centroid(magnitude_spectrum, sample_rate)[0]
scaled_centroids = spectral_centroids * Hz_scaling_factor
index_iter = np.arange(0, n_frequencies)
spectral_spread = np.array([
np.sqrt(
np.dot((index_iter - centroid) ** 2, magnitude_spectrum[:, time_index]) / summed[time_index]
) for time_index, centroid in enumerate(scaled_centroids)
])
# Conversion back to Hz.
return (spectral_spread / Hz_scaling_factor)[np.newaxis, :]
def get_spectral_skewness(magnitude_spectrum, sample_rate):
"""Given the magnitude spectrum and the sample rate of the
waveform from which it came, compute the spectral skewness.
Args:
magnitude_spectrum (np.array, [n_frequencies, T / hop_length]): the spectrogram
sample_rate (int): The sample rate of the waveform
Returns:
np.array [1, T / hop_length]: the spectral skewness.
"""
n_frequencies, n_timesteps = magnitude_spectrum.shape
Hz_scaling_factor = 2 * (n_frequencies - 1) / sample_rate
summed = summed_magnitude_spectrum(magnitude_spectrum, keepdims=False)
spectral_centroids = get_spectral_centroid(magnitude_spectrum, sample_rate)[0]
spectral_spreads = get_spectral_spread(magnitude_spectrum, sample_rate)[0]
# Replace zero spreads by 1.
spectral_spreads[spectral_spreads == 0] = 1
index_iter = np.arange(0, n_frequencies) / Hz_scaling_factor
spectral_skewness = [
np.dot((index_iter - centroid) ** 3, magnitude_spectrum[:, time_index]) / (
n_frequencies * spread ** 3 * summed[time_index]
) for time_index, (centroid, spread) in enumerate(zip(spectral_centroids, spectral_spreads))
]
return np.array(spectral_skewness)[np.newaxis, :]
def get_spectral_kurtosis(magnitude_spectrum, sample_rate):
"""Given the magnitude spectrum and the sample rate of the
waveform from which it came, compute the spectral skewness.
Args:
magnitude_spectrum (np.array, [n_frequencies, T / hop_length]): the spectrogram
sample_rate (int): The sample rate of the waveform
Returns:
np.array [1, T / hop_length]: the spectral kurtosis.
"""
n_frequencies, n_timesteps = magnitude_spectrum.shape
Hz_scaling_factor = 2 * (n_frequencies - 1) / sample_rate
summed = summed_magnitude_spectrum(magnitude_spectrum, keepdims=False)
spectral_centroids = get_spectral_centroid(magnitude_spectrum, sample_rate)[0]
spectral_spreads = get_spectral_spread(magnitude_spectrum, sample_rate)[0]
# Replace zero spreads by 1.
spectral_spreads[spectral_spreads == 0] = 1
index_iter = np.arange(0, n_frequencies) / Hz_scaling_factor
spectral_skewness = [
np.dot((index_iter - centroid) ** 4, magnitude_spectrum[:, time_index]) / (
n_frequencies * spread ** 4 * summed[time_index]
) for time_index, (centroid, spread) in enumerate(zip(spectral_centroids, spectral_spreads))
]
return (np.array(spectral_skewness) - 3)[np.newaxis, :]
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/surfboard/surfboard/misc_components.py | features/audio_features/helpers/surfboard/surfboard/misc_components.py | #!/usr/bin/env python
"""This file contains components which do not fall under one category."""
import librosa
import pyloudnorm as pyln
import numpy as np
from scipy.stats import kurtosis
from scipy.signal import lfilter
from pysptk import swipe, rapt
from .utils import (
metric_slidingwindow,
numseconds_to_numsamples,
)
def get_crest_factor(waveform, sample_rate, rms, frame_length_seconds=0.04, hop_length_seconds=0.01):
"""Get the crest factor of this waveform, on sliding windows. This value measures the local intensity
of peaks in a waveform. Implemented as per: https://en.wikipedia.org/wiki/Crest_factor
Args:
waveform (np.array, [T, ]): waveform over which to compute crest factor
sample_rate (int > 0): number of samples per second in waveform
rms (np.array, [1, T / hop_length]): energy values.
frame_length_seconds (float): length of the sliding window, in seconds.
hop_length_seconds (float): how much the window shifts for every timestep,
in seconds.
Returns:
np.array, [1, T / hop_length]: Crest factor for each frame.
"""
frame_length = numseconds_to_numsamples(frame_length_seconds, sample_rate)
hop_length = numseconds_to_numsamples(hop_length_seconds, sample_rate)
crest_factor_list = []
# Iterate over values in rms. Each of these correspond to a window of the waveform.
for i, rms_value in enumerate(rms[0]):
waveform_window = waveform[i * hop_length: i * hop_length + frame_length]
maxvalue = np.abs(waveform_window.max())
crest_factor_list.append(maxvalue / rms_value)
crest_factor = np.array(crest_factor_list)[np.newaxis, :]
return crest_factor
def get_f0(
waveform, sample_rate, hop_length_seconds=0.01, method='swipe', f0_min=60, f0_max=300
):
"""Compute the F0 contour using PYSPTK: https://github.com/r9y9/pysptk/.
Args:
waveform (np.array, [T, ]): waveform over which to compute f0
sample_rate (int > 0): number of samples per second in waveform
hop_length (int): hop size argument in pysptk.swipe. Corresponds to hopsize
in the window sliding of the computation of f0.
method (str): is one of 'swipe' or 'rapt'. Define which method to use for f0
calculation. See https://github.com/r9y9/pysptk
Returns:
dict: Dictionary containing keys:
"contour" (np.array, [1, t1]): f0 contour of waveform. Contains unvoiced
frames.
"values" (np.array, [1, t2]): nonzero f0 values waveform. Note that this
discards all unvoiced frames. Use to compute mean, std, and other statistics.
"mean" (float): mean of the f0 contour.
"std" (float): standard deviation of the f0 contour.
"""
assert method in ('swipe', 'rapt'), "The method argument should be one of 'swipe' or 'rapt'."
hop_length = numseconds_to_numsamples(hop_length_seconds, sample_rate)
if method == 'swipe':
f0_contour = swipe(
waveform.astype(np.float64),
fs=sample_rate,
hopsize=hop_length,
min=f0_min,
max=f0_max,
otype="f0",
)[np.newaxis, :]
elif method == 'rapt':
# For this estimation, waveform needs to be in the int PCM format.
f0_contour = rapt(
np.round(waveform * 32767).astype(np.float32),
fs=sample_rate,
hopsize=hop_length,
min=f0_min,
max=f0_max,
otype="f0",
)[np.newaxis, :]
# Remove unvoiced frames.
f0_values = f0_contour[:, np.where(f0_contour[0, :] != 0)][0]
f0_mean = np.mean(f0_values[0])
f0_std = np.std(f0_values[0])
return {
"contour": f0_contour,
"values": f0_values,
"mean": f0_mean,
"std": f0_std,
}
def get_ppe(rat_f0):
"""Compute pitch period entropy. Here is a reference MATLAB implementation:
https://github.com/Mak-Sim/Troparion/blob/5126f434b96e0c1a4a41fa99dd9148f3c959cfac/Perturbation_analysis/pitch_period_entropy.m
Note that computing the PPE relies on the existence of voiced portions in the F0 trajectory.
Args:
rat_f0 (np.array): f0 voiced frames divided by f_min
Returns:
float: The pitch period entropy, as per http://www.maxlittle.net/students/thesis_tsanas.pdf
"""
semitone_f0 = np.log(rat_f0) / np.log(2 ** (1 / 12))
# Whitening
coefficients = librosa.core.lpc(semitone_f0[0], 2)
semi_f0 = lfilter(coefficients, [1], semitone_f0)[0]
# Filter to the [-1.5, 1.5] range.
semi_f0 = semi_f0[np.where(semi_f0 > -1.5)]
semi_f0 = semi_f0[np.where(semi_f0 < 1.5)]
distrib = np.histogram(semi_f0, bins=30, density=True)[0]
# Remove empty bins as these break the entropy calculation.
distrib = distrib[distrib != 0]
# Discrete probability distribution
ppe = np.sum(-distrib * (np.log(distrib) / np.log(2)))
return ppe
def get_shannon_entropy(sequence):
"""Given a sequence, compute the Shannon Entropy, defined in
https://ijssst.info/Vol-16/No-4/data/8258a127.pdf
Args:
sequence (np.array, [t, ]): sequence over which to compute.
Returns:
float: shannon entropy.
"""
# Remove zero entries in sequence as entropy is undefined.
sequence = sequence[sequence != 0]
return float((-(sequence ** 2) * np.log(sequence ** 2)).sum())
def get_shannon_entropy_slidingwindow(waveform, sample_rate, frame_length_seconds=0.04, hop_length_seconds=0.01):
"""Same function as above, but decorated by the metric_slidingwindow decorator.
See above for documentation on this.
Args:
waveform (np.array, [T, ]): waveform over which to compute the shannon entropy array
sample_rate (int > 0): number of samples per second in waveform
frame_length_seconds (float): length of the sliding window, in seconds.
hop_length_seconds (float): how much the window shifts for every timestep,
in seconds.
Returns:
np.array, [1, T/hop_length]: Shannon entropy over windows.
"""
frame_length = numseconds_to_numsamples(frame_length_seconds, sample_rate)
hop_length = numseconds_to_numsamples(hop_length_seconds, sample_rate)
@metric_slidingwindow(frame_length=frame_length, hop_length=hop_length)
def new_shannon_entropy(waveform):
return get_shannon_entropy(waveform)
return new_shannon_entropy(waveform)
def get_loudness(waveform, sample_rate):
"""Compute the loudness of waveform using the pyloudnorm package.
See https://github.com/csteinmetz1/pyloudnorm for more details on potential
arguments to the functions below.
Args:
waveform (np.array, [T, ]): waveform to compute loudness on
sample_rate (int > 0): sampling rate of waveform
Returns:
float: the loudness of self.waveform
"""
meter = pyln.Meter(sample_rate)
return meter.integrated_loudness(waveform)
def get_loudness_slidingwindow(waveform, sample_rate, frame_length_seconds=0.04, hop_length_seconds=0.01):
"""Same function as get_loudness, but decorated by the metric_slidingwindow decorator.
See get_loudness documentation for this.
Args:
waveform (np.array, [T, ]): waveform over which to compute the kurtosis array
sample_rate (int > 0): number of samples per second in waveform
frame_length_seconds (float): length of the sliding window, in seconds.
hop_length_seconds (float): how much the window shifts for every timestep,
in seconds.
Returns:
np.array, [1, T / hop_length]: Frame level loudness
"""
frame_length = numseconds_to_numsamples(frame_length_seconds, sample_rate)
hop_length = numseconds_to_numsamples(hop_length_seconds, sample_rate)
# We use truncate end = True here because the loudness calculation fails on short sequences.
@metric_slidingwindow(frame_length=frame_length, hop_length=hop_length, truncate_end=True)
def new_loudness(waveform):
return get_loudness(waveform, sample_rate)
return new_loudness(waveform)
def get_kurtosis_slidingwindow(waveform, sample_rate, frame_length_seconds=0.04, hop_length_seconds=0.01):
"""Same function as above, but decorated by the metric_slidingwindow decorator.
See above documentation for this.
Args:
waveform (np.array, [T, ]): waveform over which to compute the kurtosis array
sample_rate (int > 0): number of samples per second in waveform
frame_length_seconds (float): length of the sliding window, in seconds.
hop_length_seconds (float): how much the window shifts for every timestep,
in seconds.
Returns:
np.array, [1, T/hop_length]: Kurtosis over windows
"""
frame_length = numseconds_to_numsamples(frame_length_seconds, sample_rate)
hop_length = numseconds_to_numsamples(hop_length_seconds, sample_rate)
@metric_slidingwindow(frame_length=frame_length, hop_length=hop_length)
def new_kurtosis(waveform):
return kurtosis(waveform)
return new_kurtosis(waveform)
def get_log_energy(matrix, time_axis=-1):
"""Compute the log energy of a matrix as per Abeyrante et al. 2013.
Args:
matrix (np.array): matrix over which to compute. This
has to be a 1 or 2-dimensional np.array
time_axis (int >= 0): the axis in matrix which corresponds
to time.
Returns:
float: The log energy of matrix, computed as per
the paper above.
"""
assert len(matrix.shape) <= 2, "This function only works on 1d or 2d signals."
return 10 * np.log10(1e-9 + np.sum(matrix ** 2, time_axis) / matrix.shape[time_axis])
def get_log_energy_slidingwindow(waveform, sample_rate, frame_length_seconds=0.04, hop_length_seconds=0.01):
"""Same function as above, but decorated by the metric_slidingwindow decorator.
See above documentation for this.
Args:
waveform (np.array, [T, ]): waveform over which to compute the log energy array
sample_rate (int > 0): number of samples per second in waveform
frame_length_seconds (float): length of the sliding window, in seconds.
hop_length_seconds (float): how much the window shifts for every timestep,
in seconds.
Returns:
np.array, [1, T/hop_length]: log_energy over windows
"""
frame_length = numseconds_to_numsamples(frame_length_seconds, sample_rate)
hop_length = numseconds_to_numsamples(hop_length_seconds, sample_rate)
@metric_slidingwindow(frame_length=frame_length, hop_length=hop_length)
def new_log_energy(waveform):
return get_log_energy(waveform)
return new_log_energy(waveform)
def get_bark_spectrogram(waveform, sample_rate, n_fft_seconds, hop_length_seconds):
"""Convert a spectrogram to a bark-band spectrogram.
Args:
waveform (np.array, [T, ]): waveform over which to compute the bark
spectrogram.
sample_rate (int > 0): number of samples per second in waveform.
n_fft_seconds (float > 0): length of the fft window, in seconds
Returns:
np.array, [n_bark_bands, t]: The original spectrogram
with bins converted into the Bark scale.
"""
bark_bands = [
100, 200, 300, 400, 510, 630, 770, 920, 1080, 1270,
1480, 1720, 2000, 2320, 2700, 3150, 3700, 4400, 5300,
6400, 7700, 9500, 12000, 15500,
]
n_fft = numseconds_to_numsamples(n_fft_seconds, sample_rate)
hop_length = numseconds_to_numsamples(hop_length_seconds, sample_rate)
# [n_frequency_bins, t]
spectrogram, _ = librosa.core.spectrum._spectrogram(waveform, n_fft=n_fft, hop_length=hop_length)
frequencies = librosa.core.fft_frequencies(sr=sample_rate, n_fft=n_fft)
assert spectrogram.shape[0] == frequencies.shape[0], "Different number of frequencies..."
# Initialise the output. It will be of shape [n_bark_bands, t]
output = np.zeros((len(bark_bands), spectrogram.shape[1]), dtype=spectrogram.dtype)
for band_idx in range(len(bark_bands) - 1):
# Sum everything that falls in this bucket.
output[band_idx] = np.sum(
spectrogram[((frequencies >= bark_bands[band_idx]) & (frequencies < bark_bands[band_idx + 1]))], axis=0
)
return output
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/surfboard/surfboard/hnr.py | features/audio_features/helpers/surfboard/surfboard/hnr.py | """This function is inspired by the Speech Analysis repository at
https://github.com/brookemosby/Speech_Analysis
"""
import numpy as np
import peakutils as pu
def get_harmonics_to_noise_ratio(
waveform, sample_rate, min_pitch=75.0, silence_threshold=0.1, periods_per_window=4.5
):
"""Given a waveform, its sample rate, some conditions for voiced and unvoiced
frames (including min pitch and silence threshold), and a "periods per window"
argument, compute the harmonics to noise ratio. This is a good measure of
voice quality and is an important metric in cognitively impaired patients.
Compute the mean hnr_vector: harmonics to noise ratio.
Args:
waveform (np.array, [T, ]): waveform signal
sample_rate (int > 0): sampling rate of the waveform
min_pitch (float > 0): minimum acceptable pitch. converts to
maximum acceptable period.
silence_threshold (1 >= float >= 0): needs to be in [0, 1]. Below this
amplitude, does not consider frames.
periods_per_window (float > 0): 4.5 is best for speech.
Returns:
float: Harmonics to noise ratio of the entire considered
waveform.
"""
assert min_pitch > 0, "Min pitch needs to be > 0"
assert 0 <= silence_threshold <= 1, "Silence threshold need to be in [0, 1]"
hop_length_seconds = periods_per_window / (4.0 * min_pitch)
window_length_seconds = periods_per_window / min_pitch
hop_length = int(hop_length_seconds * sample_rate)
window_length = int(window_length_seconds * sample_rate)
# Now we need to segment the waveform.
frames_iterator = range(max(1, int(waveform.shape[0] / hop_length + 0.5)) + 1)
segmented_waveform = [
waveform[i * hop_length: i * hop_length + window_length] for i in frames_iterator
]
waveform_peak = max(abs(waveform - waveform.mean()))
hnr_vector = []
# Start looping
for index, chunk in enumerate(segmented_waveform):
if chunk.shape[0] > 0:
thischunk_length = chunk.shape[0] / sample_rate
chunk = chunk - chunk.mean()
thischunk_peak = np.max(np.abs(chunk))
if thischunk_peak == 0:
hnr_vector.append(0.5)
else:
chunk_len = len(chunk)
hanning_window = np.hanning(chunk_len)
chunk *= hanning_window
# We start going ahead with FFT. Get n_fft.
n_fft = 2 ** int(np.log2(chunk_len) + 1)
hanning_window = np.hstack(
(hanning_window, np.zeros(n_fft - chunk_len))
)
chunk = np.hstack(
(chunk, np.zeros(n_fft - chunk_len))
)
ffts_outputs = []
for fft_input in [chunk, hanning_window]:
fft_output = np.fft.fft(fft_input)
r = np.nan_to_num(
np.real(
np.fft.fft(fft_output * np.conjugate(fft_output))
)[: chunk_len]
)
ffts_outputs.append(r)
r_x = ffts_outputs[0] / ffts_outputs[1]
r_x /= r_x[0]
indices = pu.indexes(r_x)
# Now we index into r_x and into a linspace with these computed indices.
time_array = np.linspace(0, thischunk_length, r_x.shape[0])
myfilter = time_array[indices]
candidate_values = r_x[indices]
# Perform basic filtering according to period.
# One side. 1.0 / max_pitch is min period.
candidate_values = candidate_values[myfilter >= 1.0 / (sample_rate / 2.0)]
# Update filter.
myfilter = myfilter[myfilter >= 1.0 / (sample_rate / 2.0)]
# Second side: 1.0 / min_pitch is max period.
candidate_values = candidate_values[myfilter <= 1.0 / min_pitch]
for i, v in enumerate(candidate_values):
if v > 1.0:
candidate_values[i] = 1.0 / v
if candidate_values.shape[0] > 0:
strengths = [
np.max(candidate_values), np.max((
0, 2 - (thischunk_peak / waveform_peak) / silence_threshold
))
]
if np.argmax(strengths):
hnr_vector.append(0.5)
else:
hnr_vector.append(strengths[0])
else:
hnr_vector.append(0.5)
hnr_vector = np.array(hnr_vector)[np.array(hnr_vector) > 0.5]
if hnr_vector.shape[0] == 0:
return 0
else:
# Convert to dB.
hnr_vector = 10.0 * np.log10(hnr_vector / (1.0 - hnr_vector))
return np.mean(hnr_vector)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/surfboard/surfboard/jitters.py | features/audio_features/helpers/surfboard/surfboard/jitters.py | #!/usr/bin/env python
"""This file contains all the functions needed to compute the jitters of a waveform."""
import numpy as np
from .utils import (
shifted_sequence,
)
def validate_frequencies(frequencies, p_floor, p_ceil, max_p_factor):
"""Given a sequence of frequencies, [f1, f2, ..., fn], a minimum period,
maximum period, and maximum period factor, first remove all frequencies computed as 0.
Then, if periods are the inverse frequencies, this function returns
True if the sequence of periods satisfies the conditions, otherwise
returns False. In order to satisfy the maximum period factor, the periods
have to satisfy pi / pi+1 < max_p_factor and pi+1 / pi < max_p_factor.
Args:
frequencies (sequence, eg list, of floats): sequence of frequencies == 1 / period.
p_floor (float): minimum acceptable period.
p_ceil (float): maximum acceptable period.
max_p_factor (float): value to use for the period factor principle
Returns:
bool: True if the conditions are met, False otherwise.
"""
for freq in frequencies:
if freq == 0:
return False
periods = [1 / f for f in frequencies]
for period in periods:
if period < p_floor or period > p_ceil:
return False
if len(periods) > 1 and max_p_factor is not None:
for period1, period2 in zip(periods[:-1], periods[1:]):
if period1 / period2 > max_p_factor or period2 / period1 > max_p_factor:
return False
return True
def get_mean_period(frequencies, p_floor, p_ceil, max_p_factor):
"""Given a sequence of frequencies, passes these through the validation phase,
then computes the mean of the remaining periods. Note period = 1/f.
Args:
frequencies (sequence, eg list, of floats): sequence of frequencies
p_floor (float): minimum acceptable period.
p_ceil (float): maximum acceptable period.
max_p_factor (float): value to use for the period factor principle
Returns:
float: The mean of the acceptable periods.
"""
cumsum = 0
counter = 0
for freq in frequencies:
if validate_frequencies([freq], p_floor, p_ceil, max_p_factor):
cumsum += 1 / freq
counter += 1
mean_period = cumsum / counter if counter != 0 else None
return mean_period
def get_local_absolute_jitter(frequencies, p_floor, p_ceil, max_p_factor):
"""Given a sequence of frequencies, and some period conditions,
compute the local absolute jitter, as per
https://royalsocietypublishing.org/action/downloadSupplement?doi=10.1098%2Frsif.2010.0456&file=rsif20100456supp1.pdf
Args:
frequencies (sequence, eg list, of floats): sequence of estimated frequencies
p_floor (float): minimum acceptable period.
p_ceil (float): maximum acceptable period.
max_p_factor (float): value to use for the period factor principle
Returns:
float: the local absolute jitter.
"""
cumsum = 0
counter = 0
for pair in shifted_sequence(frequencies, 2):
freq1, freq2 = pair
if validate_frequencies([freq1, freq2], p_floor, p_ceil, max_p_factor):
counter += 1
cumsum += np.abs((1 / freq1) - (1 / freq2))
return cumsum / counter if counter != 0 else None
def get_local_jitter(frequencies, p_floor, p_ceil, max_p_factor):
"""Given a sequence of frequencies, and some period conditions, compute the local
jitter, as per https://royalsocietypublishing.org/action/downloadSupplement?doi=10.1098%2Frsif.2010.0456&file=rsif20100456supp1.pdf
Args:
frequencies (sequence, eg list, of floats): sequence of estimated frequencies
p_floor (float): minimum acceptable period.
p_ceil (float): maximum acceptable period.
max_p_factor (float): value to use for the period factor principle
Returns:
float: the local jitter.
"""
mean_period = get_mean_period(frequencies, p_floor, p_ceil, max_p_factor)
local_absolute_jitter = get_local_absolute_jitter(frequencies, p_floor, p_ceil, max_p_factor)
if mean_period is not None and local_absolute_jitter is not None:
return local_absolute_jitter / mean_period if mean_period != 0 else None
return None
def get_rap_jitter(frequencies, p_floor, p_ceil, max_p_factor):
"""Given a sequence of frequencies, and some period conditions,
compute the rap jitter, as per
https://royalsocietypublishing.org/action/downloadSupplement?doi=10.1098%2Frsif.2010.0456&file=rsif20100456supp1.pdf
Args:
frequencies (sequence, eg list, of floats): sequence of estimated frequencies
p_floor (float): minimum acceptable period.
p_ceil (float): maximum acceptable period.
max_p_factor (float): value to use for the period factor principle
Returns:
float: the rap jitter.
"""
counter = 0
cumsum = 0
mean_period = get_mean_period(frequencies, p_floor, p_ceil, max_p_factor)
for freq1, freq2, freq3 in shifted_sequence(frequencies, 3):
if validate_frequencies([freq1, freq2, freq3], p_floor, p_ceil, max_p_factor):
cumsum += np.abs(1 / freq2 - (1 / freq1 + 1 / freq2 + 1 / freq3) / 3)
counter += 1
if counter != 0:
rap_jitter = (cumsum / counter) / mean_period if mean_period != 0 else None
return rap_jitter
return None
def get_ppq5_jitter(frequencies, p_floor, p_ceil, max_p_factor):
"""Given a sequence of frequencies, and some period conditions,
compute the ppq5 jitter, as per
https://royalsocietypublishing.org/action/downloadSupplement?doi=10.1098%2Frsif.2010.0456&file=rsif20100456supp1.pdf
Args:
frequencies (sequence, eg list, of floats): sequence of estimated frequencies
p_floor (float): minimum acceptable period.
p_ceil (float): maximum acceptable period.
max_p_factor (float): value to use for the period factor principle
Returns:
float: the ppq5 jitter.
"""
counter = 0
cumsum = 0
mean_period = get_mean_period(frequencies, p_floor, p_ceil, max_p_factor)
for freq1, freq2, freq3, freq4, freq5 in shifted_sequence(frequencies, 5):
if validate_frequencies([freq1, freq2, freq3, freq4, freq5], p_floor, p_ceil, max_p_factor):
counter += 1
cumsum += np.abs(1 / freq3 - (1 / freq1 + 1 / freq2 + 1 / freq3 + 1 / freq4 + 1 / freq5) / 5)
if counter != 0:
ppq5_jitter = (cumsum / counter) / mean_period if mean_period != 0 else None
return ppq5_jitter
return None
def get_ddp_jitter(frequencies, p_floor, p_ceil, max_p_factor):
"""Given a sequence of frequencies, and some period conditions,
compute the ddp jitter, as per
http://www.fon.hum.uva.nl/praat/manual/PointProcess__Get_jitter__ddp____.html
Args:
frequencies (sequence, eg list, of floats): sequence of estimated frequencies
p_floor (float): minimum acceptable period.
p_ceil (float): maximum acceptable period.
max_p_factor (float): value to use for the period factor principle
Returns:
float: the ddp jitter.
"""
counter = 0
cumsum = 0
mean_period = get_mean_period(frequencies, p_floor, p_ceil, max_p_factor)
for freq1, freq2, freq3 in shifted_sequence(frequencies, 3):
if validate_frequencies([freq1, freq2, freq3], p_floor, p_ceil, max_p_factor):
counter += 1
cumsum += np.abs((1 / freq3 - 1 / freq2) - (1 / freq2 - 1 / freq1))
if counter != 0:
ddp_jitter = (cumsum / counter) / mean_period if mean_period != 0 else None
return ddp_jitter
return None
def get_jitters(f0_contour, p_floor=0.0001, p_ceil=0.02, max_p_factor=1.3):
"""Compute the jitters mathematically, according to certain conditions
given by p_floor, p_ceil and max_p_factor.
Args:
f0_contour (np.array [T / hop_length, ]): the fundamental frequency contour.
p_floor (float): minimum acceptable period.
p_ceil (float): maximum acceptable period.
max_p_factor (float): value to use for the period factor principle
Returns:
dict: Dictionary mapping strings to floats, with keys
"localJitter", "localabsoluteJitter", "rapJitter", "ppq5Jitter",
"ddpJitter"
"""
local_absolute_jitter = get_local_absolute_jitter(f0_contour, p_floor, p_ceil, max_p_factor)
local_jitter = get_local_jitter(f0_contour, p_floor, p_ceil, max_p_factor)
rap_jitter = get_rap_jitter(f0_contour, p_floor, p_ceil, max_p_factor)
ppq5_jitter = get_ppq5_jitter(f0_contour, p_floor, p_ceil, max_p_factor)
ddp_jitter = get_ddp_jitter(f0_contour, p_floor, p_ceil, max_p_factor)
jitters_dict = {
"localJitter": local_jitter,
"localabsoluteJitter": local_absolute_jitter,
"rapJitter": rap_jitter,
"ppq5Jitter": ppq5_jitter,
"ddpJitter": ddp_jitter,
}
return jitters_dict
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/surfboard/surfboard/shimmers.py | features/audio_features/helpers/surfboard/surfboard/shimmers.py | #!/usr/bin/env python
"""This file contains all the functions needed to compute the shimmers of a waveform."""
import numpy as np
from .jitters import validate_frequencies
from .utils import (
shifted_sequence,
peak_amplitude_slidingwindow,
)
def validate_amplitudes(amplitudes, frequencies, max_a_factor, p_floor, p_ceil, max_p_factor):
"""First check that frequencies corresponding to this set of amplitudes are valid. Then
Returns True if this set of amplitudes is validated as per the maximum
amplitude factor principle, i.e. if amplitudes = [a1, a2, ... , an], this
functions returns false if any two successive amplitudes alpha, beta satisfy
alpha / beta > max_a_factor or beta / alpha > max_a_factor. False otherwise.
Args:
amplitudes (list): ordered list of amplitudes to run by this principle.
frequencies (sequence, eg list, of floats): sequence of frequencies == 1 / period.
max_a_factor (float): the threshold to run the principle.
p_floor (float): minimum acceptable period.
p_ceil (float): maximum acceptable period.
max_p_factor (float): value to use for the period factor principle
Returns:
bool: True if this set of amplitudes satisifies the principle
and this set of frequencies satisfies the period condition,
False otherwise.
"""
# First check that these frequencies satisfy the periods principle.
if not validate_frequencies(frequencies, p_floor, p_ceil, max_p_factor):
return False
if max_a_factor is not None:
for amp1, amp2 in zip(amplitudes[:-1], amplitudes[1:]):
if amp1 / amp2 > max_a_factor or amp2 / amp1 > max_a_factor:
return False
return True
def get_local_shimmer(amplitudes, frequencies, max_a_factor, p_floor, p_ceil, max_p_factor):
"""Given a list of amplitudes, returns the localShimmer as per
https://royalsocietypublishing.org/action/downloadSupplement?doi=10.1098%2Frsif.2010.0456&file=rsif20100456supp1.pdf
Args:
amplitudes (list of floats): The list of peak amplitudes in each frame.
max_a_factor (float): The maximum A factor to validate amplitudes. See
validate_amplitudes().
Returns:
float: The local shimmer computed over this sequence of amplitudes.
"""
cumsum = 0
counter = 0
for (freq1, freq2), (amp1, amp2) in zip(shifted_sequence(frequencies, 2), shifted_sequence(amplitudes, 2)):
if validate_amplitudes([amp1, amp2], [freq1, freq2], max_a_factor, p_floor, p_ceil, max_p_factor):
cumsum += np.abs(amp1 - amp2)
counter += 1
mean_amplitude = np.mean(amplitudes)
if counter != 0:
local_shimmer = (cumsum / counter) / mean_amplitude if mean_amplitude != 0 else None
return local_shimmer
return None
def get_local_db_shimmer(amplitudes, frequencies, max_a_factor, p_floor, p_ceil, max_p_factor):
"""Given a list of amplitudes, returns the localdbShimmer as per
https://royalsocietypublishing.org/action/downloadSupplement?doi=10.1098%2Frsif.2010.0456&file=rsif20100456supp1.pdf
Args:
amplitudes (list of floats): The list of peak amplitudes in each frame.
max_a_factor (float): The maximum A factor to validate amplitudes. See
validate_amplitudes().
Returns:
float: The local DB shimmer computed over this sequence of amplitudes.
"""
cumsum = 0
counter = 0
for (freq1, freq2), (amp1, amp2) in zip(shifted_sequence(frequencies, 2), shifted_sequence(amplitudes, 2)):
if validate_amplitudes([amp1, amp2], [freq1, freq2], max_a_factor, p_floor, p_ceil, max_p_factor):
cumsum += np.abs(20 * np.log10(amp2 / amp1))
counter += 1
local_db_shimmer = cumsum / counter if counter != 0 else None
return local_db_shimmer
def get_apq_shimmer(amplitudes, frequencies, max_a_factor, p_floor, p_ceil, max_p_factor, apq_no):
"""Given a list of amplitudes, returns the apq{apq_no}Shimmer as per
https://royalsocietypublishing.org/action/downloadSupplement?doi=10.1098%2Frsif.2010.0456&file=rsif20100456supp1.pdf
Args:
amplitudes (list of floats): The list of peak amplitudes in each frame.
max_a_factor (float): The maximum A factor to validate amplitudes. See
validate_amplitudes().
apq_no (int): an odd number which corresponds to the number of neighbors
used to compute the shimmer.
Returns:
float: The apqShimmer computed over this sequence of amplitudes
with this APQ number.
"""
assert apq_no % 2 == 1, "To compute these, must have an odd APQ number."
counter = 0
cumsum = 0
for freqs, amps in zip(shifted_sequence(frequencies, apq_no), shifted_sequence(amplitudes, apq_no)):
if validate_amplitudes(amps, freqs, max_a_factor, p_floor, p_ceil, max_p_factor):
counter += 1
# apq_no is always odd, so int((apq_no - 1) / 2) doesn't assume which integer to return.
cumsum += np.abs(amps[int((apq_no - 1) / 2)] - np.sum(amps) / apq_no)
mean_amplitude = np.mean(amplitudes)
if counter != 0:
apq_shimmer = (cumsum / counter) / mean_amplitude if mean_amplitude != 0 else None
return apq_shimmer
return None
def get_shimmers(
waveform, sample_rate, f0_contour, max_a_factor=1.6, p_floor=0.0001,
p_ceil=0.02, max_p_factor=1.3
):
"""Compute five different types of shimmers using functions defined above.
Args:
waveform (np.array, [T, ]): waveform over which to compute shimmers
sample_rate (int): sampling rate of waveform.
f0_contour (np.array, [T / hop_length, ]): the fundamental frequency contour.
max_a_factor (float): value to use for amplitude factor principle
p_floor (float): minimum acceptable period.
p_ceil (float): maximum acceptable period.
max_p_factor (float): value to use for the period factor principle
Returns:
dict: Dictionary mapping strings to floats, with keys
"localShimmer", "localdbShimmer", "apq3Shimmer", "apq5Shimmer",
"apq11Shimmer"
"""
amplitudes = peak_amplitude_slidingwindow(waveform, sample_rate)[0]
local_shimmer = get_local_shimmer(
amplitudes, f0_contour, max_a_factor, p_floor, p_ceil, max_p_factor,
)
local_db_shimmer = get_local_db_shimmer(
amplitudes, f0_contour, max_a_factor, p_floor, p_ceil, max_p_factor,
)
apq3_shimmer = get_apq_shimmer(
amplitudes, f0_contour, max_a_factor, p_floor, p_ceil, max_p_factor, apq_no=3
)
apq5_shimmer = get_apq_shimmer(
amplitudes, f0_contour, max_a_factor, p_floor, p_ceil, max_p_factor, apq_no=5
)
apq11_shimmer = get_apq_shimmer(
amplitudes, f0_contour, max_a_factor, p_floor, p_ceil, max_p_factor, apq_no=11
)
shimmers_dict = {
"localShimmer": local_shimmer,
"localdbShimmer": local_db_shimmer,
"apq3Shimmer": apq3_shimmer,
"apq5Shimmer": apq5_shimmer,
"apq11Shimmer": apq11_shimmer,
}
return shimmers_dict
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/surfboard/surfboard/tests/__init__.py | features/audio_features/helpers/surfboard/surfboard/tests/__init__.py | python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false | |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/surfboard/surfboard/tests/test_feature_extraction.py | features/audio_features/helpers/surfboard/surfboard/tests/test_feature_extraction.py | import os
import pytest
import yaml
import pandas as pd
import numpy as np
from surfboard import sound
from surfboard.utils import example_audio_file
from surfboard.feature_extraction import (
extract_features,
extract_features_from_paths
)
@pytest.fixture
def flat_waveform():
wave = np.ones((24000,))
return sound.Waveform(signal=wave, sample_rate=24000)
@pytest.fixture
def waveform():
filename = example_audio_file('a')
return sound.Waveform(
path=filename, sample_rate=24000
)
def test_extract_features(waveform, flat_waveform):
"""Test the extract features function with and without statistics from the
all_features.yaml example config.
Args:
waveform (Waveform): The waveform PyTest fixture returning an
example audio file.
flat_waveform (Waveform): The flat_waveform PyTest fixture returning
a flat wave of ones.
"""
config_path = os.path.join(
os.path.dirname(os.path.realpath(__file__)),
'../../example_configs/all_features.yaml'
)
config = yaml.full_load(open(config_path, 'r'))
components_list = list(config['components'])
statistics_list = list(config['statistics'])
output_without_statistics = extract_features(
[waveform, flat_waveform], components_list
)
# Check correct return type.
assert isinstance(
output_without_statistics, pd.DataFrame
)
# Check that not all values are NaNs.
assert not output_without_statistics.isnull().values.all()
output_with_statistics = extract_features(
[waveform, flat_waveform], components_list, statistics_list
)
# Check correct return type.
assert isinstance(output_with_statistics, pd.DataFrame)
# Check that not all values are NaNs.
assert not output_with_statistics.isnull().values.all()
def test_extract_features_from_paths():
"""Test the extract features from paths function with and without
statistics from the all_features.yaml example config.
"""
config_path = os.path.join(
os.path.dirname(os.path.realpath(__file__)),
'../../example_configs/all_features.yaml'
)
config = yaml.full_load(open(config_path, 'r'))
components_list = list(config['components'])
statistics_list = list(config['statistics'])
output_without_statistics = extract_features_from_paths(
[example_audio_file('e')], components_list
)
# Check correct return type.
assert isinstance(
output_without_statistics, pd.DataFrame
)
# Check that not all values are NaNs.
assert not output_without_statistics.isnull().values.all()
output_with_statistics = extract_features_from_paths(
[example_audio_file('o')], components_list , statistics_list
)
# Check correct return type.
assert isinstance(output_with_statistics, pd.DataFrame)
# Check that not all values are NaNs.
assert not output_with_statistics.isnull().values.all() | python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/surfboard/surfboard/tests/test_barrel.py | features/audio_features/helpers/surfboard/surfboard/tests/test_barrel.py | import pytest
import numpy as np
from surfboard import statistics
@pytest.fixture
def barrel():
feature = np.ones((25, 267))
return statistics.Barrel(feature)
def test_barrel_constructor(barrel):
feat = barrel()
assert (feat == np.ones((25, 267))).all()
def test_first_derivative(barrel):
d1 = barrel.get_first_derivative()
assert isinstance(d1, np.ndarray)
def test_second_derivative(barrel):
d2 = barrel.get_second_derivative()
assert isinstance(d2, np.ndarray)
def test_max(barrel):
m = barrel.max()
assert isinstance(m, np.ndarray)
def test_min(barrel):
m = barrel.min()
assert isinstance(m, np.ndarray)
def test_mean(barrel):
m = barrel.mean()
assert isinstance(m, np.ndarray)
def test_d1_mean(barrel):
m = barrel.first_derivative_mean()
assert isinstance(m, np.ndarray)
def test_d2_mean(barrel):
m = barrel.second_derivative_mean()
assert isinstance(m, np.ndarray)
def test_std(barrel):
std = barrel.std()
assert isinstance(std, np.ndarray)
def test_d1_std(barrel):
std = barrel.first_derivative_std()
assert isinstance(std, np.ndarray)
def test_d2_std(barrel):
std = barrel.second_derivative_std()
assert isinstance(std, np.ndarray)
def test_skewness(barrel):
skew = barrel.skewness()
assert isinstance(skew, np.ndarray)
def test_d1_skewness(barrel):
skew = barrel.first_derivative_skewness()
assert isinstance(skew, np.ndarray)
def test_d2_skewness(barrel):
skew = barrel.second_derivative_skewness()
assert isinstance(skew, np.ndarray)
def test_kurtosis(barrel):
kurt = barrel.kurtosis()
assert isinstance(kurt, np.ndarray)
def test_d1_kurt(barrel):
kurt = barrel.first_derivative_kurtosis()
assert isinstance(kurt, np.ndarray)
def test_d2_kurt(barrel):
kurt = barrel.second_derivative_kurtosis()
assert isinstance(kurt, np.ndarray)
def test_q1(barrel):
q1 = barrel.first_quartile()
assert isinstance(q1, np.ndarray)
def test_q2(barrel):
q2 = barrel.second_quartile()
assert isinstance(q2, np.ndarray)
def test_q3(barrel):
q3 = barrel.third_quartile()
assert isinstance(q3, np.ndarray)
def test_q2_q1_range(barrel):
q2q1range = barrel.q2_q1_range()
assert isinstance(q2q1range, np.ndarray)
def test_q3_q2_range(barrel):
q3q2range = barrel.q3_q2_range()
assert isinstance(q3q2range, np.ndarray)
def test_q3_q1_range(barrel):
q3q1range = barrel.q3_q1_range()
assert isinstance(q3q1range, np.ndarray)
def test_percentile_1(barrel):
percentile_1 = barrel.percentile_1()
assert isinstance(percentile_1, np.ndarray)
def test_percentile_99(barrel):
percentile_99 = barrel.percentile_99()
assert isinstance(percentile_99, np.ndarray)
def test_percentile_1_99_range(barrel):
r = barrel.percentile_1_99_range()
assert isinstance(r, np.ndarray)
def test_linear_regression_offset(barrel):
x0 = barrel.linear_regression_offset()
assert isinstance(x0, np.ndarray)
def test_linear_regression_slope(barrel):
x1 = barrel.linear_regression_slope()
assert isinstance(x1, np.ndarray)
def test_linear_regression_mse(barrel):
mse = barrel.linear_regression_mse()
assert isinstance(mse, np.ndarray)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/audio_features/helpers/surfboard/surfboard/tests/test_sound.py | features/audio_features/helpers/surfboard/surfboard/tests/test_sound.py | import pytest
import numpy as np
from surfboard import sound
from surfboard.utils import example_audio_file
@pytest.fixture
def flat_waveform():
wave = np.ones((24000,))
return sound.Waveform(signal=wave, sample_rate=24000)
@pytest.fixture
def waveform():
filename = example_audio_file('a')
return sound.Waveform(
path=filename, sample_rate=24000
)
def test_constructor_signal(flat_waveform):
"""Test Waveform constructor from np array"""
sound = flat_waveform.waveform
assert isinstance(sound, np.ndarray)
def test_constructor_path(waveform):
"""Test Waveform constructor from a path"""
sound = waveform.waveform
assert isinstance(sound, np.ndarray)
def test_mfcc(waveform):
"""Test MFCCs"""
mfcc = waveform.mfcc()
assert isinstance(mfcc, np.ndarray)
def test_log_melspec(waveform):
"""Test Log Mel Spectrogram"""
melspec = waveform.log_melspec()
assert isinstance(melspec, np.ndarray)
def test_magnitude_spectrum(waveform):
"""Test Magnitude Spectrum"""
S = waveform.magnitude_spectrum()
assert isinstance(S, np.ndarray)
def test_bark_spectrogram(waveform):
"""Test Bark Spectrogram"""
S = waveform.bark_spectrogram()
assert isinstance(S, np.ndarray)
def test_morlet_cwt(flat_waveform):
"""Test Morlet Continuous Wavelet Transform
Flat waveform because too long otherwise.
"""
cwt = flat_waveform.morlet_cwt()
assert isinstance(cwt, np.ndarray)
def test_chroma_stft(waveform):
"""Test Chromagram from STFT"""
chroma_stft = waveform.chroma_stft()
assert isinstance(chroma_stft, np.ndarray)
def test_chroma_cqt(waveform):
"""Test Chromagram from CQT"""
chroma_cqt = waveform.chroma_cqt()
assert isinstance(chroma_cqt, np.ndarray)
def test_chroma_cens(waveform):
"""Test Chroma CENS"""
chroma_cens = waveform.chroma_cens()
assert isinstance(chroma_cens, np.ndarray)
def test_spectral_slope(waveform):
"""Test spectral slope"""
spectral_slope = waveform.spectral_slope()
assert isinstance(spectral_slope, np.ndarray)
def test_spectral_flux(waveform):
"""Test spectral flux"""
spectral_flux = waveform.spectral_flux()
assert isinstance(spectral_flux, np.ndarray)
def test_spectral_entropy(waveform):
"""Test spectral entropy"""
spectral_entropy = waveform.spectral_entropy()
assert isinstance(spectral_entropy, np.ndarray)
def test_spectral_centroid(waveform):
"""Test spectral centroid"""
spectral_centroid = waveform.spectral_centroid()
assert isinstance(spectral_centroid, np.ndarray)
def test_spectral_spread(waveform):
"""Test spectral spread"""
spectral_spread = waveform.spectral_spread()
assert isinstance(spectral_spread, np.ndarray)
def test_spectral_skewness(waveform):
"""Test spectral skewness"""
spectral_skewness = waveform.spectral_skewness()
assert isinstance(spectral_skewness, np.ndarray)
def test_spectral_kurtosis(waveform):
"""Test spectral kurtosis"""
spectral_kurtosis = waveform.spectral_kurtosis()
assert isinstance(spectral_kurtosis, np.ndarray)
def test_spectral_flatness(waveform):
"""Test spectral flatness"""
spectral_flatness = waveform.spectral_flatness()
assert isinstance(spectral_flatness, np.ndarray)
def test_spectral_rolloff(waveform):
"""Test spectral rolloff"""
spectral_rolloff = waveform.spectral_rolloff()
assert isinstance(spectral_rolloff, np.ndarray)
def test_loudness(waveform):
"""Test loudness"""
loudness = waveform.loudness()
assert isinstance(loudness, float)
def test_loudness_slidingwindow(waveform):
"""Test loudness over sliding windows"""
loudness_slidingwindow = waveform.loudness_slidingwindow()
assert isinstance(loudness_slidingwindow, np.ndarray)
def test_shannon_entropy(waveform):
"""Test Shannon entropy"""
shannon_entropy = waveform.shannon_entropy()
assert isinstance(shannon_entropy, float)
def test_shannon_entropy_slidingwindow(waveform):
"""Test Shannon entropy over sliding windows"""
shannon_entropy = waveform.shannon_entropy_slidingwindow()
assert isinstance(shannon_entropy, np.ndarray)
def test_zerocrossing(waveform):
"""Test zero crossing"""
zerocrossing = waveform.zerocrossing()
assert isinstance(zerocrossing, dict)
def test_zerocrossing_slidingwindow(waveform):
"""Test zero crossing over sliding windows"""
zerocrossing_slidingwindow = waveform.zerocrossing_slidingwindow()
assert isinstance(zerocrossing_slidingwindow, np.ndarray)
def test_rms(waveform):
"""Test energy"""
rms = waveform.rms()
assert isinstance(rms, np.ndarray)
def test_intensity(waveform):
"""Test intensity"""
intensity = waveform.intensity()
assert isinstance(intensity, np.ndarray)
def test_crest_factor(waveform):
"""Test crest factor"""
crest_factor = waveform.crest_factor()
assert isinstance(crest_factor, np.ndarray)
def test_f0_swipe(waveform):
"""Test f0 with SWIPE method"""
f0 = waveform.f0_contour(method='swipe')
assert isinstance(f0, np.ndarray)
def test_f0_rapt(waveform):
"""Test f0 with RAPT method"""
f0 = waveform.f0_contour(method='rapt')
assert isinstance(f0, np.ndarray)
def test_f0_statistics(waveform):
"""Test f0 statistics"""
f0_statistics = waveform.f0_statistics(method='rapt')
assert isinstance(f0_statistics, dict)
def test_ppe(waveform):
"""Test pitch period entropy"""
try:
ppe = waveform.ppe()
except ValueError:
return
assert isinstance(ppe, float)
def test_jitters(waveform):
"""Test jitters"""
jitters = waveform.jitters()
assert isinstance(jitters, dict)
def test_shimmers(waveform):
"""Test shimmers"""
shimmers = waveform.shimmers()
assert isinstance(shimmers, dict)
def test_hnr(waveform):
"""Test harmonics to noise ratio"""
hnr = waveform.hnr()
assert isinstance(hnr, float)
def test_dfa(waveform):
"""Test detrended fluctuation analysis"""
dfa = waveform.dfa()
assert isinstance(dfa, float)
def test_lpc(waveform):
"""Test Linear Prediction Coefficients"""
lpc = waveform.lpc(order=200)
assert isinstance(lpc, dict)
def test_lsf(waveform):
"""Test Linear Spectral Frequencies"""
lsf = waveform.lsf(order=200)
assert isinstance(lsf, dict)
def test_formants(waveform):
"""Test Formants"""
formants = waveform.formants()
assert isinstance(formants, dict)
def test_formants_slidingwindow(waveform):
"""Test Formants computes over sliding windows"""
try:
formants_slidingwindow = waveform.formants_slidingwindow()
except ValueError:
return
assert isinstance(formants_slidingwindow, np.ndarray)
def test_kurtosis_slidingwindow(waveform):
"""Test kurtosis computed over sliding windows"""
kurtosis = waveform.kurtosis_slidingwindow()
assert isinstance(kurtosis, np.ndarray)
def test_log_energy(waveform):
"""Test log energy"""
log_energy = waveform.log_energy()
assert isinstance(log_energy, float)
def test_log_energy_slidingwindow(waveform):
"""Test log energy computed over sliding windows"""
log_energy = waveform.log_energy_slidingwindow()
assert isinstance(log_energy, np.ndarray)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/video_features/y8m_features.py | features/video_features/y8m_features.py | '''
AAA lllllll lllllll iiii
A:::A l:::::l l:::::l i::::i
A:::::A l:::::l l:::::l iiii
A:::::::A l:::::l l:::::l
A:::::::::A l::::l l::::l iiiiiii eeeeeeeeeeee
A:::::A:::::A l::::l l::::l i:::::i ee::::::::::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::eeeee:::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::e e:::::e
A:::::A A:::::A l::::l l::::l i::::i e:::::::eeeee::::::e
A:::::AAAAAAAAA:::::A l::::l l::::l i::::i e:::::::::::::::::e
A:::::::::::::::::::::A l::::l l::::l i::::i e::::::eeeeeeeeeee
A:::::AAAAAAAAAAAAA:::::A l::::l l::::l i::::i e:::::::e
A:::::A A:::::A l::::::ll::::::li::::::ie::::::::e
A:::::A A:::::A l::::::ll::::::li::::::i e::::::::eeeeeeee
A:::::A A:::::A l::::::ll::::::li::::::i ee:::::::::::::e
AAAAAAA AAAAAAAlllllllllllllllliiiiiiii eeeeeeeeeeeeee
______ _ ___ ______ _____
| ___| | | / _ \ | ___ \_ _| _
| |_ ___ __ _| |_ _ _ _ __ ___ ___ / /_\ \| |_/ / | | (_)
| _/ _ \/ _` | __| | | | '__/ _ \/ __| | _ || __/ | |
| || __/ (_| | |_| |_| | | | __/\__ \ | | | || | _| |_ _
\_| \___|\__,_|\__|\__,_|_| \___||___/ \_| |_/\_| \___/ (_)
_ _ _ _
| | | (_) | |
| | | |_ __| | ___ ___
| | | | |/ _` |/ _ \/ _ \
\ \_/ / | (_| | __/ (_) |
\___/|_|\__,_|\___|\___/
Featurize folders of videos if the default_video_features = ['y8m_features']
To read more about the y8m embedding, check out https://research.google.com/youtube8m/
Note that this embedding is modified to include the y8m feature set along with audio
and text features.
'''
import os, sys, tarfile, numpy
from six.moves import urllib
import tensorflow as tf
from PIL import Image
import numpy
import cv2, os, random, json, sys, getpass, pickle, datetime, time, librosa, shutil, gensim, nltk
from nltk import word_tokenize
from nltk.classify import apply_features, SklearnClassifier, maxent
import speech_recognition as sr
from pydub import AudioSegment
from sklearn import preprocessing
from sklearn import svm
from sklearn import metrics
from textblob import TextBlob
from operator import itemgetter
from matplotlib import pyplot as plt
from PIL import Image
import skvideo.io
import skvideo.motion
import skvideo.measure
from moviepy.editor import VideoFileClip
from matplotlib import pyplot as plt
from pydub import AudioSegment
def prev_dir(directory):
g=directory.split('/')
# print(g)
lastdir=g[len(g)-1]
i1=directory.find(lastdir)
directory=directory[0:i1]
return directory
# import custom audioset directory
basedir=os.getcwd()
prevdir=prev_dir(basedir)
audioset_dir=prevdir+'audio_features'
sys.path.append(audioset_dir)
import audioset_features as af
print('imported audioset features!')
os.chdir(basedir)
#### to extract tesseract features
sys.path.append(prevdir+ '/image_features')
import tesseract_features as tff
os.chdir(basedir)
# import fast featurize
text_dir=prevdir+'text_features'
os.chdir(text_dir)
sys.path.append(text_dir)
import fast_features as ff
print('imported fast features!')
os.chdir(basedir)
INCEPTION_TF_GRAPH = 'http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz'
YT8M_PCA_MAT = 'http://data.yt8m.org/yt8m_pca.tgz'
MODEL_DIR = os.path.join(os.getenv('HOME'), 'yt8m')
class YouTube8MFeatureExtractor(object):
"""Extracts YouTube8M features for RGB frames.
First time constructing this class will create directory `yt8m` inside your
home directory, and will download inception model (85 MB) and YouTube8M PCA
matrix (15 MB). If you want to use another directory, then pass it to argument
`model_dir` of constructor.
If the model_dir exist and contains the necessary files, then files will be
re-used without download.
Usage Example:
from PIL import Image
import numpy
# Instantiate extractor. Slow if called first time on your machine, as it
# needs to download 100 MB.
extractor = YouTube8MFeatureExtractor()
image_file = os.path.join(extractor._model_dir, 'cropped_panda.jpg')
im = numpy.array(Image.open(image_file))
features = extractor.extract_rgb_frame_features(im)
** Note: OpenCV reverses the order of channels (i.e. orders channels as BGR
instead of RGB). If you are using OpenCV, then you must do:
im = im[:, :, ::-1] # Reverses order on last (i.e. channel) dimension.
then call `extractor.extract_rgb_frame_features(im)`
"""
def __init__(self, model_dir=MODEL_DIR):
# Create MODEL_DIR if not created.
self._model_dir = model_dir
if not os.path.exists(model_dir):
os.makedirs(model_dir)
# Load PCA Matrix.
download_path = self._maybe_download(YT8M_PCA_MAT)
pca_mean = os.path.join(self._model_dir, 'mean.npy')
if not os.path.exists(pca_mean):
tarfile.open(download_path, 'r:gz').extractall(model_dir)
self._load_pca()
# Load Inception Network
download_path = self._maybe_download(INCEPTION_TF_GRAPH)
inception_proto_file = os.path.join(self._model_dir,
'classify_image_graph_def.pb')
if not os.path.exists(inception_proto_file):
tarfile.open(download_path, 'r:gz').extractall(model_dir)
self._load_inception(inception_proto_file)
def extract_rgb_frame_features(self, frame_rgb, apply_pca=True):
"""Applies the YouTube8M feature extraction over an RGB frame.
This passes `frame_rgb` to inception3 model, extracting hidden layer
activations and passing it to the YouTube8M PCA transformation.
Args:
frame_rgb: numpy array of uint8 with shape (height, width, channels) where
channels must be 3 (RGB), and height and weight can be anything, as the
inception model will resize.
apply_pca: If not set, PCA transformation will be skipped.
Returns:
Output of inception from `frame_rgb` (2048-D) and optionally passed into
YouTube8M PCA transformation (1024-D).
"""
assert len(frame_rgb.shape) == 3
assert frame_rgb.shape[2] == 3 # 3 channels (R, G, B)
with self._inception_graph.as_default():
if apply_pca:
frame_features = self.session.run(
'pca_final_feature:0', feed_dict={'DecodeJpeg:0': frame_rgb})
else:
frame_features = self.session.run(
'pool_3/_reshape:0', feed_dict={'DecodeJpeg:0': frame_rgb})
frame_features = frame_features[0]
return frame_features
def apply_pca(self, frame_features):
"""Applies the YouTube8M PCA Transformation over `frame_features`.
Args:
frame_features: numpy array of floats, 2048 dimensional vector.
Returns:
1024 dimensional vector as a numpy array.
"""
# Subtract mean
feats = frame_features - self.pca_mean
# Multiply by eigenvectors.
feats = feats.reshape((1, 2048)).dot(self.pca_eigenvecs).reshape((1024,))
# Whiten
feats /= numpy.sqrt(self.pca_eigenvals + 1e-4)
return feats
def _maybe_download(self, url):
"""Downloads `url` if not in `_model_dir`."""
filename = os.path.basename(url)
download_path = os.path.join(self._model_dir, filename)
if os.path.exists(download_path):
return download_path
def _progress(count, block_size, total_size):
sys.stdout.write(
'\r>> Downloading %s %.1f%%' %
(filename, float(count * block_size) / float(total_size) * 100.0))
sys.stdout.flush()
urllib.request.urlretrieve(url, download_path, _progress)
statinfo = os.stat(download_path)
print('Succesfully downloaded', filename, statinfo.st_size, 'bytes.')
return download_path
def _load_inception(self, proto_file):
graph_def = tf.GraphDef.FromString(open(proto_file, 'rb').read())
self._inception_graph = tf.Graph()
with self._inception_graph.as_default():
_ = tf.import_graph_def(graph_def, name='')
self.session = tf.Session()
Frame_Features = self.session.graph.get_tensor_by_name(
'pool_3/_reshape:0')
Pca_Mean = tf.constant(value=self.pca_mean, dtype=tf.float32)
Pca_Eigenvecs = tf.constant(value=self.pca_eigenvecs, dtype=tf.float32)
Pca_Eigenvals = tf.constant(value=self.pca_eigenvals, dtype=tf.float32)
Feats = Frame_Features[0] - Pca_Mean
Feats = tf.reshape(
tf.matmul(tf.reshape(Feats, [1, 2048]), Pca_Eigenvecs), [
1024,
])
tf.divide(Feats, tf.sqrt(Pca_Eigenvals + 1e-4), name='pca_final_feature')
def _load_pca(self):
self.pca_mean = numpy.load(os.path.join(self._model_dir, 'mean.npy'))[:, 0]
self.pca_eigenvals = numpy.load(
os.path.join(self._model_dir, 'eigenvals.npy'))[:1024, 0]
self.pca_eigenvecs = numpy.load(
os.path.join(self._model_dir, 'eigenvecs.npy')).T[:, :1024]
def transcribe(wavfile):
r = sr.Recognizer()
# use wavfile as the audio source (must be .wav file)
with sr.AudioFile(wavfile) as source:
#extract audio data from the file
audio = r.record(source)
transcript=r.recognize_sphinx(audio)
print(transcript)
return transcript
# Instantiate extractor. Slow if called first time on your machine, as it
# needs to download 100 MB.
def y8m_featurize(videofile, process_dir, help_dir, fast_model):
now=os.getcwd()
# PREPROCESSING
#############################################
# metadata (should be .mp4)
clip = VideoFileClip(videofile)
duration = clip.duration
videodata=skvideo.io.vread(videofile)
frames, rows, cols, channels = videodata.shape
metadata=skvideo.io.ffprobe(videofile)
frame=videodata[0]
r,c,ch=frame.shape
try:
os.mkdir('output')
os.chdir('output')
outputdir=os.getcwd()
except:
shutil.rmtree('output')
os.mkdir('output')
os.chdir('output')
outputdir=os.getcwd()
#write all the images every 10 frames in the video
for i in range(0,len(videodata),25):
#row, col, channels
skvideo.io.vwrite("output"+str(i)+".png", videodata[i])
listdir=os.listdir()
(r,c,ch)=cv2.imread(listdir[0]).shape
img=numpy.zeros((r,c,ch))
iterations=0
#take first image as a background image
background=cv2.imread(listdir[1])
image_features=numpy.zeros(1024)
image_features2=numpy.zeros(63)
image_transcript=''
for i in range(len(listdir)):
if listdir[i][-4:]=='.png':
os.chdir(outputdir)
frame_new=cv2.imread(listdir[i])
print(os.getcwd())
print(listdir[i])
print(frame)
img=img+frame_new
iterations=iterations+1
# get features
extractor = YouTube8MFeatureExtractor(model_dir=help_dir)
im = numpy.array(Image.open(listdir[i]))
image_features_temp = extractor.extract_rgb_frame_features(im)
image_features=image_features+image_features_temp
ttranscript, tfeatures, tlabels = tff.tesseract_featurize(listdir[i])
image_transcript=image_transcript+ttranscript
image_features2=image_features2+tfeatures
#os.remove(listdir[i])
# averaged image features
image_features=(1/iterations)*image_features
image_features2=(1/iterations)*image_features2
# averaged image over background
img=(1/iterations)*img-background
skvideo.io.vwrite("output.png", img)
extractor=YouTube8MFeatureExtractor(model_dir=help_dir)
im = numpy.array(Image.open('output.png'))
avg_image_features = extractor.extract_rgb_frame_features(im)
os.remove('output.png')
os.chdir(now)
video_features=image_features+avg_image_features
video_labels=list()
for i in range(len(video_features)):
video_labels.append('Y8M_feature_%s'%(str(i+1)))
avg_image_labels2=list()
for i in range(len(tlabels)):
avg_image_labels2.append('avg_imgtranscript_'+tlabels[i])
# make wavfile from video file and get average AudioSet embedding features
wavfile = videofile[0:-4]+'.wav'
os.system('ffmpeg -i %s %s'%(videofile,wavfile))
audio_features, audio_labels = af.audioset_featurize(wavfile, audioset_dir, process_dir)
a_features=numpy.zeros(len(audio_features[0]))
for i in range(len(audio_features)):
a_features=a_features+audio_features[i]
# average all the audioset features
audio_features=(1/len(audio_features[0]))*a_features
audio_labels=list()
for i in range(len(audio_features)):
audio_labels.append('audioset_feature_%s'%(str(i+1)))
# extract text and get using FastText model
transcript = transcribe(wavfile)
text_features, text_labels = ff.fast_featurize(transcript, fast_model)
features=numpy.append(video_features,image_features2)
features=numpy.append(features, audio_features)
features=numpy.append(features,text_features)
labels=video_labels+avg_image_labels2+audio_labels+text_labels
os.remove(videofile[0:-4]+'.wav')
shutil.rmtree('output')
return features, labels, transcript, image_transcript
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/video_features/video_features.py | features/video_features/video_features.py | '''
AAA lllllll lllllll iiii
A:::A l:::::l l:::::l i::::i
A:::::A l:::::l l:::::l iiii
A:::::::A l:::::l l:::::l
A:::::::::A l::::l l::::l iiiiiii eeeeeeeeeeee
A:::::A:::::A l::::l l::::l i:::::i ee::::::::::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::eeeee:::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::e e:::::e
A:::::A A:::::A l::::l l::::l i::::i e:::::::eeeee::::::e
A:::::AAAAAAAAA:::::A l::::l l::::l i::::i e:::::::::::::::::e
A:::::::::::::::::::::A l::::l l::::l i::::i e::::::eeeeeeeeeee
A:::::AAAAAAAAAAAAA:::::A l::::l l::::l i::::i e:::::::e
A:::::A A:::::A l::::::ll::::::li::::::ie::::::::e
A:::::A A:::::A l::::::ll::::::li::::::i e::::::::eeeeeeee
A:::::A A:::::A l::::::ll::::::li::::::i ee:::::::::::::e
AAAAAAA AAAAAAAlllllllllllllllliiiiiiii eeeeeeeeeeeeee
______ _ ___ ______ _____
| ___| | | / _ \ | ___ \_ _| _
| |_ ___ __ _| |_ _ _ _ __ ___ ___ / /_\ \| |_/ / | | (_)
| _/ _ \/ _` | __| | | | '__/ _ \/ __| | _ || __/ | |
| || __/ (_| | |_| |_| | | | __/\__ \ | | | || | _| |_ _
\_| \___|\__,_|\__|\__,_|_| \___||___/ \_| |_/\_| \___/ (_)
_ _ _ _
| | | (_) | |
| | | |_ __| | ___ ___
| | | | |/ _` |/ _ \/ _ \
\ \_/ / | (_| | __/ (_) |
\___/|_|\__,_|\___|\___/
Featurize folders of videos if the default_video_features = ['video_features']
Note that htis uses a variety of libraries and can have dependency issues if you
have used other Allie functions. This heavily relies on skvideo, opencv, and scikit-learn.
'''
import numpy as np
import cv2, os, random, json, sys, getpass, pickle, datetime, time, librosa, shutil, gensim, nltk
from nltk import word_tokenize
from nltk.classify import apply_features, SklearnClassifier, maxent
import speech_recognition as sr
from pydub import AudioSegment
from sklearn import preprocessing
from sklearn import svm
from sklearn import metrics
from textblob import TextBlob
from operator import itemgetter
from matplotlib import pyplot as plt
from PIL import Image
import skvideo.io
import skvideo.motion
import skvideo.measure
from moviepy.editor import VideoFileClip
from matplotlib import pyplot as plt
from pydub import AudioSegment
def prev_dir(directory):
g=directory.split('/')
# print(g)
lastdir=g[len(g)-1]
i1=directory.find(lastdir)
directory=directory[0:i1]
return directory
#### to extract tesseract features
curdir=os.getcwd()
import helpers.tesseract_features as tf
os.chdir(curdir)
# DEFINE HELPER FUNCTIONS
#############################################################
def convert(file):
clip = VideoFileClip(file)
duration = clip.duration
if duration < 30:
if file[-4:] in ['.mov','.avi','.flv','.wmv']:
filename=file[0:-4]+'.mp4'
os.system('ffmpeg -i %s -an %s'%(file,filename))
os.remove(file)
elif file[-4:] == '.mp4':
filename=file
else:
filename=file
os.remove(file)
else:
filename=file
os.remove(file)
return filename
def haar_featurize(cur_dir, haar_dir, img):
os.chdir(haar_dir)
# load image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# assumes all files of haarcascades are in current directory
one = cv2.CascadeClassifier('haarcascade_eye_tree_eyeglasses.xml')
one = one.detectMultiScale(gray, 1.3, 5)
one = len(one)
two = cv2.CascadeClassifier('haarcascade_eye.xml')
two = two.detectMultiScale(gray, 1.3, 5)
two = len(two)
three = cv2.CascadeClassifier('haarcascade_frontalcatface_extended.xml')
three = three.detectMultiScale(gray, 1.3, 5)
three = len(three)
four = cv2.CascadeClassifier('haarcascade_frontalcatface.xml')
four = four.detectMultiScale(gray, 1.3, 5)
four = len(four)
five = cv2.CascadeClassifier('haarcascade_frontalface_alt_tree.xml')
five = five.detectMultiScale(gray, 1.3, 5)
five = len(five)
six = cv2.CascadeClassifier('haarcascade_frontalface_alt.xml')
six = six.detectMultiScale(gray, 1.3, 5)
six = len(six)
seven = cv2.CascadeClassifier('haarcascade_frontalface_alt2.xml')
seven = seven.detectMultiScale(gray, 1.3, 5)
seven = len(seven)
eight = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eight = eight.detectMultiScale(gray, 1.3, 5)
eight = len(eight)
nine = cv2.CascadeClassifier('haarcascade_fullbody.xml')
nine = nine.detectMultiScale(gray, 1.3, 5)
nine = len(nine)
ten = cv2.CascadeClassifier('haarcascade_lefteye_2splits.xml')
ten = ten.detectMultiScale(gray, 1.3, 5)
ten = len(ten)
eleven = cv2.CascadeClassifier('haarcascade_licence_plate_rus_16stages.xml')
eleven = eleven.detectMultiScale(gray, 1.3, 5)
eleven = len(eleven)
twelve = cv2.CascadeClassifier('haarcascade_lowerbody.xml')
twelve = twelve.detectMultiScale(gray, 1.3, 5)
twelve = len(twelve)
thirteen = cv2.CascadeClassifier('haarcascade_profileface.xml')
thirteen = thirteen.detectMultiScale(gray, 1.3, 5)
thirteen = len(thirteen)
fourteen = cv2.CascadeClassifier('haarcascade_righteye_2splits.xml')
fourteen = fourteen.detectMultiScale(gray, 1.3, 5)
fourteen = len(fourteen)
fifteen = cv2.CascadeClassifier('haarcascade_russian_plate_number.xml')
fifteen = fifteen.detectMultiScale(gray, 1.3, 5)
fifteen = len(fifteen)
sixteen = cv2.CascadeClassifier('haarcascade_smile.xml')
sixteen = sixteen.detectMultiScale(gray, 1.3, 5)
sixteen = len(sixteen)
seventeen = cv2.CascadeClassifier('haarcascade_upperbody.xml')
seventeen = seventeen.detectMultiScale(gray, 1.3, 5)
seventeen = len(seventeen)
features=np.array([one,two,three,four,
five,six,seven,eight,
nine,ten,eleven,twelve,
thirteen,fourteen,fifteen,sixteen,
seventeen])
labels=['haarcascade_eye_tree_eyeglasses','haarcascade_eye','haarcascade_frontalcatface_extended','haarcascade_frontalcatface',
'haarcascade_frontalface_alt_tree','haarcascade_frontalface_alt','haarcascade_frontalface_alt2','haarcascade_frontalface_default',
'haarcascade_fullbody','haarcascade_lefteye_2splits','haarcascade_licence_plate_rus_16stages','haarcascade_lowerbody',
'haarcascade_profileface','haarcascade_righteye_2splits','haarcascade_russian_plate_number','haarcascade_smile',
'haarcascade_upperbody']
os.chdir(cur_dir)
return features, labels
def image_featurize(cur_dir,haar_dir,file):
# initialize label array
labels=list()
# only featurize files that are .jpeg, .jpg, or .png (convert all to ping
if file[-5:]=='.jpeg':
filename=convert(file)
elif file[-4:]=='.jpg':
filename=convert(file)
elif file[-4:]=='.png':
filename=file
else:
filename=file
#only featurize .png files after conversion
if filename[-4:]=='.png':
# READ IMAGE
########################################################
img = cv2.imread(filename,1)
# CALCULATE BASIC FEATURES (rows, columns, pixels)
########################################################
#rows, columns, pixel number
rows=img.shape[1]
columns=img.shape[2]
pixels=img.size
basic_features=np.array([rows,columns,pixels])
labels=labels+['rows', 'columns', 'pixels']
# HISTOGRAM FEATURES (avg, stdev, min, max)
########################################################
#blue
blue_hist=cv2.calcHist([img],[0],None,[256],[0,256])
blue_mean=np.mean(blue_hist)
blue_std=np.std(blue_hist)
blue_min=np.amin(blue_hist)
blue_max=np.amax(blue_hist)
#green
green_hist=cv2.calcHist([img],[1],None,[256],[0,256])
green_mean=np.mean(green_hist)
green_std=np.std(green_hist)
green_min=np.amin(green_hist)
green_max=np.amax(green_hist)
#red
red_hist=cv2.calcHist([img],[2],None,[256],[0,256])
red_mean=np.mean(red_hist)
red_std=np.std(red_hist)
red_min=np.amin(red_hist)
red_max=np.amax(red_hist)
hist_features=[blue_mean,blue_std,blue_min,blue_max,
green_mean,green_std,green_min,green_max,
red_mean,red_std,red_min,red_max]
hist_labels=['blue_mean','blue_std','blue_min','blue_max',
'green_mean','green_std','green_min','green_max',
'red_mean','red_std','red_min','red_max']
hist_features=np.array(hist_features)
features=np.append(basic_features,hist_features)
labels=labels+hist_labels
# CALCULATE HAAR FEATURES
########################################################
haar_features, haar_labels=haar_featurize(cur_dir,haar_dir,img)
features=np.append(features,haar_features)
labels=labels+haar_labels
# EDGE FEATURES
########################################################
# SIFT algorithm (scale invariant) - 128 features
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
sift = cv2.xfeatures2d.SIFT_create()
(kps, des) = sift.detectAndCompute(gray, None)
edges=des
edge_features=np.zeros(len(edges[0]))
for i in range(len(edges)):
edge_features=edge_features+edges[i]
edge_features=edge_features/(len(edges))
edge_features=np.array(edge_features)
edge_labels=list()
for i in range(len(edge_features)):
edge_labels.append('edge_feature_%s'%(str(i+1)))
features=np.append(features,edge_features)
labels=labels+edge_labels
else:
os.remove(file)
return features, labels
def featurize_audio(wavfile):
#initialize features
hop_length = 512
n_fft=2048
#load file
y, sr = librosa.load(wavfile)
#extract mfcc coefficients
mfcc = librosa.feature.mfcc(y=y, sr=sr, hop_length=hop_length, n_mfcc=13)
mfcc_delta = librosa.feature.delta(mfcc)
#extract mean, standard deviation, min, and max value in mfcc frame, do this across all mfccs
features=np.array([np.mean(mfcc[0]),np.std(mfcc[0]),np.amin(mfcc[0]),np.amax(mfcc[0]),
np.mean(mfcc[1]),np.std(mfcc[1]),np.amin(mfcc[1]),np.amax(mfcc[1]),
np.mean(mfcc[2]),np.std(mfcc[2]),np.amin(mfcc[2]),np.amax(mfcc[2]),
np.mean(mfcc[3]),np.std(mfcc[3]),np.amin(mfcc[3]),np.amax(mfcc[3]),
np.mean(mfcc[4]),np.std(mfcc[4]),np.amin(mfcc[4]),np.amax(mfcc[4]),
np.mean(mfcc[5]),np.std(mfcc[5]),np.amin(mfcc[5]),np.amax(mfcc[5]),
np.mean(mfcc[6]),np.std(mfcc[6]),np.amin(mfcc[6]),np.amax(mfcc[6]),
np.mean(mfcc[7]),np.std(mfcc[7]),np.amin(mfcc[7]),np.amax(mfcc[7]),
np.mean(mfcc[8]),np.std(mfcc[8]),np.amin(mfcc[8]),np.amax(mfcc[8]),
np.mean(mfcc[9]),np.std(mfcc[9]),np.amin(mfcc[9]),np.amax(mfcc[9]),
np.mean(mfcc[10]),np.std(mfcc[10]),np.amin(mfcc[10]),np.amax(mfcc[10]),
np.mean(mfcc[11]),np.std(mfcc[11]),np.amin(mfcc[11]),np.amax(mfcc[11]),
np.mean(mfcc[12]),np.std(mfcc[12]),np.amin(mfcc[12]),np.amax(mfcc[12]),
np.mean(mfcc_delta[0]),np.std(mfcc_delta[0]),np.amin(mfcc_delta[0]),np.amax(mfcc_delta[0]),
np.mean(mfcc_delta[1]),np.std(mfcc_delta[1]),np.amin(mfcc_delta[1]),np.amax(mfcc_delta[1]),
np.mean(mfcc_delta[2]),np.std(mfcc_delta[2]),np.amin(mfcc_delta[2]),np.amax(mfcc_delta[2]),
np.mean(mfcc_delta[3]),np.std(mfcc_delta[3]),np.amin(mfcc_delta[3]),np.amax(mfcc_delta[3]),
np.mean(mfcc_delta[4]),np.std(mfcc_delta[4]),np.amin(mfcc_delta[4]),np.amax(mfcc_delta[4]),
np.mean(mfcc_delta[5]),np.std(mfcc_delta[5]),np.amin(mfcc_delta[5]),np.amax(mfcc_delta[5]),
np.mean(mfcc_delta[6]),np.std(mfcc_delta[6]),np.amin(mfcc_delta[6]),np.amax(mfcc_delta[6]),
np.mean(mfcc_delta[7]),np.std(mfcc_delta[7]),np.amin(mfcc_delta[7]),np.amax(mfcc_delta[7]),
np.mean(mfcc_delta[8]),np.std(mfcc_delta[8]),np.amin(mfcc_delta[8]),np.amax(mfcc_delta[8]),
np.mean(mfcc_delta[9]),np.std(mfcc_delta[9]),np.amin(mfcc_delta[9]),np.amax(mfcc_delta[9]),
np.mean(mfcc_delta[10]),np.std(mfcc_delta[10]),np.amin(mfcc_delta[10]),np.amax(mfcc_delta[10]),
np.mean(mfcc_delta[11]),np.std(mfcc_delta[11]),np.amin(mfcc_delta[11]),np.amax(mfcc_delta[11]),
np.mean(mfcc_delta[12]),np.std(mfcc_delta[12]),np.amin(mfcc_delta[12]),np.amax(mfcc_delta[12])])
return features
def exportfile(newAudio,time1,time2,filename,i):
#Exports to a wav file in the current path.
newAudio2 = newAudio[time1:time2]
g=os.listdir()
if filename[0:-4]+'_'+str(i)+'.wav' in g:
filename2=str(i)+'_segment'+'.wav'
print('making %s'%(filename2))
newAudio2.export(filename2,format="wav")
else:
filename2=str(i)+'.wav'
print('making %s'%(filename2))
newAudio2.export(filename2, format="wav")
return filename2
def audio_time_features(filename):
#recommend >0.50 seconds for timesplit
timesplit=0.50
hop_length = 512
n_fft=2048
y, sr = librosa.load(filename)
duration=float(librosa.core.get_duration(y))
#Now splice an audio signal into individual elements of 100 ms and extract
#all these features per 100 ms
segnum=round(duration/timesplit)
deltat=duration/segnum
timesegment=list()
time=0
for i in range(segnum):
#milliseconds
timesegment.append(time)
time=time+deltat*1000
newAudio = AudioSegment.from_wav(filename)
filelist=list()
for i in range(len(timesegment)-1):
filename=exportfile(newAudio,timesegment[i],timesegment[i+1],filename,i)
filelist.append(filename)
featureslist=np.array([0,0,0,0,
0,0,0,0,
0,0,0,0,
0,0,0,0,
0,0,0,0,
0,0,0,0,
0,0,0,0,
0,0,0,0,
0,0,0,0,
0,0,0,0,
0,0,0,0,
0,0,0,0,
0,0,0,0,
0,0,0,0,
0,0,0,0,
0,0,0,0,
0,0,0,0,
0,0,0,0,
0,0,0,0,
0,0,0,0,
0,0,0,0,
0,0,0,0,
0,0,0,0,
0,0,0,0,
0,0,0,0,
0,0,0,0])
#save 100 ms segments in current folder (delete them after)
for j in range(len(filelist)):
try:
features=featurize_audio(filelist[i])
featureslist=featureslist+features
os.remove(filelist[j])
except:
print('error splicing')
os.remove(filelist[j])
#now scale the featureslist array by the length to get mean in each category
features=featureslist/segnum
return features
def textfeatures(transcript):
#alphabetical features
a=transcript.count('a')
b=transcript.count('b')
c=transcript.count('c')
d=transcript.count('d')
e=transcript.count('e')
f=transcript.count('f')
g_=transcript.count('g')
h=transcript.count('h')
i=transcript.count('i')
j=transcript.count('j')
k=transcript.count('k')
l=transcript.count('l')
m=transcript.count('m')
n=transcript.count('n')
o=transcript.count('o')
p=transcript.count('p')
q=transcript.count('q')
r=transcript.count('r')
s=transcript.count('s')
t=transcript.count('t')
u=transcript.count('u')
v=transcript.count('v')
w=transcript.count('w')
x=transcript.count('x')
y=transcript.count('y')
z=transcript.count('z')
space=transcript.count(' ')
#numerical features and capital letters
num1=transcript.count('0')+transcript.count('1')+transcript.count('2')+transcript.count('3')+transcript.count('4')+transcript.count('5')+transcript.count('6')+transcript.count('7')+transcript.count('8')+transcript.count('9')
num2=transcript.count('zero')+transcript.count('one')+transcript.count('two')+transcript.count('three')+transcript.count('four')+transcript.count('five')+transcript.count('six')+transcript.count('seven')+transcript.count('eight')+transcript.count('nine')+transcript.count('ten')
number=num1+num2
capletter=sum(1 for c in transcript if c.isupper())
#part of speech
text=word_tokenize(transcript)
g=nltk.pos_tag(transcript)
cc=0
cd=0
dt=0
ex=0
in_=0
jj=0
jjr=0
jjs=0
ls=0
md=0
nn=0
nnp=0
nns=0
pdt=0
pos=0
prp=0
prp2=0
rb=0
rbr=0
rbs=0
rp=0
to=0
uh=0
vb=0
vbd=0
vbg=0
vbn=0
vbp=0
vbp=0
vbz=0
wdt=0
wp=0
wrb=0
for i in range(len(g)):
if g[i][1] == 'CC':
cc=cc+1
elif g[i][1] == 'CD':
cd=cd+1
elif g[i][1] == 'DT':
dt=dt+1
elif g[i][1] == 'EX':
ex=ex+1
elif g[i][1] == 'IN':
in_=in_+1
elif g[i][1] == 'JJ':
jj=jj+1
elif g[i][1] == 'JJR':
jjr=jjr+1
elif g[i][1] == 'JJS':
jjs=jjs+1
elif g[i][1] == 'LS':
ls=ls+1
elif g[i][1] == 'MD':
md=md+1
elif g[i][1] == 'NN':
nn=nn+1
elif g[i][1] == 'NNP':
nnp=nnp+1
elif g[i][1] == 'NNS':
nns=nns+1
elif g[i][1] == 'PDT':
pdt=pdt+1
elif g[i][1] == 'POS':
pos=pos+1
elif g[i][1] == 'PRP':
prp=prp+1
elif g[i][1] == 'PRP$':
prp2=prp2+1
elif g[i][1] == 'RB':
rb=rb+1
elif g[i][1] == 'RBR':
rbr=rbr+1
elif g[i][1] == 'RBS':
rbs=rbs+1
elif g[i][1] == 'RP':
rp=rp+1
elif g[i][1] == 'TO':
to=to+1
elif g[i][1] == 'UH':
uh=uh+1
elif g[i][1] == 'VB':
vb=vb+1
elif g[i][1] == 'VBD':
vbd=vbd+1
elif g[i][1] == 'VBG':
vbg=vbg+1
elif g[i][1] == 'VBN':
vbn=vbn+1
elif g[i][1] == 'VBP':
vbp=vbp+1
elif g[i][1] == 'VBZ':
vbz=vbz+1
elif g[i][1] == 'WDT':
wdt=wdt+1
elif g[i][1] == 'WP':
wp=wp+1
elif g[i][1] == 'WRB':
wrb=wrb+1
#sentiment
tblob=TextBlob(transcript)
polarity=float(tblob.sentiment[0])
subjectivity=float(tblob.sentiment[1])
#word repeats
words=transcript.split()
newlist=transcript.split()
repeat=0
for i in range(len(words)):
newlist.remove(words[i])
if words[i] in newlist:
repeat=repeat+1
features=np.array([a,b,c,d,
e,f,g_,h,
i,j,k,l,
m,n,o,p,
q,r,s,t,
u,v,w,x,
y,z,space,number,
capletter,cc,cd,dt,
ex,in_,jj,jjr,
jjs,ls,md,nn,
nnp,nns,pdt,pos,
prp,prp2,rbr,rbs,
rp,to,uh,vb,
vbd,vbg,vbn,vbp,
vbz,wdt,wp, wrb,polarity,subjectivity,repeat])
labels=['a','b','c','d',
'e','f','g_','h',
'i','j','k','l',
'm','n','o','p',
'q','r','s','t',
'u','v','w','x',
'y','z','space','number',
'capletter','cc','cd','dt',
'ex','in_','jj','jjr',
'jjs','ls','md','nn',
'nnp','nns','pdt','pos',
'prp','prp2','rbr','rbs',
'rp','to','uh','vb',
'vbd','vbg','vbn','vbp',
'vbz','wdt','wp', 'polarity','subjectivity','repeat']
return features, labels
def transcribe(wavfile):
try:
r = sr.Recognizer()
# use wavfile as the audio source (must be .wav file)
with sr.AudioFile(wavfile) as source:
#extract audio data from the file
audio = r.record(source)
transcript=r.recognize_sphinx(audio)
print(transcript)
except:
transcript=''
return transcript
# featurize only a random 20 second slice of the video (or 20 sec splices of videos)
def video_featurize(videofile, cur_dir,haar_dir):
now=os.getcwd()
# PREPROCESSING
#############################################
# metadata (should be .mp4)
clip = VideoFileClip(videofile)
duration = clip.duration
videodata=skvideo.io.vread(videofile)
frames, rows, cols, channels = videodata.shape
metadata=skvideo.io.ffprobe(videofile)
frame=videodata[0]
r,c,ch=frame.shape
try:
os.mkdir('output')
os.chdir('output')
outputdir=os.getcwd()
except:
shutil.rmtree('output')
os.mkdir('output')
os.chdir('output')
outputdir=os.getcwd()
#write all the images every 10 frames in the video
for i in range(0,len(videodata),25):
#row, col, channels
skvideo.io.vwrite("output"+str(i)+".png", videodata[i])
listdir=os.listdir()
(r,c,ch)=cv2.imread(listdir[0]).shape
img=np.zeros((r,c,ch))
iterations=0
#take first image as a background image
background=cv2.imread(listdir[1])
image_features=np.zeros(160)
image_features2=np.zeros(63)
image_transcript=''
for i in range(len(listdir)):
if listdir[i][-4:]=='.png':
os.chdir(outputdir)
frame_new=cv2.imread(listdir[i])
print(os.getcwd())
print(listdir[i])
print(frame)
img=img+frame_new
iterations=iterations+1
image_features_temp, image_labels = image_featurize(cur_dir,haar_dir,listdir[i])
os.chdir(outputdir)
ttranscript, tfeatures, tlabels = tf.tesseract_featurize(listdir[i])
image_transcript=image_transcript+ttranscript
image_features2=image_features2+tfeatures
image_features=image_features+image_features_temp
#os.remove(listdir[i])
# averaged image features
image_features=(1/iterations)*image_features
image_features2=(1/iterations)*image_features2
# averaged image over background
img=(1/iterations)*img-background
skvideo.io.vwrite("output.png", img)
avg_image_features, image_labels =image_featurize(cur_dir,haar_dir, "output.png")
# remove temp directory
os.chdir(now)
# make wavfile from video file
wavfile = videofile[0:-4]+'.wav'
os.system('ffmpeg -i %s %s'%(videofile,wavfile))
print('made wavfile in %s'%(str(os.getcwd())))
# FEATURIZATION
#############################################
# audio features and time features
labels=list()
audio_features=np.append(featurize_audio(wavfile),audio_time_features(wavfile))
labels=['mfcc_1_mean_20ms','mfcc_1_std_20ms', 'mfcc_1_min_20ms', 'mfcc_1_max_20ms',
'mfcc_2_mean_20ms','mfcc_2_std_20ms', 'mfcc_2_min_20ms', 'mfcc_2_max_20ms',
'mfcc_3_mean_20ms','mfcc_3_std_20ms', 'mfcc_3_min_20ms', 'mfcc_3_max_20ms',
'mfcc_4_mean_20ms','mfcc_4_std_20ms', 'mfcc_4_min_20ms', 'mfcc_4_max_20ms',
'mfcc_5_mean_20ms','mfcc_5_std_20ms', 'mfcc_5_min_20ms', 'mfcc_5_max_20ms',
'mfcc_6_mean_20ms','mfcc_6_std_20ms', 'mfcc_6_min_20ms', 'mfcc_6_max_20ms',
'mfcc_7_mean_20ms','mfcc_7_std_20ms', 'mfcc_7_min_20ms', 'mfcc_7_max_20ms',
'mfcc_8_mean_20ms','mfcc_8_std_20ms', 'mfcc_8_min_20ms', 'mfcc_8_max_20ms',
'mfcc_9_mean_20ms','mfcc_9_std_20ms', 'mfcc_9_min_20ms', 'mfcc_9_max_20ms',
'mfcc_10_mean_20ms','mfcc_10_std_20ms', 'mfcc_10_min_20ms', 'mfcc_10_max_20ms',
'mfcc_11_mean_20ms','mfcc_11_std_20ms', 'mfcc_11_min_20ms', 'mfcc_11_max_20ms',
'mfcc_12_mean_20ms','mfcc_12_std_20ms', 'mfcc_12_min_20ms', 'mfcc_12_max_20ms',
'mfcc_13_mean_20ms','mfcc_13_std_20ms', 'mfcc_13_min_20ms', 'mfcc_13_max_20ms',
'mfcc_1_delta_mean_20ms','mfcc_1_delta_std_20ms', 'mfcc_1_delta_min_20ms', 'mfcc_1_delta_max_20ms',
'mfcc_2_delta_mean_20ms','mfcc_2_delta_std_20ms', 'mfcc_2_delta_min_20ms', 'mfcc_2_delta_max_20ms',
'mfcc_3_delta_mean_20ms','mfcc_3_delta_std_20ms', 'mfcc_3_delta_min_20ms', 'mfcc_3_delta_max_20ms',
'mfcc_4_delta_mean_20ms','mfcc_4_delta_std_20ms', 'mfcc_4_delta_min_20ms', 'mfcc_4_delta_max_20ms',
'mfcc_5_delta_mean_20ms','mfcc_5_delta_std_20ms', 'mfcc_5_delta_min_20ms', 'mfcc_5_delta_max_20ms',
'mfcc_6_delta_mean_20ms','mfcc_6_delta_std_20ms', 'mfcc_6_delta_min_20ms', 'mfcc_6_delta_max_20ms',
'mfcc_7_delta_mean_20ms','mfcc_7_delta_std_20ms', 'mfcc_7_delta_min_20ms', 'mfcc_7_delta_max_20ms',
'mfcc_8_delta_mean_20ms','mfcc_8_delta_std_20ms', 'mfcc_8_delta_min_20ms', 'mfcc_8_delta_max_20ms',
'mfcc_9_delta_mean_20ms','mfcc_9_delta_std_20ms', 'mfcc_9_delta_min_20ms', 'mfcc_9_delta_max_20ms',
'mfcc_10_delta_mean_20ms','mfcc_10_delta_std_20ms', 'mfcc_10_delta_min_20ms', 'mfcc_10_delta_max_20ms',
'mfcc_11_delta_mean_20ms','mfcc_11_delta_std_20ms', 'mfcc_11_delta_min_20ms', 'mfcc_11_delta_max_20ms',
'mfcc_12_delta_mean_20ms','mfcc_12_delta_std_20ms', 'mfcc_12_delta_min_20ms', 'mfcc_12_delta_max_20ms',
'mfcc_13_delta_mean_20ms','mfcc_13_delta_std_20ms', 'mfcc_13_delta_min_20ms', 'mfcc_13_delta_max_20ms',
'mfcc_1_mean_500ms','mfcc_1_std_500ms', 'mfcc_1_min_500ms', 'mfcc_1_max_500ms',
'mfcc_2_mean_500ms','mfcc_2_std_500ms', 'mfcc_2_min_500ms', 'mfcc_2_max_500ms',
'mfcc_3_mean_500ms','mfcc_3_std_500ms', 'mfcc_3_min_500ms', 'mfcc_3_max_500ms',
'mfcc_4_mean_500ms','mfcc_4_std_500ms', 'mfcc_4_min_500ms', 'mfcc_4_max_500ms',
'mfcc_5_mean_500ms','mfcc_5_std_500ms', 'mfcc_5_min_500ms', 'mfcc_5_max_500ms',
'mfcc_6_mean_500ms','mfcc_6_std_500ms', 'mfcc_6_min_500ms', 'mfcc_6_max_500ms',
'mfcc_7_mean_500ms','mfcc_7_std_500ms', 'mfcc_7_min_500ms', 'mfcc_7_max_500ms',
'mfcc_8_mean_500ms','mfcc_8_std_500ms', 'mfcc_8_min_500ms', 'mfcc_8_max_500ms',
'mfcc_9_mean_500ms','mfcc_9_std_500ms', 'mfcc_9_min_500ms', 'mfcc_9_max_500ms',
'mfcc_10_mean_500ms','mfcc_10_std_500ms', 'mfcc_10_min_500ms', 'mfcc_10_max_500ms',
'mfcc_11_mean_500ms','mfcc_11_std_500ms', 'mfcc_11_min_500ms', 'mfcc_11_max_500ms',
'mfcc_12_mean_500ms','mfcc_12_std_500ms', 'mfcc_12_min_500ms', 'mfcc_12_max_500ms',
'mfcc_13_mean_500ms','mfcc_13_std_500ms', 'mfcc_13_min_500ms', 'mfcc_13_max_500ms',
'mfcc_1_delta_mean_500ms','mfcc_1_delta_std_500ms', 'mfcc_1_delta_min_500ms', 'mfcc_1_delta_max_500ms',
'mfcc_2_delta_mean_500ms','mfcc_2_delta_std_500ms', 'mfcc_2_delta_min_500ms', 'mfcc_2_delta_max_500ms',
'mfcc_3_delta_mean_500ms','mfcc_3_delta_std_500ms', 'mfcc_3_delta_min_500ms', 'mfcc_3_delta_max_500ms',
'mfcc_4_delta_mean_500ms','mfcc_4_delta_std_500ms', 'mfcc_4_delta_min_500ms', 'mfcc_4_delta_max_500ms',
'mfcc_5_delta_mean_500ms','mfcc_5_delta_std_500ms', 'mfcc_5_delta_min_500ms', 'mfcc_5_delta_max_500ms',
'mfcc_6_delta_mean_500ms','mfcc_6_delta_std_500ms', 'mfcc_6_delta_min_500ms', 'mfcc_6_delta_max_500ms',
'mfcc_7_delta_mean_500ms','mfcc_7_delta_std_500ms', 'mfcc_7_delta_min_500ms', 'mfcc_7_delta_max_500ms',
'mfcc_8_delta_mean_500ms','mfcc_8_delta_std_500ms', 'mfcc_8_delta_min_500ms', 'mfcc_8_delta_max_500ms',
'mfcc_9_delta_mean_500ms','mfcc_9_delta_std_500ms', 'mfcc_9_delta_min_500ms', 'mfcc_9_delta_max_500ms',
'mfcc_10_delta_mean_500ms','mfcc_10_delta_std_500ms', 'mfcc_10_delta_min_500ms', 'mfcc_10_delta_max_500ms',
'mfcc_11_delta_mean_500ms','mfcc_11_delta_std_500ms', 'mfcc_11_delta_min_500ms', 'mfcc_11_delta_max_500ms',
'mfcc_12_delta_mean_500ms','mfcc_12_delta_std_500ms', 'mfcc_12_delta_min_500ms', 'mfcc_12_delta_max_500ms',
'mfcc_13_delta_mean_500ms','mfcc_13_delta_std_500ms', 'mfcc_13_delta_min_500ms', 'mfcc_13_delta_max_500ms']
# text features
transcript = transcribe(wavfile)
text_features, text_labels =textfeatures(transcript)
labels=labels+text_labels
# video features
video_features=np.append(image_features, avg_image_features)
avg_image_labels=list()
for i in range(len(image_labels)):
avg_image_labels.append('avg_'+image_labels[i])
avg_image_labels2=list()
for i in range(len(tlabels)):
avg_image_labels2.append('avg_imgtranscript_'+tlabels[i])
video_labels=image_labels + avg_image_labels + avg_image_labels2
labels=labels+video_labels
# other features
other_features = [frames,duration]
other_labels = ['frames', 'duration']
labels=labels+other_labels
# append all the features together
features = np.append(audio_features, text_features)
features = np.append(features, image_features)
features = np.append(features, image_features2)
features = np.append(features, video_features)
features = np.append(features, other_features)
# remove all temp files
try:
os.remove(wavfile)
except:
pass
try:
shutil.rmtree('output')
except:
pass
os.chdir(cur_dir)
return features, labels, transcript, image_transcript
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/video_features/featurize.py | features/video_features/featurize.py | '''
AAA lllllll lllllll iiii
A:::A l:::::l l:::::l i::::i
A:::::A l:::::l l:::::l iiii
A:::::::A l:::::l l:::::l
A:::::::::A l::::l l::::l iiiiiii eeeeeeeeeeee
A:::::A:::::A l::::l l::::l i:::::i ee::::::::::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::eeeee:::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::e e:::::e
A:::::A A:::::A l::::l l::::l i::::i e:::::::eeeee::::::e
A:::::AAAAAAAAA:::::A l::::l l::::l i::::i e:::::::::::::::::e
A:::::::::::::::::::::A l::::l l::::l i::::i e::::::eeeeeeeeeee
A:::::AAAAAAAAAAAAA:::::A l::::l l::::l i::::i e:::::::e
A:::::A A:::::A l::::::ll::::::li::::::ie::::::::e
A:::::A A:::::A l::::::ll::::::li::::::i e::::::::eeeeeeee
A:::::A A:::::A l::::::ll::::::li::::::i ee:::::::::::::e
AAAAAAA AAAAAAAlllllllllllllllliiiiiiii eeeeeeeeeeeeee
______ _ ___ ______ _____
| ___| | | / _ \ | ___ \_ _| _
| |_ ___ __ _| |_ _ _ _ __ ___ ___ / /_\ \| |_/ / | | (_)
| _/ _ \/ _` | __| | | | '__/ _ \/ __| | _ || __/ | |
| || __/ (_| | |_| |_| | | | __/\__ \ | | | || | _| |_ _
\_| \___|\__,_|\__|\__,_|_| \___||___/ \_| |_/\_| \___/ (_)
_ _ _ _
| | | (_) | |
| | | |_ __| | ___ ___
| | | | |/ _` |/ _ \/ _ \
\ \_/ / | (_| | __/ (_) |
\___/|_|\__,_|\___|\___/
Featurize folders of videos with the default_video_features.
Usage: python3 featurize.py [folder] [featuretype]
All featuretype options include:
["video_features","y8m_features"]
Read more @ https://github.com/jim-schwoebel/allie/tree/master/features/video_features
'''
import os, json, wget, uuid
import numpy as np
from gensim.models import KeyedVectors
import os, wget, zipfile, sys
import shutil
from tqdm import tqdm
##################################################
## Helper functions. ##
##################################################
def prev_dir(directory):
g=directory.split('/')
dir_=''
for i in range(len(g)):
if i != len(g)-1:
if i==0:
dir_=dir_+g[i]
else:
dir_=dir_+'/'+g[i]
# print(dir_)
return dir_
def video_featurize(feature_set, videofile, cur_dir, haar_dir, help_dir, fast_model):
# long conditional on all the types of features that can happen and featurizes accordingly.
if feature_set == 'video_features':
features, labels, audio_transcript, image_transcript = vf.video_featurize(videofile, cur_dir, haar_dir)
elif feature_set == 'y8m_features':
features, labels, audio_transcript, image_transcript = yf.y8m_featurize(videofile, cur_dir, help_dir, fast_model)
# make sure all the features do not have any infinity or NaN
features=np.nan_to_num(np.array(features))
features=features.tolist()
return features, labels, audio_transcript, image_transcript
def video_transcribe(default_video_transcriber, videofile):
# this is a placeholder function now
return ''
def audio_transcribe(default_audio_transcriber, audiofile):
# this is a placeholder function now until we have more audio transcription engines
return ''
##################################################
## Main script ##
##################################################
# directory=sys.argv[1]
basedir=os.getcwd()
help_dir=basedir+'/helpers'
prevdir=prev_dir(basedir)
sys.path.append(prevdir)
from standard_array import make_features
# audioset_dir=prevdir+'/audio_features'
# os.chdir(audioset_dir)
# import audioset_features as af
# os.chdir(basedir)
haar_dir=prevdir+'/image_features/helpers/haarcascades/'
# get settings
settingsdir=prev_dir(basedir)
settingsdir=prev_dir(settingsdir)
settings=json.load(open(settingsdir+'/settings.json'))
os.chdir(basedir)
# load settings
audio_transcribe_setting=settings['transcribe_audio']
video_transcribe_setting=settings['transcribe_video']
default_audio_transcriber=settings['default_audio_transcriber']
default_video_transcriber=settings['default_video_transcriber']
try:
feature_sets=[sys.argv[2]]
except:
feature_sets=settings['default_video_features']
# import proper feature sets from database
if 'video_features' in feature_sets:
import video_features as vf
if 'y8m_features' in feature_sets:
import y8m_features as yf
# change to video folder
foldername=sys.argv[1]
os.chdir(foldername)
cur_dir=os.getcwd()
listdir=os.listdir()
os.chdir(cur_dir)
# get class label from folder name
labelname=foldername.split('/')
if labelname[-1]=='':
labelname=labelname[-2]
else:
labelname=labelname[-1]
##################################################
## Download inception ##
##################################################
for j in range(len(feature_sets)):
feature_set=feature_sets[j]
if feature_set == 'video_features':
fast_model=[]
elif feature_set == 'y8m_features':
if 'inception-2015-12-05.tgz' not in os.listdir(basedir+'/helpers'):
os.chdir(basedir+'/helpers')
filename = wget.download('http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz')
filename=wget.download('http://data.yt8m.org/yt8m_pca.tgz')
os.system('tar zxvf inception-2015-12-05.tgz')
os.system('tar zxvf yt8m_pca.tgz')
os.chdir(cur_dir)
# load in FAST model
os.chdir(prevdir+'/text_features')
if 'wiki-news-300d-1M' not in os.listdir(os.getcwd()+'/helpers'):
print('downloading Facebook FastText model...')
wget.download("https://dl.fbaipublicfiles.com/fasttext/vectors-english/wiki-news-300d-1M.vec.zip", "./helpers/wiki-news-300d-1M.vec.zip")
zip_ref = zipfile.ZipFile(os.getcwd()+'/helpers/wiki-news-300d-1M.vec.zip', 'r')
zip_ref.extractall(os.getcwd()+'/helpers/wiki-news-300d-1M')
zip_ref.close()
print('-----------------')
print('loading Facebook FastText model...')
# Loading fasttext model
fast_model = KeyedVectors.load_word2vec_format(os.getcwd()+'/helpers/wiki-news-300d-1M/wiki-news-300d-1M.vec')
print('loaded Facebook FastText model...')
os.chdir(cur_dir)
###################################################
# featurize all files accoridng to librosa featurize
for i in tqdm(range(len(listdir)), desc=labelname):
# make audio file into spectrogram and analyze those images if audio file
if listdir[i][-4:] in ['.mp4']:
try:
sampletype='video'
videofile=listdir[i]
# I think it's okay to assume audio less than a minute here...
if listdir[i][0:-4]+'.json' not in listdir:
# rename to avoid conflicts
# newfile=str(uuid.uuid4())+'.mp4'
# os.rename(listdir[i], newfile)
# videofile=newfile
# make new .JSON if it is not there with base array schema.
basearray=make_features(sampletype)
# get features and add label
for j in range(len(feature_sets)):
feature_set=feature_sets[j]
video_features=basearray['features']['video']
features, labels, audio_transcript, video_transcript = video_featurize(feature_set, videofile, cur_dir, haar_dir, help_dir, fast_model)
try:
data={'features':features.tolist(),
'labels': labels}
except:
data={'features':features,
'labels': labels}
video_features[feature_set]=data
basearray['features']['video']=video_features
basearray['labels']=[labelname]
# only add transcripts in schema if they are true
transcript_list=basearray['transcripts']
# video transcription setting
if video_transcribe_setting == True:
for j in range(len(default_video_transcriber)):
video_transcriber=default_video_transcriber[j]
if video_transcriber=='tesseract (averaged over frames)':
transcript_list['video'][video_transcriber] = video_transcript
else:
print('cannot transcribe video file, as the %s transcriber is not supported'%(video_transcriber.upper()))
# audio transcriber setting
if audio_transcribe_setting == True:
for j in range(len(default_audio_transcriber)):
audio_transcriber=default_audio_transcriber[j]
if audio_transcriber == 'pocketsphinx':
transcript_list['audio'][audio_transcriber] = audio_transcript
else:
print('cannot transcribe audio file, as the %s transcriber is not supported'%(audio_transcriber.upper()))
# update transcript list
basearray['transcripts']=transcript_list
# write to .JSON
jsonfile=open(videofile[0:-4]+'.json','w')
json.dump(basearray, jsonfile)
jsonfile.close()
elif listdir[i][0:-4]+'.json' in listdir:
# load the .JSON file if it is there
basearray=json.load(open(listdir[i][0:-4]+'.json'))
video_features=basearray['features']['video']
# featurizes/labels only if necessary (skips if feature embedding there)
for j in range(len(feature_sets)):
feature_set=feature_sets[j]
if feature_set not in list(video_features):
features, labels, audio_transcript, video_transcript = video_featurize(feature_set, videofile, cur_dir, haar_dir, help_dir, fast_model)
print(features)
try:
data={'features':features.tolist(),
'labels': labels}
except:
data={'features':features,
'labels': labels}
video_features[feature_set]=data
basearray['features']['video']=video_features
# make transcript additions, as necessary
transcript_list=basearray['transcripts']['video']
if video_transcribe_setting == True:
for j in range(len(default_video_transcriber)):
video_transcriber=default_video_transcriber[j]
if video_transcriber not in list(transcript_list):
if video_transcriber == 'tesseract (averaged over frames)':
transcript_list['video'][video_transcriber] = video_transcript
else:
print('cannot transcribe video file, as the %s transcriber is not supported'%(video_transcriber.upper()))
if audio_transcribe_setting == True:
for j in range(len(default_audio_transcriber)):
audio_transcriber=default_audio_transcriber[j]
if audio_transcriber not in list(transcript_list):
if audio_transcriber == 'pocketsphinx':
transcript_list['audio'][audio_transcriber] = audio_transcript
else:
print('cannot transcribe audio file, as the %s transcriber is not supported'%(audio_transcriber.upper()))
basearray['transcripts']=transcript_list
# add appropriate labels only if they are new labels
label_list=basearray['labels']
if labelname not in label_list:
label_list.append(labelname)
basearray['labels']=label_list
# overwrite .JSON
jsonfile=open(videofile[0:-4]+'.json','w')
json.dump(basearray, jsonfile)
jsonfile.close()
except:
print('error - %s'%(videofile.upper()))
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/video_features/helpers/tesseract_features.py | features/video_features/helpers/tesseract_features.py | import os, sys, time
from PIL import Image
import pytesseract
def prev_dir(directory):
g=directory.split('/')
# print(g)
lastdir=g[len(g)-1]
i1=directory.find(lastdir)
directory=directory[0:i1]
return directory
directory=os.getcwd()
prevdir=prev_dir(directory)
sys.path.append(prevdir+'text_features')
print(prevdir+'text_features')
import nltk_features as nf
os.chdir(directory)
def transcribe_image(imgfile):
transcript=pytesseract.image_to_string(Image.open(imgfile))
return transcript
def tesseract_featurize(imgfile):
# can stitch across an entire length of video frames too
transcript=transcribe_image(imgfile)
features, labels = nf.nltk_featurize(transcript)
return transcript, features, labels
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/video_features/helpers/cut_video.py | features/video_features/helpers/cut_video.py | import os, librosa, math, shutil
def cut_video(video, splitlength):
audio=video[0:-4]+'.wav'
foldername=video[0:-4]
os.system('ffmpeg -i %s %s'%(video, audio))
y, sr=librosa.core.load(audio)
duration=librosa.core.get_duration(y,sr)
splits=math.floor(duration/10)
count=0
curdir=os.getcwd()
try:
os.mkdir(foldername)
os.chdir(foldername)
except:
shutil.rmtree(foldername)
os.mkdir(foldername)
os.chdir(foldername)
shutil.copy(curdir+'/'+video, curdir+'/'+foldername+'/'+video)
for i in range(splits):
os.system('ffmpeg -i %s -ss %s -t %s %s.mp4'%(video, str(count), str(count+10), video[0:-4]+'_'+str(i)+'.mp4'))
count=count+10
cut_video('test.mp4',10) | python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/text_features/glove_features.py | features/text_features/glove_features.py | '''
AAA lllllll lllllll iiii
A:::A l:::::l l:::::l i::::i
A:::::A l:::::l l:::::l iiii
A:::::::A l:::::l l:::::l
A:::::::::A l::::l l::::l iiiiiii eeeeeeeeeeee
A:::::A:::::A l::::l l::::l i:::::i ee::::::::::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::eeeee:::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::e e:::::e
A:::::A A:::::A l::::l l::::l i::::i e:::::::eeeee::::::e
A:::::AAAAAAAAA:::::A l::::l l::::l i::::i e:::::::::::::::::e
A:::::::::::::::::::::A l::::l l::::l i::::i e::::::eeeeeeeeeee
A:::::AAAAAAAAAAAAA:::::A l::::l l::::l i::::i e:::::::e
A:::::A A:::::A l::::::ll::::::li::::::ie::::::::e
A:::::A A:::::A l::::::ll::::::li::::::i e::::::::eeeeeeee
A:::::A A:::::A l::::::ll::::::li::::::i ee:::::::::::::e
AAAAAAA AAAAAAAlllllllllllllllliiiiiiii eeeeeeeeeeeeee
______ _ ___ ______ _____
| ___| | | / _ \ | ___ \_ _| _
| |_ ___ __ _| |_ _ _ _ __ ___ ___ / /_\ \| |_/ / | | (_)
| _/ _ \/ _` | __| | | | '__/ _ \/ __| | _ || __/ | |
| || __/ (_| | |_| |_| | | | __/\__ \ | | | || | _| |_ _
\_| \___|\__,_|\__|\__,_|_| \___||___/ \_| |_/\_| \___/ (_)
_____ _
|_ _| | |
| | _____ _| |_
| |/ _ \ \/ / __|
| | __/> <| |_
\_/\___/_/\_\\__|
Featurize folders of text files if default_text_features = ['glove_features']
This uses a GloVE embedding with 100 dimensions:
https://machinelearningmastery.com/develop-word-embeddings-python-gensim/
'''
import numpy as np
def glove_featurize(transcript,model):
# set to 100 size
sentences2=transcript.split()
size=100
w2v_embed=list()
for i in range(len(sentences2)):
try:
w2v_embed.append(model[sentences2[i]])
except:
#pass if there is an error to not distort averages... :)
pass
out_embed=np.zeros(size)
for j in range(len(w2v_embed)):
out_embed=out_embed+np.array(w2v_embed[j])
out_embed=(1/len(w2v_embed))*out_embed
features=out_embed
labels=list()
for i in range(len(features)):
labels.append('glove_feature_%s'%(str(i+1)))
return features, labels
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/text_features/blabla_features.py | features/text_features/blabla_features.py | '''
AAA lllllll lllllll iiii
A:::A l:::::l l:::::l i::::i
A:::::A l:::::l l:::::l iiii
A:::::::A l:::::l l:::::l
A:::::::::A l::::l l::::l iiiiiii eeeeeeeeeeee
A:::::A:::::A l::::l l::::l i:::::i ee::::::::::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::eeeee:::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::e e:::::e
A:::::A A:::::A l::::l l::::l i::::i e:::::::eeeee::::::e
A:::::AAAAAAAAA:::::A l::::l l::::l i::::i e:::::::::::::::::e
A:::::::::::::::::::::A l::::l l::::l i::::i e::::::eeeeeeeeeee
A:::::AAAAAAAAAAAAA:::::A l::::l l::::l i::::i e:::::::e
A:::::A A:::::A l::::::ll::::::li::::::ie::::::::e
A:::::A A:::::A l::::::ll::::::li::::::i e::::::::eeeeeeee
A:::::A A:::::A l::::::ll::::::li::::::i ee:::::::::::::e
AAAAAAA AAAAAAAlllllllllllllllliiiiiiii eeeeeeeeeeeeee
______ _ ___ ______ _____
| ___| | | / _ \ | ___ \_ _| _
| |_ ___ __ _| |_ _ _ _ __ ___ ___ / /_\ \| |_/ / | | (_)
| _/ _ \/ _` | __| | | | '__/ _ \/ __| | _ || __/ | |
| || __/ (_| | |_| |_| | | | __/\__ \ | | | || | _| |_ _
\_| \___|\__,_|\__|\__,_|_| \___||___/ \_| |_/\_| \___/ (_)
_____ _
|_ _| | |
| | _____ _| |_
| |/ _ \ \/ / __|
| | __/> <| |_
\_/\___/_/\_\\__|
Featurize folders of text files if default_text_features = ['blabla_features']
Learn more with the documenation - https://github.com/novoic/blabla
'''
import os, uuid
import numpy as np
import pandas as pd
def setup_blabla():
# assumes you have blabla installed
os.system('pip3 install blabla')
# install corenlp
os.chdir('helpers')
os.chdir('blabla')
os.system('./setup_corenlp.sh')
os.system('export CORENLP_HOME=/Users/jim/corenlp')
'''
CoreNLP (english) successfully installed at /Users/jim/corenlp
Now and in the future, run 'export CORENLP_HOME=/Users/jim/corenlp' before using BlaBla or add this command to your .bashrc/.profile or equivalent file
'''
# make sure javac is installed as well (assumes you have done this)
os.system('javac -version')
def blabla_featurize(transcript):
curdir=os.getcwd()
text_folder=str(uuid.uuid4())
os.mkdir(text_folder)
text_folderpath=os.getcwd()+'/'+text_folder
os.chdir(text_folder)
g=open('transcript.txt','w')
g.write(transcript)
g.close()
g=open('transcript2.txt','w')
g.write(transcript)
g.close()
os.chdir(curdir)
os.system('blabla compute-features -F helpers/blabla/example_configs/features.yaml -S helpers/blabla/stanza_config/stanza_config.yaml -i %s -o %s/blabla_features.csv -format string'%(text_folderpath, text_folderpath))
os.chdir(text_folder)
g=pd.read_csv('blabla_features.csv')
features=list(g.iloc[0,:][0:-1])
labels=list(g)[0:-1]
os.chdir(curdir)
return features, labels
# features, labels = blabla_featurize('This is the coolest transcript ever. This is the coolest transcript ever.')
# print(features)
# print(labels)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/text_features/grammar_features.py | features/text_features/grammar_features.py | '''
AAA lllllll lllllll iiii
A:::A l:::::l l:::::l i::::i
A:::::A l:::::l l:::::l iiii
A:::::::A l:::::l l:::::l
A:::::::::A l::::l l::::l iiiiiii eeeeeeeeeeee
A:::::A:::::A l::::l l::::l i:::::i ee::::::::::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::eeeee:::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::e e:::::e
A:::::A A:::::A l::::l l::::l i::::i e:::::::eeeee::::::e
A:::::AAAAAAAAA:::::A l::::l l::::l i::::i e:::::::::::::::::e
A:::::::::::::::::::::A l::::l l::::l i::::i e::::::eeeeeeeeeee
A:::::AAAAAAAAAAAAA:::::A l::::l l::::l i::::i e:::::::e
A:::::A A:::::A l::::::ll::::::li::::::ie::::::::e
A:::::A A:::::A l::::::ll::::::li::::::i e::::::::eeeeeeee
A:::::A A:::::A l::::::ll::::::li::::::i ee:::::::::::::e
AAAAAAA AAAAAAAlllllllllllllllliiiiiiii eeeeeeeeeeeeee
______ _ ___ ______ _____
| ___| | | / _ \ | ___ \_ _| _
| |_ ___ __ _| |_ _ _ _ __ ___ ___ / /_\ \| |_/ / | | (_)
| _/ _ \/ _` | __| | | | '__/ _ \/ __| | _ || __/ | |
| || __/ (_| | |_| |_| | | | __/\__ \ | | | || | _| |_ _
\_| \___|\__,_|\__|\__,_|_| \___||___/ \_| |_/\_| \___/ (_)
_____ _
|_ _| | |
| | _____ _| |_
| |/ _ \ \/ / __|
| | __/> <| |_
\_/\___/_/\_\\__|
Featurize folders of text files if default_text_features = ['grammar_features']
Inputs a text file and featurizes the text into many grammatical features.
This will produce a sparse matrix with many zeros, with a few significant features
and is memory-intensive.
'''
from itertools import permutations
import nltk
from nltk import load, word_tokenize
def grammar_featurize(importtext):
#now have super long string of text can do operations
#get all POS fromo Penn Treebank (POS tagger)
tagdict = load('help/tagsets/upenn_tagset.pickle')
nltk_pos_list=tagdict.keys()
#get all permutations of this list
perm=permutations(nltk_pos_list,3)
#make these permutations in a list
listobj=list()
for i in list(perm):
listobj.append(list(i))
#split by sentences? or by word? (will do by word here)
text=word_tokenize(importtext)
#tokens now
tokens=nltk.pos_tag(text)
#initialize new list for pos
pos_list=list()
#parse through entire document and tokenize every 3 words until end
for i in range(len(tokens)-3):
pos=[tokens[i][1],tokens[i+1][1],tokens[i+2][1]]
pos_list.append(pos)
#count each part of speech event and total event count
counts=list()
totalcounts=0
for i in range(len(listobj)):
count=pos_list.count(listobj[i])
totalcounts=totalcounts+count
counts.append(count)
#now create probabilities / frequencies from total count
freqs=list()
for i in range(len(counts)):
freq=counts[i]/totalcounts
freqs.append(freq)
#now you can sort lowest to highest frequency (this is commented out to keep order consistent)
# listobj.sort(key=lambda x: (x[3]),reverse=True)
features=freqs
labels=listobj
return features, labels
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/text_features/fast_features.py | features/text_features/fast_features.py | '''
AAA lllllll lllllll iiii
A:::A l:::::l l:::::l i::::i
A:::::A l:::::l l:::::l iiii
A:::::::A l:::::l l:::::l
A:::::::::A l::::l l::::l iiiiiii eeeeeeeeeeee
A:::::A:::::A l::::l l::::l i:::::i ee::::::::::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::eeeee:::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::e e:::::e
A:::::A A:::::A l::::l l::::l i::::i e:::::::eeeee::::::e
A:::::AAAAAAAAA:::::A l::::l l::::l i::::i e:::::::::::::::::e
A:::::::::::::::::::::A l::::l l::::l i::::i e::::::eeeeeeeeeee
A:::::AAAAAAAAAAAAA:::::A l::::l l::::l i::::i e:::::::e
A:::::A A:::::A l::::::ll::::::li::::::ie::::::::e
A:::::A A:::::A l::::::ll::::::li::::::i e::::::::eeeeeeee
A:::::A A:::::A l::::::ll::::::li::::::i ee:::::::::::::e
AAAAAAA AAAAAAAlllllllllllllllliiiiiiii eeeeeeeeeeeeee
______ _ ___ ______ _____
| ___| | | / _ \ | ___ \_ _| _
| |_ ___ __ _| |_ _ _ _ __ ___ ___ / /_\ \| |_/ / | | (_)
| _/ _ \/ _` | __| | | | '__/ _ \/ __| | _ || __/ | |
| || __/ (_| | |_| |_| | | | __/\__ \ | | | || | _| |_ _
\_| \___|\__,_|\__|\__,_|_| \___||___/ \_| |_/\_| \___/ (_)
_____ _
|_ _| | |
| | _____ _| |_
| |/ _ \ \/ / __|
| | __/> <| |_
\_/\___/_/\_\\__|
Featurize folders of text files if default_text_features = ['fast_features']
This is the Facebook FastText model:
https://fasttext.cc/docs/en/english-vectors.html
'''
import numpy as np
def fast_featurize(sentence,model):
size=300
sentences2=sentence.split()
w2v_embed=list()
for i in range(len(sentences2)):
try:
feature=model[sentences2[i]]
print(feature)
w2v_embed.append(feature)
except:
#pass if there is an error to not distort averages... :)
pass
out_embed=np.zeros(size)
for j in range(len(w2v_embed)):
out_embed=out_embed+np.array(w2v_embed[j])
out_embed=(1/len(w2v_embed))*out_embed
print(len(w2v_embed))
features=out_embed
labels=list()
for i in range(len(features)):
labels.append('fast_feature_%s'%(str(i+1)))
return features, labels
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/text_features/textacy_features.py | features/text_features/textacy_features.py | '''
AAA lllllll lllllll iiii
A:::A l:::::l l:::::l i::::i
A:::::A l:::::l l:::::l iiii
A:::::::A l:::::l l:::::l
A:::::::::A l::::l l::::l iiiiiii eeeeeeeeeeee
A:::::A:::::A l::::l l::::l i:::::i ee::::::::::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::eeeee:::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::e e:::::e
A:::::A A:::::A l::::l l::::l i::::i e:::::::eeeee::::::e
A:::::AAAAAAAAA:::::A l::::l l::::l i::::i e:::::::::::::::::e
A:::::::::::::::::::::A l::::l l::::l i::::i e::::::eeeeeeeeeee
A:::::AAAAAAAAAAAAA:::::A l::::l l::::l i::::i e:::::::e
A:::::A A:::::A l::::::ll::::::li::::::ie::::::::e
A:::::A A:::::A l::::::ll::::::li::::::i e::::::::eeeeeeee
A:::::A A:::::A l::::::ll::::::li::::::i ee:::::::::::::e
AAAAAAA AAAAAAAlllllllllllllllliiiiiiii eeeeeeeeeeeeee
______ _ ___ ______ _____
| ___| | | / _ \ | ___ \_ _| _
| |_ ___ __ _| |_ _ _ _ __ ___ ___ / /_\ \| |_/ / | | (_)
| _/ _ \/ _` | __| | | | '__/ _ \/ __| | _ || __/ | |
| || __/ (_| | |_| |_| | | | __/\__ \ | | | || | _| |_ _
\_| \___|\__,_|\__|\__,_|_| \___||___/ \_| |_/\_| \___/ (_)
_____ _
|_ _| | |
| | _____ _| |_
| |/ _ \ \/ / __|
| | __/> <| |_
\_/\___/_/\_\\__|
Featurize folders of text files if default_text_features = ['textacy_features']
Featurizes .TXT files with textacy - mostly intelligibility measures, as shown below.
More info on textacy can be found @ https://github.com/chartbeat-labs/textacy
-----
ts = textacy.TextStats(doc)
ts.n_unique_words
57
>>> ts.basic_counts
{'n_sents': 3,
'n_words': 73,
'n_chars': 414,
'n_syllables': 134,
'n_unique_words': 57,
'n_long_words': 30,
'n_monosyllable_words': 38,
'n_polysyllable_words': 19}
>>> ts.flesch_kincaid_grade_level
15.56027397260274
>>> ts.readability_stats
{'flesch_kincaid_grade_level': 15.56027397260274,
'flesch_reading_ease': 26.84351598173518,
'smog_index': 17.5058628484301,
'gunning_fog_index': 20.144292237442922,
'coleman_liau_index': 16.32928468493151,
'automated_readability_index': 17.448173515981736,
'lix': 65.42922374429223,
'gulpease_index': 44.61643835616438,
'wiener_sachtextformel': 11.857779908675797}
'''
import textacy, os
import numpy as np
import spacy
def stats(matrix):
mean=np.mean(matrix)
std=np.std(matrix)
maxv=np.amax(matrix)
minv=np.amin(matrix)
median=np.median(matrix)
output=np.array([mean,std,maxv,minv,median])
return output
def textacy_featurize(transcript):
features=list()
labels=list()
# use Spacy doc
try:
nlp=spacy.load('en_core_web_sm')
doc = nlp(transcript)
except:
os.system('python3 -m spacy download en_core_web_sm')
nlp=spacy.load('en_core_web_sm')
doc = nlp(transcript)
ts = textacy.TextStats(doc)
uniquewords=ts.n_unique_words
features.append(uniquewords)
labels.append('uniquewords')
mfeatures=ts.basic_counts
features=features+list(mfeatures.values())
labels=labels+list(mfeatures)
kincaid=ts.flesch_kincaid_grade_level
features.append(kincaid)
labels.append('flesch_kincaid_grade_level')
readability=ts.readability_stats
features=features+list(readability.values())
labels=labels+list(readability)
return features, labels
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/text_features/nltk_features.py | features/text_features/nltk_features.py | '''
AAA lllllll lllllll iiii
A:::A l:::::l l:::::l i::::i
A:::::A l:::::l l:::::l iiii
A:::::::A l:::::l l:::::l
A:::::::::A l::::l l::::l iiiiiii eeeeeeeeeeee
A:::::A:::::A l::::l l::::l i:::::i ee::::::::::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::eeeee:::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::e e:::::e
A:::::A A:::::A l::::l l::::l i::::i e:::::::eeeee::::::e
A:::::AAAAAAAAA:::::A l::::l l::::l i::::i e:::::::::::::::::e
A:::::::::::::::::::::A l::::l l::::l i::::i e::::::eeeeeeeeeee
A:::::AAAAAAAAAAAAA:::::A l::::l l::::l i::::i e:::::::e
A:::::A A:::::A l::::::ll::::::li::::::ie::::::::e
A:::::A A:::::A l::::::ll::::::li::::::i e::::::::eeeeeeee
A:::::A A:::::A l::::::ll::::::li::::::i ee:::::::::::::e
AAAAAAA AAAAAAAlllllllllllllllliiiiiiii eeeeeeeeeeeeee
______ _ ___ ______ _____
| ___| | | / _ \ | ___ \_ _| _
| |_ ___ __ _| |_ _ _ _ __ ___ ___ / /_\ \| |_/ / | | (_)
| _/ _ \/ _` | __| | | | '__/ _ \/ __| | _ || __/ | |
| || __/ (_| | |_| |_| | | | __/\__ \ | | | || | _| |_ _
\_| \___|\__,_|\__|\__,_|_| \___||___/ \_| |_/\_| \___/ (_)
_____ _
|_ _| | |
| | _____ _| |_
| |/ _ \ \/ / __|
| | __/> <| |_
\_/\___/_/\_\\__|
Featurize folders of text files if default_text_features = ['nltk_features']
Takes in a text sample and featurizes it with various text features.
I often find this feature set to be useful as a first-pass to see if
text features are relevant to your particular dataset.
Particularly, this is the output array of 63 text features:
['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n',
'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'space',
'numbers', 'capletters', 'cc', 'cd', 'dt', 'ex', 'in', 'jj', 'jjr',
'jjs', 'ls', 'md', 'nn', 'nnp', 'nns', 'pdt', 'pos', 'prp', 'prp2',
'rbr', 'rbs', 'rp', 'to', 'uh', 'vb', 'vbd', 'vbg', 'vbn', 'vbp',
'vbz', 'wdt', 'wp', 'wrb', 'polarity', 'subjectivity', 'repeat']
These are mostly character counts and parts of speech tags.
Check out the NLTK documentation or book for more information.
https://www.nltk.org/book/
'''
import nltk
from nltk import word_tokenize
import speech_recognition as sr_audio
import numpy as np
from textblob import TextBlob
import helpers.transcribe as ts
def nltk_featurize(transcript):
#alphabetical features
a=transcript.count('a')
b=transcript.count('b')
c=transcript.count('c')
d=transcript.count('d')
e=transcript.count('e')
f=transcript.count('f')
g_=transcript.count('g')
h=transcript.count('h')
i=transcript.count('i')
j=transcript.count('j')
k=transcript.count('k')
l=transcript.count('l')
m=transcript.count('m')
n=transcript.count('n')
o=transcript.count('o')
p=transcript.count('p')
q=transcript.count('q')
r=transcript.count('r')
s=transcript.count('s')
t=transcript.count('t')
u=transcript.count('u')
v=transcript.count('v')
w=transcript.count('w')
x=transcript.count('x')
y=transcript.count('y')
z=transcript.count('z')
space=transcript.count(' ')
#numerical features and capital letters
num1=transcript.count('0')+transcript.count('1')+transcript.count('2')+transcript.count('3')+transcript.count('4')+transcript.count('5')+transcript.count('6')+transcript.count('7')+transcript.count('8')+transcript.count('9')
num2=transcript.count('zero')+transcript.count('one')+transcript.count('two')+transcript.count('three')+transcript.count('four')+transcript.count('five')+transcript.count('six')+transcript.count('seven')+transcript.count('eight')+transcript.count('nine')+transcript.count('ten')
number=num1+num2
capletter=sum(1 for c in transcript if c.isupper())
#part of speech
text=word_tokenize(transcript)
g=nltk.pos_tag(transcript)
cc=0
cd=0
dt=0
ex=0
in_=0
jj=0
jjr=0
jjs=0
ls=0
md=0
nn=0
nnp=0
nns=0
pdt=0
pos=0
prp=0
prp2=0
rb=0
rbr=0
rbs=0
rp=0
to=0
uh=0
vb=0
vbd=0
vbg=0
vbn=0
vbp=0
vbp=0
vbz=0
wdt=0
wp=0
wrb=0
for i in range(len(g)):
if g[i][1] == 'CC':
cc=cc+1
elif g[i][1] == 'CD':
cd=cd+1
elif g[i][1] == 'DT':
dt=dt+1
elif g[i][1] == 'EX':
ex=ex+1
elif g[i][1] == 'IN':
in_=in_+1
elif g[i][1] == 'JJ':
jj=jj+1
elif g[i][1] == 'JJR':
jjr=jjr+1
elif g[i][1] == 'JJS':
jjs=jjs+1
elif g[i][1] == 'LS':
ls=ls+1
elif g[i][1] == 'MD':
md=md+1
elif g[i][1] == 'NN':
nn=nn+1
elif g[i][1] == 'NNP':
nnp=nnp+1
elif g[i][1] == 'NNS':
nns=nns+1
elif g[i][1] == 'PDT':
pdt=pdt+1
elif g[i][1] == 'POS':
pos=pos+1
elif g[i][1] == 'PRP':
prp=prp+1
elif g[i][1] == 'PRP$':
prp2=prp2+1
elif g[i][1] == 'RB':
rb=rb+1
elif g[i][1] == 'RBR':
rbr=rbr+1
elif g[i][1] == 'RBS':
rbs=rbs+1
elif g[i][1] == 'RP':
rp=rp+1
elif g[i][1] == 'TO':
to=to+1
elif g[i][1] == 'UH':
uh=uh+1
elif g[i][1] == 'VB':
vb=vb+1
elif g[i][1] == 'VBD':
vbd=vbd+1
elif g[i][1] == 'VBG':
vbg=vbg+1
elif g[i][1] == 'VBN':
vbn=vbn+1
elif g[i][1] == 'VBP':
vbp=vbp+1
elif g[i][1] == 'VBZ':
vbz=vbz+1
elif g[i][1] == 'WDT':
wdt=wdt+1
elif g[i][1] == 'WP':
wp=wp+1
elif g[i][1] == 'WRB':
wrb=wrb+1
#sentiment
tblob=TextBlob(transcript)
polarity=float(tblob.sentiment[0])
subjectivity=float(tblob.sentiment[1])
#word repeats
words=transcript.split()
newlist=transcript.split()
repeat=0
for i in range(len(words)):
newlist.remove(words[i])
if words[i] in newlist:
repeat=repeat+1
features=np.array([a,b,c,d,
e,f,g_,h,
i,j,k,l,
m,n,o,p,
q,r,s,t,
u,v,w,x,
y,z,space,number,
capletter,cc,cd,dt,
ex,in_,jj,jjr,
jjs,ls,md,nn,
nnp,nns,pdt,pos,
prp,prp2,rbr,rbs,
rp,to,uh,vb,
vbd,vbg,vbn,vbp,
vbz,wdt,wp,wrb,
polarity,subjectivity,repeat])
labels=['a', 'b', 'c', 'd',
'e','f','g','h',
'i', 'j', 'k', 'l',
'm','n','o', 'p',
'q','r','s','t',
'u','v','w','x',
'y','z','space', 'numbers',
'capletters','cc','cd','dt',
'ex','in','jj','jjr',
'jjs','ls','md','nn',
'nnp','nns','pdt','pos',
'prp','prp2','rbr','rbs',
'rp','to','uh','vb',
'vbd','vbg','vbn','vbp',
'vbz', 'wdt', 'wp','wrb',
'polarity', 'subjectivity','repeat']
return features, labels
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/text_features/text2_features.py | features/text_features/text2_features.py | import nltk
nltk.download('universal_tagset')
from nltk.tokenize import RegexpTokenizer, word_tokenize
from nltk.tokenize import word_tokenize
from nltk import FreqDist
import numpy as np
import math
def filler_ratio(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
tokenizer = RegexpTokenizer('uh|ugh|um|like|you know')
qtokens = tokenizer.tokenize(s.lower())
if len(tokens) == 0:
return float(0)
else:
return float(len(qtokens)) / float(len(tokens))
def type_token_ratio(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
uniques = []
for token, count in FreqDist(tokens).items():
if count == 1:
uniques.append(token)
if len(tokens) == 0:
return float(0)
else:
return float(len(uniques)) / float(len(tokens))
def entropy(tokens):
freqdist = FreqDist(tokens)
probs = [freqdist.freq(l) for l in freqdist]
return -sum(p * math.log(p, 2) for p in probs)
def standardized_word_entropy(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
if len(tokens) == 0:
return float(0)
else:
if math.log(len(tokens)) == 0:
return float(0)
else:
return entropy(tokens) / math.log(len(tokens))
def question_ratio(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
tokenizer = RegexpTokenizer('Who|What|When|Where|Why|How|\?')
qtokens = tokenizer.tokenize(s)
if len(tokens) == 0:
return float(0)
else:
return float(len(qtokens)) / float(len(tokens))
def number_ratio(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
tokenizer = RegexpTokenizer('zero|one|two|three|four|five|six|seven|eight|nine|ten|eleven|twelve|thirteen|fourteen|fifteen|sixteen|seventeen|eighteen|nineteen|twenty|thirty|forty|fifty|sixty|seventy|eighty|ninety|hundred|thousand|million|billion|trillion|dozen|couple|several|few|\d')
qtokens = tokenizer.tokenize(s.lower())
if len(tokens) == 0:
return float(0)
else:
return float(len(qtokens)) / float(len(tokens))
def brunets_index(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
N = float(len(tokens))
V = float(len(set(tokens)))
if N == 0 or V == 0:
return float(0)
else:
return math.pow(N, math.pow(V, -0.165))
def honores_statistic(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
uniques = []
for token, count in FreqDist(tokens).items():
if count == 1:
uniques.append(token)
N = float(len(tokens))
V = float(len(set(tokens)))
V1 = float(len(uniques))
if N == 0 or V == 0 or V1 == 0:
return float(0)
elif V == V1:
return (100 * math.log(N))
else:
return (100 * math.log(N)) / (1 - (V1 / V))
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag, map_tag
def pronoun_to_noun_ratio(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
pos = pos_tag(tokens)
pronouns = []
nouns = []
for [token, tag] in pos:
part = map_tag('en-ptb', 'universal', tag)
if part == "PRON":
pronouns.append(token)
for [token, tag] in pos:
part = map_tag('en-ptb', 'universal', tag)
if part == "NOUN":
nouns.append(token)
if len(nouns) == 0:
return float(0)
else:
return float(len(pronouns)) / float(len(nouns))
def wpm(s, tokens, duration):
r = float(duration / 60)
return len(tokens) / r
def text2_featurize(transcript):
features=[filler_ratio(transcript),
type_token_ratio(transcript),
standardized_word_entropy(transcript),
question_ratio(transcript),
number_ratio(transcript),
brunets_index(transcript),
honores_statistic(transcript),
pronoun_to_noun_ratio(transcript)]
labels=['filler_ratio', 'type_token_ratio', 'standardized_word_entropy',
'question_ratio', 'number_ratio', 'brunets_index', 'honores_statistic',
'pronoun_to_noun_ratio']
return features, labels
# features, labels =text2_featurize('this is a test transcript.')
# print(features)
# print(labels)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/text_features/spacy_features.py | features/text_features/spacy_features.py | '''
AAA lllllll lllllll iiii
A:::A l:::::l l:::::l i::::i
A:::::A l:::::l l:::::l iiii
A:::::::A l:::::l l:::::l
A:::::::::A l::::l l::::l iiiiiii eeeeeeeeeeee
A:::::A:::::A l::::l l::::l i:::::i ee::::::::::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::eeeee:::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::e e:::::e
A:::::A A:::::A l::::l l::::l i::::i e:::::::eeeee::::::e
A:::::AAAAAAAAA:::::A l::::l l::::l i::::i e:::::::::::::::::e
A:::::::::::::::::::::A l::::l l::::l i::::i e::::::eeeeeeeeeee
A:::::AAAAAAAAAAAAA:::::A l::::l l::::l i::::i e:::::::e
A:::::A A:::::A l::::::ll::::::li::::::ie::::::::e
A:::::A A:::::A l::::::ll::::::li::::::i e::::::::eeeeeeee
A:::::A A:::::A l::::::ll::::::li::::::i ee:::::::::::::e
AAAAAAA AAAAAAAlllllllllllllllliiiiiiii eeeeeeeeeeeeee
______ _ ___ ______ _____
| ___| | | / _ \ | ___ \_ _| _
| |_ ___ __ _| |_ _ _ _ __ ___ ___ / /_\ \| |_/ / | | (_)
| _/ _ \/ _` | __| | | | '__/ _ \/ __| | _ || __/ | |
| || __/ (_| | |_| |_| | | | __/\__ \ | | | || | _| |_ _
\_| \___|\__,_|\__|\__,_|_| \___||___/ \_| |_/\_| \___/ (_)
_____ _
|_ _| | |
| | _____ _| |_
| |/ _ \ \/ / __|
| | __/> <| |_
\_/\___/_/\_\\__|
Featurize folders of text files if default_text_features = ['spacy_features']
Extract linguistic features using the spacy library.
This is one of many things the spacy library can do.
Extracts 315 features with labels below:
['PROPN', 'ADP', 'DET', 'NUM', 'PUNCT', 'SPACE',
'VERB', 'NOUN', 'ADV', 'CCONJ', 'PRON', 'ADJ',
'SYM', 'PART', 'INTJ', 'X', 'pos_other', 'NNP',
'IN', 'DT', 'CD', 'NNPS', ',', '_SP', 'VBZ', 'NN',
'RB', 'CC', '', 'NNS', '.', 'PRP', 'MD', 'VB',
'HYPH', 'VBD', 'JJ', ':', '-LRB-', '$', '-RRB-',
'VBG', 'VBN', 'NFP', 'RBR', 'POS', 'VBP', 'RP',
'JJS', 'PRP$', 'EX', 'JJR', 'WP', 'WDT', 'TO',
'WRB', "''", '``', 'PDT', 'AFX', 'RBS', 'UH',
'WP$', 'FW', 'XX', 'SYM', 'LS', 'ADD', 'tag_other',
'compound', 'ROOT', 'prep', 'det', 'pobj',
'nummod', 'punct', '', 'nsubj', 'advmod',
'cc', 'conj', 'aux', 'dobj', 'nmod', 'acl',
'appos', 'npadvmod', 'amod', 'agent', 'case',
'intj', 'prt', 'pcomp', 'ccomp', 'attr',
'dep', 'acomp', 'poss', 'auxpass', 'expl',
'mark', 'nsubjpass', 'quantmod', 'advcl', 'relcl',
'oprd', 'neg', 'xcomp', 'csubj', 'predet', 'parataxis',
'dative', 'preconj', 'csubjpass', 'meta', 'dep_other',
'\ufeffXxx', 'Xxxxx', 'XXxxx', 'xx', 'X', 'Xxxx', 'Xxx',
',', '\n\n', 'xXxxx', 'xxx', 'xxxx', '\n', '.', ' ',
'-', 'xxx.xxxx.xxx', '\n\n\n', ':', '\n ',
'dddd', '[', '#', 'dd', ']', 'd', 'XXX-d',
'*', 'XXXX', 'XX', 'XXX', '\n\n\n\n',
'Xx', '\n\n\n ', '--', '\n\n ',
' ', ' ', ' ', "'x", 'x', 'X.', 'xxx--',
';', 'Xxx.', '(', ')', "'", '“', '”', 'Xx.',
'!', "'xx", 'xx!--Xxx', "x'xxxx", '?',
'_', "x'x", "x'xx", "Xxx'xxxx", 'Xxxxx--',
'xxxx--', '--xxxx', 'X--', 'xx--', 'xxxx”--xxx', 'xxx--“xxxx',
"Xxx'x", ';--', 'xxx--_xxx', "xxx'x", 'xxx!--xxxx',
'xxxx?--_Xxx', "Xxxxx'x", 'xxxx--“xxxx', "xxxx'xxx",
'--Xxxxx', ',--', '?--', 'xx--“xx', 'xx!--X', '.--',
'xxx--“xxx', ':--', 'Xxxxx--“xxxx', 'xxxx!--xxxx',
'xx”--xxx', 'xxxx--_xxx', 'xxxx--“xxx', '--xx',
'--X', 'xxxx!--Xxx', '--xxx', 'xxx_.', 'xxxx--_xx',
'xxxx--_xx_xxxx', 'xx!--xxxx', 'xxxx!--xx', "X'xx",
"xxxx'x", "X_'x", "xxx'xxx", '--Xxxx', "X'Xxxxx",
"Xx'xxxx", '--Xxx', 'xxxx”--xxxx', 'xxxx!--',
'xxxx--“x', 'Xxxx!--Xxxx', 'xxx!--Xxx',
'Xxxxx.', 'xxxx_.', 'xx--“Xxxx', '\n\n ',
'Xxxxx”--xxx', 'xxxx”--xx', 'xxxx--“xx',
"Xxxxx!--Xxx'x", "X'xxxx", 'Xxxxx?--',
'--Xx', 'xxxx!”--Xx', "xxxx--“X'x", "xxxx'", 'xxx.--“Xxxx',
'xxxx--“X', 'xxxx!--X', 'Xxx”--xx', 'xxx”--xxx', 'xxx-_xxx',
"x'Xxxxx", 'Xxxxx!--X', 'Xxxxx!--Xxx', 'dd-d.xxx', 'xxxx://xxx.xxxx.xxx/d/dd/',
'xXxxxx', 'xxxx://xxxx.xxx/xxxx', 'd.X.', '/', 'd.X.d', 'd.X', '%',
'Xd', 'xxxx://xxx.xxxx.xxx', 'ddd(x)(d', 'X.X.', 'ddd', 'xxxx@xxxx.xxx',
'xxxx://xxxx.xxx', '$', 'd,ddd', 'shape_other', 'mean sentence polarity',
'std sentence polarity', 'max sentence polarity', 'min sentence polarity',
'median sentence polarity', 'mean sentence subjectivity',
'std sentence subjectivity', 'max sentence subjectivity',
'min sentence subjectivity', 'median sentence subjectivity',
'character count', 'word count', 'sentence number', 'words per sentence',
'unique chunk noun text', 'unique chunk root text',
'unique chunk root head text', 'chunkdep ROOT', 'chunkdep pobj',
'chunkdep nsubj', 'chunkdep dobj', 'chunkdep conj', 'chunkdep appos',
'chunkdep attr', 'chunkdep nsubjpass', 'chunkdep dative', 'chunkdep pcomp',
'number of named entities', 'PERSON', 'NORP', 'FAC', 'ORG', 'GPE', 'LOC',
'PRODUCT', 'EVENT', 'WORK_OF_ART', 'LAW', 'LANGUAGE', 'DATE', 'TIME',
'PERCENT', 'MONEY', 'QUANTITY', 'ORDINAL', 'CARDINAL']
'''
import spacy
from spacy.symbols import nsubj, VERB
from textblob import TextBlob
import numpy as np
def stats(matrix):
mean=np.mean(matrix)
std=np.std(matrix)
maxv=np.amax(matrix)
minv=np.amin(matrix)
median=np.median(matrix)
output=np.array([mean,std,maxv,minv,median])
return output
def spacy_featurize(transcript):
try:
nlp=spacy.load('en_core_web_sm')
except:
os.system('python3 -m spacy download en_core_web_sm')
nlp=spacy.load('en_core_web_sm')
doc=nlp(transcript)
# initialize lists
entity_types=['PERSON','NORP','FAC','ORG',
'GPE','LOC','PRODUCT','EVENT',
'WORK_OF_ART','LAW','LANGUAGE',
'DATE','TIME','PERCENT','MONEY',
'QUANTITY','ORDINAL','CARDINAL']
pos_types=['PROPN', 'ADP', 'DET', 'NUM',
'PUNCT', 'SPACE', 'VERB', 'NOUN',
'ADV', 'CCONJ', 'PRON', 'ADJ',
'SYM', 'PART', 'INTJ', 'X']
tag_types=['NNP', 'IN', 'DT', 'CD',
'NNPS', ',', '_SP', 'VBZ',
'NN', 'RB', 'CC', '', 'NNS',
'.', 'PRP', 'MD', 'VB',
'HYPH', 'VBD', 'JJ', ':',
'-LRB-', '$', '-RRB-', 'VBG',
'VBN', 'NFP', 'RBR', 'POS',
'VBP', 'RP', 'JJS', 'PRP$',
'EX', 'JJR', 'WP', 'WDT',
'TO', 'WRB', "''", '``',
'PDT', 'AFX', 'RBS', 'UH',
'WP$', 'FW', 'XX', 'SYM', 'LS',
'ADD']
dep_types=['compound', 'ROOT', 'prep', 'det',
'pobj', 'nummod', 'punct', '',
'nsubj', 'advmod', 'cc', 'conj',
'aux', 'dobj', 'nmod', 'acl',
'appos', 'npadvmod', 'amod', 'agent',
'case', 'intj', 'prt', 'pcomp',
'ccomp', 'attr', 'dep', 'acomp',
'poss', 'auxpass', 'expl', 'mark',
'nsubjpass', 'quantmod', 'advcl', 'relcl',
'oprd', 'neg', 'xcomp', 'csubj',
'predet', 'parataxis', 'dative', 'preconj',
'csubjpass', 'meta']
shape_types=['\ufeffXxx', 'Xxxxx', 'XXxxx', 'xx',
'X', 'Xxxx', 'Xxx', ',', '\n\n',
'xXxxx', 'xxx', 'xxxx', '\n',
'.', ' ', '-', 'xxx.xxxx.xxx', '\n\n\n',
':', '\n ', 'dddd', '[', '#', 'dd', ']',
'd', 'XXX-d', '*', 'XXXX',
'XX', 'XXX', '\n\n\n\n', 'Xx',
'\n\n\n ', '--', '\n\n ', ' ',
' ', ' ', "'x", 'x',
'X.', 'xxx--', ';', 'Xxx.',
'(', ')', "'", '“', '”',
'Xx.', '!', "'xx", 'xx!--Xxx',
"x'xxxx", '?', '_', "x'x", "x'xx",
"Xxx'xxxx", 'Xxxxx--', 'xxxx--',
'--xxxx', 'X--', 'xx--', 'xxxx”--xxx',
'xxx--“xxxx', "Xxx'x", ';--',
'xxx--_xxx', "xxx'x", 'xxx!--xxxx', 'xxxx?--_Xxx',
"Xxxxx'x", 'xxxx--“xxxx', "xxxx'xxx", '--Xxxxx',
',--', '?--', 'xx--“xx', 'xx!--X',
'.--', 'xxx--“xxx', ':--', 'Xxxxx--“xxxx',
'xxxx!--xxxx', 'xx”--xxx', 'xxxx--_xxx', 'xxxx--“xxx',
'--xx', '--X', 'xxxx!--Xxx', '--xxx',
'xxx_.', 'xxxx--_xx', 'xxxx--_xx_xxxx', 'xx!--xxxx',
'xxxx!--xx', "X'xx", "xxxx'x", "X_'x",
"xxx'xxx", '--Xxxx', "X'Xxxxx", "Xx'xxxx",
'--Xxx', 'xxxx”--xxxx', 'xxxx!--',
'xxxx--“x', 'Xxxx!--Xxxx', 'xxx!--Xxx', 'Xxxxx.',
'xxxx_.', 'xx--“Xxxx', '\n\n ', 'Xxxxx”--xxx',
'xxxx”--xx', 'xxxx--“xx', "Xxxxx!--Xxx'x", "X'xxxx",
'Xxxxx?--', '--Xx', 'xxxx!”--Xx', "xxxx--“X'x", "xxxx'",
'xxx.--“Xxxx', 'xxxx--“X', 'xxxx!--X', 'Xxx”--xx', 'xxx”--xxx',
'xxx-_xxx', "x'Xxxxx", 'Xxxxx!--X', 'Xxxxx!--Xxx',
'dd-d.xxx', 'xxxx://xxx.xxxx.xxx/d/dd/', 'xXxxxx', 'xxxx://xxxx.xxx/xxxx',
'd.X.', '/', 'd.X.d', 'd.X',
'%', 'Xd', 'xxxx://xxx.xxxx.xxx', 'ddd(x)(d',
'X.X.', 'ddd', 'xxxx@xxxx.xxx', 'xxxx://xxxx.xxx',
'$', 'd,ddd']
chunkdep_types=['ROOT', 'pobj', 'nsubj', 'dobj', 'conj',
'appos', 'attr', 'nsubjpass', 'dative', 'pcomp']
# initialize lists
features=list()
labels=list()
poslist=list()
taglist=list()
deplist=list()
shapelist=list()
sentences=list()
sentence_length=0
sent_polarity=list()
sent_subjectivity=list()
# EXTRACT ALL TOKENS
for token in doc:
if token.pos_ in pos_types:
poslist.append(token.pos_)
else:
poslist.append('pos_other')
if token.tag_ in tag_types:
taglist.append(token.tag_)
else:
taglist.append('tag_other')
if token.dep_ in dep_types:
deplist.append(token.dep_)
else:
deplist.append('dep_other')
if token.shape_ in shape_types:
shapelist.append(token.shape_)
else:
shapelist.append('shape_other')
pos_types.append('pos_other')
tag_types.append('tag_other')
dep_types.append('dep_other')
shape_types.append('shape_other')
# count unique instances throughout entire tokenization
# keep labels as well
for i in range(len(pos_types)):
features.append(poslist.count(pos_types[i]))
labels.append(pos_types[i])
for i in range(len(tag_types)):
features.append(taglist.count(tag_types[i]))
labels.append(tag_types[i])
for i in range(len(dep_types)):
features.append(deplist.count(dep_types[i]))
labels.append(dep_types[i])
for i in range(len(shape_types)):
features.append(shapelist.count(shape_types[i]))
labels.append(shape_types[i])
# EXTRACT SENTENCES
for sent in doc.sents:
sentences.append(sent.text)
# NOW ITERATE OVER SENTENCES TO CALCULATE THINGS PER SENTENCE
for i in range(len(sentences)):
sent_polarity.append(TextBlob(sentences[i]).sentiment[0])
sent_subjectivity.append(TextBlob(sentences[i]).sentiment[1])
# STATISTICAL POLARITY AND SUBJECTIVITY FEATURES PER SENTENCE
sent_polarity=stats(np.array(sent_polarity))
for i in range(len(sent_polarity)):
features.append(sent_polarity[i])
if i == 0:
labels.append('mean sentence polarity')
elif i == 1:
labels.append('std sentence polarity')
elif i == 2:
labels.append('max sentence polarity')
elif i == 3:
labels.append('min sentence polarity')
elif i == 4:
labels.append('median sentence polarity')
sent_subjectivity=stats(np.array(sent_subjectivity))
for i in range(len(sent_subjectivity)):
features.append(sent_subjectivity[i])
if i ==0:
labels.append('mean sentence subjectivity')
elif i==1:
labels.append('std sentence subjectivity')
elif i==2:
labels.append('max sentence subjectivity')
elif i==3:
labels.append('min sentence subjectivity')
elif i==4:
labels.append('median sentence subjectivity')
# CHARACTERS
characters=len(transcript)
features.append(characters)
labels.append('character count')
# TOTAL NUMBER OF WORDS
words=len(transcript.split())
features.append(words)
labels.append('word count')
# TOTAL NUMBER OF SENTENCES
sentence_num=len(sentences)
features.append(sentence_num)
labels.append('sentence number')
# WORDS PER SENTENCE
wps=sentence_num/words
features.append(wps)
labels.append('words per sentence')
# NEED TO GET MORE FEATURES
#_________________________
# EXTRACT NOUN CHUNKS
chunktext=list()
chunkroot=list()
chunkdep=list()
chunkhead=list()
for chunk in doc.noun_chunks:
if chunk.text not in chunk.text:
chunktext.append(chunk.text)
#print('text:'+chunk.text)
if chunk.root.text not in chunkroot:
chunkroot.append(chunk.root.text)
# later extract chunkdep
chunkdep.append(chunk.root.dep_)
if chunk.root.head.text not in chunkhead:
chunkhead.append(chunk.root.head.text)
features.append(len(chunktext))
labels.append('unique chunk noun text')
features.append(len(chunkroot))
labels.append('unique chunk root text')
features.append(len(chunkhead))
labels.append('unique chunk root head text')
for i in range(len(chunkdep_types)):
features.append(chunkdep.count(chunkdep_types[i]))
labels.append('chunkdep '+chunkdep_types[i])
# EXTRACT NAMED ENTITY FREQUENCIES
ent_texts=list()
ent_labels=list()
for ent in doc.ents:
ent_texts.append(ent.text)
ent_labels.append(ent.label_)
features.append(len(ent_texts))
labels.append('number of named entities')
for i in range(len(entity_types)):
features.append(ent_labels.count(entity_types[i]))
labels.append(entity_types[i])
return features, labels
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/text_features/text_features.py | features/text_features/text_features.py | '''
AAA lllllll lllllll iiii
A:::A l:::::l l:::::l i::::i
A:::::A l:::::l l:::::l iiii
A:::::::A l:::::l l:::::l
A:::::::::A l::::l l::::l iiiiiii eeeeeeeeeeee
A:::::A:::::A l::::l l::::l i:::::i ee::::::::::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::eeeee:::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::e e:::::e
A:::::A A:::::A l::::l l::::l i::::i e:::::::eeeee::::::e
A:::::AAAAAAAAA:::::A l::::l l::::l i::::i e:::::::::::::::::e
A:::::::::::::::::::::A l::::l l::::l i::::i e::::::eeeeeeeeeee
A:::::AAAAAAAAAAAAA:::::A l::::l l::::l i::::i e:::::::e
A:::::A A:::::A l::::::ll::::::li::::::ie::::::::e
A:::::A A:::::A l::::::ll::::::li::::::i e::::::::eeeeeeee
A:::::A A:::::A l::::::ll::::::li::::::i ee:::::::::::::e
AAAAAAA AAAAAAAlllllllllllllllliiiiiiii eeeeeeeeeeeeee
______ _ ___ ______ _____
| ___| | | / _ \ | ___ \_ _| _
| |_ ___ __ _| |_ _ _ _ __ ___ ___ / /_\ \| |_/ / | | (_)
| _/ _ \/ _` | __| | | | '__/ _ \/ __| | _ || __/ | |
| || __/ (_| | |_| |_| | | | __/\__ \ | | | || | _| |_ _
\_| \___|\__,_|\__|\__,_|_| \___||___/ \_| |_/\_| \___/ (_)
_____ _
|_ _| | |
| | _____ _| |_
| |/ _ \ \/ / __|
| | __/> <| |_
\_/\___/_/\_\\__|
Featurize folders of text files if default_text_features = ['text_features']
This extracts many linguistic features such as the filler ratio,
type_token_ratio, entropy, standardized_word_entropy,
question_ratio, number_ratio, brunet's index, honore's statistic,
and many others.
'''
from nltk.tokenize import RegexpTokenizer, word_tokenize
from nltk.tokenize import word_tokenize
from nltk import FreqDist
import numpy as np
import math
def filler_ratio(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
tokenizer = RegexpTokenizer('uh|ugh|um|like|you know')
qtokens = tokenizer.tokenize(s.lower())
if len(tokens) == 0:
return float(0)
else:
return float(len(qtokens)) / float(len(tokens))
def type_token_ratio(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
uniques = []
for token, count in FreqDist(tokens).items():
if count == 1:
uniques.append(token)
if len(tokens) == 0:
return float(0)
else:
return float(len(uniques)) / float(len(tokens))
def entropy(tokens):
freqdist = FreqDist(tokens)
probs = [freqdist.freq(l) for l in freqdist]
return -sum(p * math.log(p, 2) for p in probs)
def standardized_word_entropy(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
if len(tokens) == 0:
return float(0)
else:
if math.log(len(tokens)) == 0:
return float(0)
else:
return entropy(tokens) / math.log(len(tokens))
def question_ratio(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
tokenizer = RegexpTokenizer('Who|What|When|Where|Why|How|\?')
qtokens = tokenizer.tokenize(s)
if len(tokens) == 0:
return float(0)
else:
return float(len(qtokens)) / float(len(tokens))
def number_ratio(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
tokenizer = RegexpTokenizer('zero|one|two|three|four|five|six|seven|eight|nine|ten|eleven|twelve|thirteen|fourteen|fifteen|sixteen|seventeen|eighteen|nineteen|twenty|thirty|forty|fifty|sixty|seventy|eighty|ninety|hundred|thousand|million|billion|trillion|dozen|couple|several|few|\d')
qtokens = tokenizer.tokenize(s.lower())
if len(tokens) == 0:
return float(0)
else:
return float(len(qtokens)) / float(len(tokens))
def brunets_index(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
N = float(len(tokens))
V = float(len(set(tokens)))
if N == 0 or V == 0:
return float(0)
else:
return math.pow(N, math.pow(V, -0.165))
def honores_statistic(s, tokens = None):
if tokens == None:
tokens = word_tokenize(s)
uniques = []
for token, count in FreqDist(tokens).items():
if count == 1:
uniques.append(token)
N = float(len(tokens))
V = float(len(set(tokens)))
V1 = float(len(uniques))
if N == 0 or V == 0 or V1 == 0:
return float(0)
elif V == V1:
return (100 * math.log(N))
else:
return (100 * math.log(N)) / (1 - (V1 / V))
def datewords_freq(importtext):
text=word_tokenize(importtext.lower())
datewords=['time','monday','tuesday','wednesday','thursday','friday','saturday','sunday','january','february','march','april','may','june','july','august','september','november','december','year','day','hour','today','month',"o'clock","pm","am"]
datewords2=list()
for i in range(len(datewords)):
datewords2.append(datewords[i]+'s')
datewords=datewords+datewords2
datecount=0
for i in range(len(text)):
if text[i].lower() in datewords:
datecount=datecount+1
datewords=datecount
try:
datewordfreq=datecount/len(text)
except:
datewordfreq=0
return datewordfreq
def word_stats(importtext):
text=word_tokenize(importtext)
#average word length
awords=list()
for i in range(len(text)):
awords.append(len(text[i]))
awordlength=np.mean(awords)
#all words greater than 5 in length
fivewords= [w for w in text if len(w) > 5]
fivewordnum=len(fivewords)
#maximum word length
vmax=np.amax(awords)
#minimum word length
vmin=np.amin(awords)
#variance of the vocabulary
vvar=np.var(awords)
#stdev of vocabulary
vstd=np.std(awords)
features=[float(awordlength),float(fivewordnum), float(vmax),float(vmin),float(vvar),float(vstd)]
return features
def num_sentences(importtext):
#actual number of periods
periods=importtext.count('.')
#count number of questions
questions=importtext.count('?')
#count number of interjections
interjections=importtext.count('!')
#actual number of sentences
sentencenum=periods+questions+interjections
return [sentencenum,periods,questions,interjections]
def word_repeats(importtext):
tokens=word_tokenize(importtext)
tenwords=list()
tenwords2=list()
repeatnum=0
repeatedwords=list()
#make number of sentences
for i in range(0,len(tokens),10):
tenwords.append(i)
for j in range(0,len(tenwords)):
if j not in [len(tenwords)-2,len(tenwords)-1]:
tenwords2.append(tokens[tenwords[j]:tenwords[j+1]])
else:
pass
#now parse for word repeats sentence-over-sentence
for k in range(0,len(tenwords2)):
if k<len(tenwords2)-1:
for l in range(10):
if tenwords2[k][l] in tenwords2[k+1]:
repeatnum=repeatnum+1
repeatedwords.append(tenwords2[k][l])
if tenwords2[k+1][l] in tenwords2[k]:
repeatnum=repeatnum+1
repeatedwords.append(tenwords2[k+1][l])
else:
pass
#calculate the number of sentences and repeat word avg per sentence
sentencenum=len(tenwords)
repeatavg=repeatnum/sentencenum
#repeated word freqdist
return [repeatavg]
def text_featurize(transcript):
labels=list()
features=list()
# extract features
features1=[filler_ratio(transcript),
type_token_ratio(transcript),
standardized_word_entropy(transcript),
question_ratio(transcript),
number_ratio(transcript),
brunets_index(transcript),
honores_statistic(transcript),
datewords_freq(transcript)]
features2=word_stats(transcript)
features3=num_sentences(transcript)
features4=word_repeats(transcript)
# extract labels
labels1=['filler ratio', 'type token ratio', 'standardized word entropy', 'question ratio', 'number ratio', 'Brunets Index',
'Honores statistic', 'datewords freq']
labels2=['word number', 'five word count', 'max word length', 'min word length', 'variance of vocabulary', 'std of vocabulary']
labels3=['sentencenum', 'periods', 'questions', 'interjections']
labels4=['repeatavg']
features=features1+features2+features3+features4
labels=labels1+labels2+labels3+labels4
return features, labels | python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/text_features/bert_features.py | features/text_features/bert_features.py | '''
AAA lllllll lllllll iiii
A:::A l:::::l l:::::l i::::i
A:::::A l:::::l l:::::l iiii
A:::::::A l:::::l l:::::l
A:::::::::A l::::l l::::l iiiiiii eeeeeeeeeeee
A:::::A:::::A l::::l l::::l i:::::i ee::::::::::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::eeeee:::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::e e:::::e
A:::::A A:::::A l::::l l::::l i::::i e:::::::eeeee::::::e
A:::::AAAAAAAAA:::::A l::::l l::::l i::::i e:::::::::::::::::e
A:::::::::::::::::::::A l::::l l::::l i::::i e::::::eeeeeeeeeee
A:::::AAAAAAAAAAAAA:::::A l::::l l::::l i::::i e:::::::e
A:::::A A:::::A l::::::ll::::::li::::::ie::::::::e
A:::::A A:::::A l::::::ll::::::li::::::i e::::::::eeeeeeee
A:::::A A:::::A l::::::ll::::::li::::::i ee:::::::::::::e
AAAAAAA AAAAAAAlllllllllllllllliiiiiiii eeeeeeeeeeeeee
______ _ ___ ______ _____
| ___| | | / _ \ | ___ \_ _| _
| |_ ___ __ _| |_ _ _ _ __ ___ ___ / /_\ \| |_/ / | | (_)
| _/ _ \/ _` | __| | | | '__/ _ \/ __| | _ || __/ | |
| || __/ (_| | |_| |_| | | | __/\__ \ | | | || | _| |_ _
\_| \___|\__,_|\__|\__,_|_| \___||___/ \_| |_/\_| \___/ (_)
_____ _
|_ _| | |
| | _____ _| |_
| |/ _ \ \/ / __|
| | __/> <| |_
\_/\___/_/\_\\__|
Featurize folders of text files if default_text_features = ['bert_features']
This is from the BERT Model. Read more about the BERT model embedding @
https://github.com/UKPLab/sentence-transformers
'''
import os
try:
from sentence_transformers import SentenceTransformer
except:
os.system("pip3 install sentence_transformers==0.4.1.2")
import numpy as np
def bert_featurize(sentence,model):
features = model.encode(sentence)
labels=list()
for i in range(len(features)):
labels.append('bert_feature_%s'%(str(i+1)))
return features, labels
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/text_features/w2v_features.py | features/text_features/w2v_features.py | '''
AAA lllllll lllllll iiii
A:::A l:::::l l:::::l i::::i
A:::::A l:::::l l:::::l iiii
A:::::::A l:::::l l:::::l
A:::::::::A l::::l l::::l iiiiiii eeeeeeeeeeee
A:::::A:::::A l::::l l::::l i:::::i ee::::::::::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::eeeee:::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::e e:::::e
A:::::A A:::::A l::::l l::::l i::::i e:::::::eeeee::::::e
A:::::AAAAAAAAA:::::A l::::l l::::l i::::i e:::::::::::::::::e
A:::::::::::::::::::::A l::::l l::::l i::::i e::::::eeeeeeeeeee
A:::::AAAAAAAAAAAAA:::::A l::::l l::::l i::::i e:::::::e
A:::::A A:::::A l::::::ll::::::li::::::ie::::::::e
A:::::A A:::::A l::::::ll::::::li::::::i e::::::::eeeeeeee
A:::::A A:::::A l::::::ll::::::li::::::i ee:::::::::::::e
AAAAAAA AAAAAAAlllllllllllllllliiiiiiii eeeeeeeeeeeeee
______ _ ___ ______ _____
| ___| | | / _ \ | ___ \_ _| _
| |_ ___ __ _| |_ _ _ _ __ ___ ___ / /_\ \| |_/ / | | (_)
| _/ _ \/ _` | __| | | | '__/ _ \/ __| | _ || __/ | |
| || __/ (_| | |_| |_| | | | __/\__ \ | | | || | _| |_ _
\_| \___|\__,_|\__|\__,_|_| \___||___/ \_| |_/\_| \___/ (_)
_____ _
|_ _| | |
| | _____ _| |_
| |/ _ \ \/ / __|
| | __/> <| |_
\_/\___/_/\_\\__|
Featurize folders of text files if default_text_features = ['w2v_features']
This is the Google W2V embedding. Note that many w2v embeddings can be
created into the future for custom text embeddings across a variety of domains.
Following this tutorial:
https://machinelearningmastery.com/develop-word-embeddings-python-gensim/
'''
import numpy as np
def w2v_featurize(transcript,model):
sentences2=transcript.split()
size=300
w2v_embed=list()
for i in range(len(sentences2)):
try:
print(len(model[sentences2[i]]))
w2v_embed.append(model[sentences2[i]])
except:
#pass if there is an error to not distort averages... :)
pass
out_embed=np.zeros(size)
for j in range(len(w2v_embed)):
out_embed=out_embed+np.array(w2v_embed[j])
out_embed=(1/len(w2v_embed))*out_embed
features=out_embed
labels=list()
for i in range(len(features)):
labels.append('w2v_feature_%s'%(str(i+1)))
return features, labels
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/text_features/featurize.py | features/text_features/featurize.py | '''
AAA lllllll lllllll iiii
A:::A l:::::l l:::::l i::::i
A:::::A l:::::l l:::::l iiii
A:::::::A l:::::l l:::::l
A:::::::::A l::::l l::::l iiiiiii eeeeeeeeeeee
A:::::A:::::A l::::l l::::l i:::::i ee::::::::::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::eeeee:::::ee
A:::::A A:::::A l::::l l::::l i::::i e::::::e e:::::e
A:::::A A:::::A l::::l l::::l i::::i e:::::::eeeee::::::e
A:::::AAAAAAAAA:::::A l::::l l::::l i::::i e:::::::::::::::::e
A:::::::::::::::::::::A l::::l l::::l i::::i e::::::eeeeeeeeeee
A:::::AAAAAAAAAAAAA:::::A l::::l l::::l i::::i e:::::::e
A:::::A A:::::A l::::::ll::::::li::::::ie::::::::e
A:::::A A:::::A l::::::ll::::::li::::::i e::::::::eeeeeeee
A:::::A A:::::A l::::::ll::::::li::::::i ee:::::::::::::e
AAAAAAA AAAAAAAlllllllllllllllliiiiiiii eeeeeeeeeeeeee
______ _ ___ ______ _____
| ___| | | / _ \ | ___ \_ _| _
| |_ ___ __ _| |_ _ _ _ __ ___ ___ / /_\ \| |_/ / | | (_)
| _/ _ \/ _` | __| | | | '__/ _ \/ __| | _ || __/ | |
| || __/ (_| | |_| |_| | | | __/\__ \ | | | || | _| |_ _
\_| \___|\__,_|\__|\__,_|_| \___||___/ \_| |_/\_| \___/ (_)
_____ _
|_ _| | |
| | _____ _| |_
| |/ _ \ \/ / __|
| | __/> <| |_
\_/\___/_/\_\\__|
Featurize folders of text files with the default_text_features.
Usage: python3 featurize.py [folder] [featuretype]
All featuretype options include:
["bert_features", "fast_features", "glove_features", "grammar_features",
"nltk_features", "spacy_features", "text_features", "w2v_features"]
Read more @ https://github.com/jim-schwoebel/allie/tree/master/features/text_features
'''
##################################################
## Import statements ##
##################################################
import numpy as np
from gensim.models import Word2Vec
from gensim.models import KeyedVectors
from tqdm import tqdm
import helpers.transcribe as ts
import json, os, sys
import os, wget, zipfile, uuid
import shutil
##################################################
## Helper functions. ##
##################################################
def prev_dir(directory):
g=directory.split('/')
dir_=''
for i in range(len(g)):
if i != len(g)-1:
if i==0:
dir_=dir_+g[i]
else:
dir_=dir_+'/'+g[i]
# print(dir_)
return dir_
def text_featurize(feature_set, transcript, glovemodel, w2vmodel, fastmodel, bert_model):
if feature_set == 'nltk_features':
features, labels = nf.nltk_featurize(transcript)
elif feature_set == 'spacy_features':
features, labels = sf.spacy_featurize(transcript)
elif feature_set == 'glove_features':
features, labels=gf.glove_featurize(transcript, glovemodel)
elif feature_set == 'w2v_features':
features, labels=w2v.w2v_featurize(transcript, w2vmodel)
elif feature_set == 'fast_features':
features, labels=ff.fast_featurize(transcript, fastmodel)
elif feature_set == 'text_features':
features, labels=textf.text_featurize(transcript)
elif feature_set == 'grammar_features':
features, labels=grammarf.grammar_featurize(transcript)
elif feature_set == 'bert_features':
features, labels=bertf.bert_featurize(transcript, bert_model)
elif feature_set == 'blabla_feature':
features, labels=bbf.blabla_featurize(transcript)
# make sure all the features do not have any infinity or NaN
features=np.nan_to_num(np.array(features))
features=features.tolist()
return features, labels
def transcribe_text(default_text_transcriber, transcript):
## create a simple function to expand into the future
if default_text_transcriber == 'raw text':
transcript=transcript
else:
transcript=''
return transcript
# type in folder before downloading and loading large files.
foldername=sys.argv[1]
# get class label from folder name
labelname=foldername.split('/')
if labelname[-1]=='':
labelname=labelname[-2]
else:
labelname=labelname[-1]
##################################################
## Main script ##
##################################################
basedir=os.getcwd()
# directory=sys.argv[1]
settingsdir=prev_dir(basedir)
sys.path.append(settingsdir)
from standard_array import make_features
settingsdir=prev_dir(settingsdir)
settings=json.load(open(settingsdir+'/settings.json'))
os.chdir(basedir)
try:
feature_sets=[sys.argv[2]]
except:
feature_sets=settings['default_text_features']
default_text_transcribers=settings['default_text_transcriber']
text_transcribe=settings['transcribe_text']
# contextually load repositories here
if 'blabla_features' in feature_sets:
import blabla_features as bbf
if 'nltk_features' in feature_sets:
import nltk_features as nf
if 'spacy_features' in feature_sets:
import spacy_features as sf
if 'glove_features' in feature_sets:
import glove_features as gf
if 'w2v_features' in feature_sets:
import w2v_features as w2v
if 'fast_features' in feature_sets:
import fast_features as ff
if 'text_features' in feature_sets:
import text_features as textf
if 'grammar_features' in feature_sets:
import grammar_features as grammarf
if 'bert_features' in feature_sets:
import bert_features as bertf
from sentence_transformers import SentenceTransformer
bert_model=SentenceTransformer('bert-base-nli-mean-tokens')
else:
bert_model=[]
# can specify many types of features...
for j in range(len(feature_sets)):
feature_set=feature_sets[j]
glovemodel=[]
w2vmodel=[]
fastmodel=[]
if feature_set in ['nltk_features', 'spacy_features']:
# save memory by not loading any models that are not necessary.
glovemodel=[]
w2vmodel=[]
fastmodel=[]
else:
##################################################
## Load ML models ##
##################################################
# load GloVE model
if feature_set == 'glove_features':
from gensim.scripts.glove2word2vec import glove2word2vec
if 'glove.6B' not in os.listdir(os.getcwd()+'/helpers'):
curdir=os.getcwd()
print('downloading GloVe model...')
wget.download("http://neurolex.co/uploads/glove.6B.zip", "./helpers/glove.6B.zip")
print('extracting GloVe model')
zip_ref = zipfile.ZipFile(os.getcwd()+'/helpers/glove.6B.zip', 'r')
zip_ref.extractall(os.getcwd()+'/helpers/glove.6B')
zip_ref.close()
os.chdir(os.getcwd()+'/helpers/glove.6B')
glove_input_file = 'glove.6B.100d.txt'
word2vec_output_file = 'glove.6B.100d.txt.word2vec'
glove2word2vec(glove_input_file, word2vec_output_file)
os.chdir(curdir)
glovemodelname = 'glove.6B.100d.txt.word2vec'
print('-----------------')
print('loading GloVe model...')
glovemodel = KeyedVectors.load_word2vec_format(os.getcwd()+'/helpers/glove.6B/'+glovemodelname, binary=False)
print('loaded GloVe model...')
# load Google W2V model
elif feature_set == 'w2v_features':
if 'GoogleNews-vectors-negative300.bin' not in os.listdir(os.getcwd()+'/helpers'):
print('downloading Google W2V model...')
wget.download("http://neurolex.co/uploads/GoogleNews-vectors-negative300.bin", "./helpers/GoogleNews-vectors-negative300.bin")
w2vmodelname = 'GoogleNews-vectors-negative300.bin'
print('-----------------')
print('loading Google W2V model...')
w2vmodel = KeyedVectors.load_word2vec_format(os.getcwd()+'/helpers/'+w2vmodelname, binary=True)
print('loaded Google W2V model...')
# load facebook FastText model
elif feature_set == 'fast_features':
from gensim.models.fasttext import FastText
if 'wiki-news-300d-1M' not in os.listdir(os.getcwd()+'/helpers'):
print('downloading Facebook FastText model...')
wget.download("https://dl.fbaipublicfiles.com/fasttext/vectors-english/wiki-news-300d-1M.vec.zip", "./helpers/wiki-news-300d-1M.vec.zip")
zip_ref = zipfile.ZipFile(os.getcwd()+'/helpers/wiki-news-300d-1M.vec.zip', 'r')
zip_ref.extractall(os.getcwd()+'/helpers/wiki-news-300d-1M')
zip_ref.close()
print('-----------------')
print('loading Facebook FastText model...')
# Loading fasttext model
fastmodel = KeyedVectors.load_word2vec_format(os.getcwd()+'/helpers/wiki-news-300d-1M/wiki-news-300d-1M.vec')
print('loaded Facebook FastText model...')
# # rename files appropriately to eliminate ( and )
# os.chdir(foldername)
# listdir=os.listdir()
# for i in range(len(listdir)):
# if listdir[i].endswith('.txt'):
# id_=str(uuid.uuid4())
# os.rename(listdir[i], id_+'.txt')
# if listdir[i][0:-4]+'.json' in listdir:
# os.rename(listdir[i][0:-4]+'.json', id_+'.json')
# now get files and directory
os.chdir(foldername)
listdir=os.listdir()
cur_dir=os.getcwd()
# featurize all files accoridng to librosa featurize
for i in tqdm(range(len(listdir)), desc=labelname):
if listdir[i][-4:] in ['.txt']:
try:
sampletype='text'
os.chdir(cur_dir)
transcript=open(listdir[i]).read()
# I think it's okay to assume audio less than a minute here...
if listdir[i][0:-4]+'.json' not in listdir:
# make new .JSON if it is not there with base array schema.
basearray=make_features(sampletype)
# assume text_transcribe==True and add to transcript list
if text_transcribe==True:
transcript_list=basearray['transcripts']
for j in range(len(default_text_transcribers)):
default_text_transcriber=default_text_transcribers[j]
transcript_=transcribe_text(default_text_transcriber, transcript)
transcript_list['text'][default_text_transcriber]=transcript_
basearray['transcripts']=transcript_list
for j in range(len(feature_sets)):
feature_set=feature_sets[j]
# featurize the text file
features, labels = text_featurize(feature_set, transcript, glovemodel, w2vmodel, fastmodel, bert_model)
print(features)
try:
data={'features':features.tolist(),
'labels': labels}
except:
data={'features':features,
'labels': labels}
text_features=basearray['features']['text']
text_features[feature_set]=data
basearray['features']['text']=text_features
basearray['labels']=[foldername]
jsonfile=open(listdir[i][0:-4]+'.json','w')
json.dump(basearray, jsonfile)
jsonfile.close()
elif listdir[i][0:-4]+'.json' in listdir:
# load base array
basearray=json.load(open(listdir[i][0:-4]+'.json'))
# get transcript and update if necessary
transcript_list=basearray['transcripts']
# assume text_transcribe==True and add to transcript list
if text_transcribe==True:
for j in range(len(default_text_transcribers)):
default_text_transcriber=default_text_transcribers[j]
if default_text_transcriber not in list(transcript_list['text']):
transcript_=transcribe_text(default_text_transcriber, transcript)
transcript_list['text'][default_text_transcriber]=transcript_
basearray['transcripts']=transcript_list
for j in range(len(feature_sets)):
feature_set=feature_sets[j]
# re-featurize only if necessary
if feature_set not in list(basearray['features']['text']):
features, labels = text_featurize(feature_set, transcript, glovemodel, w2vmodel, fastmodel, bert_model)
print(features)
try:
data={'features':features.tolist(),
'labels': labels}
except:
data={'features':features,
'labels': labels}
basearray['features']['text'][feature_set]=data
# only add the label if necessary
label_list=basearray['labels']
if labelname not in label_list:
label_list.append(labelname)
basearray['labels']=label_list
# overwrite existing .JSON
jsonfile=open(listdir[i][0:-4]+'.json','w')
json.dump(basearray, jsonfile)
jsonfile.close()
except:
print('error')
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/text_features/helpers/transcribe.py | features/text_features/helpers/transcribe.py | '''
================================================
## VOICEBOOK REPOSITORY ##
================================================
repository name: voicebook
repository version: 1.0
repository link: https://github.com/jim-schwoebel/voicebook
author: Jim Schwoebel
author contact: js@neurolex.co
description: a book and repo to get you started programming voice applications in Python - 10 chapters and 200+ scripts.
license category: opensource
license: Apache 2.0 license
organization name: NeuroLex Laboratories, Inc.
location: Seattle, WA
website: https://neurolex.ai
release date: 2018-09-28
This code (voicebook) is hereby released under a Apache 2.0 license license.
For more information, check out the license terms below.
================================================
## LICENSE TERMS ##
================================================
Copyright 2018 NeuroLex Laboratories, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
================================================
## SERVICE STATEMENT ##
================================================
If you are using the code written for a larger project, we are
happy to consult with you and help you with deployment. Our team
has >10 world experts in Kafka distributed architectures, microservices
built on top of Node.js / Python / Docker, and applying machine learning to
model speech and text data.
We have helped a wide variety of enterprises - small businesses,
researchers, enterprises, and/or independent developers.
If you would like to work with us let us know @ js@neurolex.co.
================================================
## TRANSCRIBE.PY ##
================================================
Overview of how to implement various transcriptions for offline or
online applications.
Note some of these transcription methods require environment variables
to be setup (e.g. Google).
'''
import os, json, time, datetime
import speech_recognition as sr_audio
def sync_record(filename, duration, fs, channels):
print('recording')
myrecording = sd.rec(int(duration * fs), samplerate=fs, channels=channels)
sd.wait()
sf.write(filename, myrecording, fs)
print('done recording')
def convert_audio(file):
# convert to proper format with FFmpeg shell script
filename=file[0:-4]+'_temp.wav'
command='ffmpeg -i %s -acodec pcm_s16le -ac 1 -ar 16000 %s'%(file,filename)
os.system(command)
return filename
def transcribe_google(file):
# transcribe with google speech API, $0.024/minute
r=sr_audio.Recognizer()
with sr_audio.AudioFile(file) as source:
audio = r.record(source)
transcript=r.recognize_google_cloud(audio)
print('google transcript: '+transcript)
return transcript
# transcribe with pocketsphinx (open-source)
def transcribe_sphinx(file):
r=sr_audio.Recognizer()
with sr_audio.AudioFile(file) as source:
audio = r.record(source)
transcript=r.recognize_sphinx(audio)
print('sphinx transcript: '+transcript)
return transcript
# transcribe with deepspeech (open-source, but can be CPU-intensive)
def transcribe_deepspeech(file):
# get the deepspeech model installed if you don't already have it (1.6 GB model)
# can be computationally-intensive, so make sure it works on your CPU
if 'models' not in os.listdir():
os.system('brew install wget')
os.system('pip3 install deepspeech')
os.system('wget https://github.com/mozilla/DeepSpeech/releases/download/v0.1.1/deepspeech-0.1.1-models.tar.gz')
os.system('tar -xvzf deepspeech-0.1.1-models.tar.gz')
# make intermediate text file and fetch transcript
textfile=file[0:-4]+'.txt'
command='deepspeech models/output_graph.pb %s models/alphabet.txt models/lm.binary models/trie >> %s'%(file,textfile)
os.system(command)
transcript=open(textfile).read()
print('deepspeech transcript: '+transcript)
# remove text file
os.remove(textfile)
return transcript
def transcribe_all(file):
# get transcripts from all methods and store in .json file
filename=convert_audio(file)
try:
google_transcript=transcribe_google(filename)
except:
google_transcript=''
try:
sphinx_transcript=transcribe_sphinx(filename)
except:
sphinx_transcript=''
try:
deepspeech_transcript=transcribe_deepspeech(filename)
except:
deepspeech_transcript=''
os.remove(filename)
# write to .json
jsonfilename=file[0:-4]+'.json'
jsonfile=open(jsonfilename,'w')
data={
'filename': file,
'date': str(datetime.datetime.now()),
'transcripts': {
'google':google_transcript,
'sphinx':sphinx_transcript,
'deepspeech':deepspeech_transcript}
}
json.dump(data,jsonfile)
return jsonfilename
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/text_features/helpers/blabla/setup.py | features/text_features/helpers/blabla/setup.py | from setuptools import setup
setup(
name="blabla",
version="0.2.2",
description="Novoic linguistics feature extraction package.",
url="http://github.com/novoic/BlaBla",
author="Abhishek Shivkumar",
author_email="abhishek@novoic.com",
license="GPL-3.0",
packages=[
"blabla",
"blabla.sentence_processor",
"blabla.utils",
"blabla.language_resources",
"blabla.sentence_aggregators",
],
keywords=[
"feature-extraction",
"language",
"machine-learning",
"text-processing",
"python",
"natural-language-processing",
"nlp",
"stanza",
"dementia",
"alzheimers-disease",
"parkinsons-disease",
],
download_url="https://github.com/novoic/blabla/archive/v0.2.2.tar.gz",
install_requires=[
"stanza>=1.1.0",
"flask==1.1.2",
"jsonpickle==1.4",
"anytree==2.8.0",
"nltk==3.5",
"ipython==7.13.0",
"jsonschema==3.2.0",
"pyyaml==5.3.1",
"pandas==1.0.3",
"tqdm==4.46.0"
],
package_data={"blabla.language_resources": ["*.txt"]},
include_package_data=True,
scripts=["bin/blabla"],
zip_safe=False,
classifiers=[
'Development Status :: 4 - Beta',
'Intended Audience :: Healthcare Industry',
'Topic :: Scientific/Engineering :: Bio-Informatics',
'License :: OSI Approved :: GNU General Public License v3 (GPLv3)',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
],
)
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/text_features/helpers/blabla/docs/conf.py | features/text_features/helpers/blabla/docs/conf.py | # Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
from unittest.mock import MagicMock
sys.path.insert(0, os.path.abspath("."))
master_doc = "index"
# Needed to not sort alphabetically.
autodoc_member_order = "bysource"
# -- Project information -----------------------------------------------------
project = "BlaBla"
copyright = "2020, Abhishek Shivkumar"
author = "Abhishek Shivkumar"
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ["sphinx.ext.autodoc", "sphinx.ext.coverage", "sphinx.ext.napoleon"]
# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = "sphinx_rtd_theme"
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"]
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | false |
jim-schwoebel/allie | https://github.com/jim-schwoebel/allie/blob/b89f1403f63033ad406d0606b7c7a45000b43481/features/text_features/helpers/blabla/blabla/document_engine.py | features/text_features/helpers/blabla/blabla/document_engine.py | import traceback
from blabla.sentence_aggregators.phonetic_and_phonological_feature_aggregator import (
phonetic_and_phonological_feature_processor,
)
from blabla.sentence_aggregators.lexico_semantic_fearture_aggregator import (
lexico_semantic_feature_processor,
)
from blabla.sentence_aggregators.morpho_syntactic_feature_aggregator import (
morpho_syntactic_feature_processor,
)
from blabla.sentence_aggregators.syntactic_feature_aggregator import (
syntactic_feature_processor,
)
from blabla.sentence_aggregators.discourse_and_pragmatic_feature_aggregator import (
discourse_and_pragmatic_feature_processor,
)
import blabla.utils.settings as settings
from blabla.utils.exceptions import *
class Document(object):
"""This class represents the Document Engine class that defines all the features
"""
def __init__(self, lang, nlp, client):
self.lang = lang
self.nlp = nlp
self.client = client
self.sentence_objs = []
self.CONST_PT_SUPPORTED_LANGUAGES = ['ar', 'de', 'en', 'es', 'fr', 'zh-hant']
def validate_features_list(self, feature_list):
"""Compute features
Args:
feature_list (str): A list of features to be extracted
Returns:
dict: A dictionary of features and their values
"""
for feature in feature_list:
if feature in settings.CONST_PT_FEATURES:
if self.lang not in self.CONST_PT_SUPPORTED_LANGUAGES:
raise InvalidFeatureException(
'You have reqquested for a feature {} in language {} which are not compatible. Please check the FEATURED.md list'.format(
feature, self.lang
)
)
def compute_features(self, feature_list, **kwargs):
"""Compute features
Args:
feature_list (list of str): A list of features to be extracted
Returns:
dict: A dictionary of features and their values
"""
self.validate_features_list(feature_list)
for sentence_obj in self.sentence_objs:
sentence_obj.setup_dep_pt()
if any(feature in settings.CONST_PT_FEATURES for feature in feature_list):
sentence_obj.setup_const_pt()
sentence_obj.setup_yngve_tree()
features = {}
for feature_name in feature_list:
try:
method_to_call = getattr(self, feature_name)
result = method_to_call(**kwargs)
features[feature_name] = result
except AttributeError as e:
raise InvalidFeatureException(
f'Please check the feature name. Feature name {feature_name} is invalid'
)
return features
def _extract_phonetic_and_phonological_features(self, *features, **kwargs):
"""Extract phonetic and phonological features across all sentence objects
Args:
features (list): The list of features to be extracted
kwargs (list): Optional arguments for threshold values
Returns:
dict: The dictionary containing the different features
"""
features_dict = {}
for feature in features:
features_dict[feature] = phonetic_and_phonological_feature_processor(
self.sentence_objs, feature, **kwargs
)
return features_dict
def _extract_lexico_semantic_features(self, *features, **kwargs):
"""Extract lexico semantic features across all sentence objects
Args:
features (list): The list of features to be extracted
kwargs (list): Optional arguments for threshold values
Returns:
dict: The dictionary containing the different features
"""
features_dict = {}
for feature in features:
features_dict[feature] = lexico_semantic_feature_processor(
self.sentence_objs, feature, **kwargs
)
return features_dict
def _extract_morpho_syntactic_features(self, *features, **kwargs):
"""Extract morpho syntactic features across all sentence objects
Args:
features (list): The list of features to be extracted
kwargs (list): Optional arguments for threshold values
Returns:
dict: The dictionary containing the different features
"""
features_dict = {}
for feature in features:
features_dict[feature] = morpho_syntactic_feature_processor(
self.sentence_objs, feature, **kwargs
)
return features_dict
def _extract_syntactic_features(self, *features, **kwargs):
"""Extract syntactic features across all sentence objects
Args:
features (list): The list of features to be extracted
kwargs (list): Optional arguments for threshold values
Returns:
dict: The dictionary containing the different features
"""
features_dict = {}
for feature in features:
features_dict[feature] = syntactic_feature_processor(
self.sentence_objs, feature, **kwargs
)
return features_dict
def _extract_discourse_and_pragmatic_feature_processor(self, *features, **kwargs):
"""Extract discourse and pragmatic features across all sentence objects
Args:
features (list): The list of features to be extracted
kwargs (list): Optional arguments for threshold values
Returns:
dict: The dictionary containing the different features
"""
features_dict = {}
for feature in features:
features_dict[feature] = discourse_and_pragmatic_feature_processor(
self.sentence_objs, feature, **kwargs
)
return features_dict
def num_pauses(self, **kwargs):
"""Method to extract the number of pauses
Args:
kwargs (list): Optional arguments for threshold values
Returns:
int: The number of pauses
"""
return self._extract_phonetic_and_phonological_features('num_pauses', **kwargs)[
'num_pauses'
]
def total_pause_time(self, **kwargs):
"""Method to extract the total pause time
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The total pause time
"""
return self._extract_phonetic_and_phonological_features(
'total_pause_time', **kwargs
)['total_pause_time']
def mean_pause_duration(self, **kwargs):
"""Method to extract the mean pause duration
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The mean pause duration
"""
return self._extract_phonetic_and_phonological_features(
'mean_pause_duration', **kwargs
)['mean_pause_duration']
def between_utterance_pause_duration(self, **kwargs):
"""Method to extract the between utterance pause duration
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The between utterance pause duration
"""
return self._extract_phonetic_and_phonological_features(
'between_utterance_pause_duration', **kwargs
)['between_utterance_pause_duration']
def hesitation_ratio(self, **kwargs):
"""Method to extract the hesitation ratio
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The hesitation ratio
"""
return self._extract_phonetic_and_phonological_features(
'hesitation_ratio', **kwargs
)['hesitation_ratio']
def speech_rate(self, **kwargs):
"""Method to extract the speech rate
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The speech rate
"""
return self._extract_phonetic_and_phonological_features('speech_rate', **kwargs)[
'speech_rate'
]
def maximum_speech_rate(self, **kwargs):
"""Method to extract the maximum speech rate
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The maximum speech rate
"""
return self._extract_phonetic_and_phonological_features(
'maximum_speech_rate', **kwargs
)['maximum_speech_rate']
def total_phonation_time(self, **kwargs):
"""Method to extract the total phonation time
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The total phonation time
"""
return self._extract_phonetic_and_phonological_features(
'total_phonation_time', **kwargs
)['total_phonation_time']
def std_phonation_time(self, **kwargs):
"""Method to extract the standardized phonation time
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The standardized phonation time
"""
return self._extract_phonetic_and_phonological_features(
'standardized_phonation_time', **kwargs
)['standardized_phonation_time']
def total_locution_time(self, **kwargs):
"""Method to extract the total locution time
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The total locution time
"""
return self._extract_phonetic_and_phonological_features(
'total_locution_time', **kwargs
)['total_locution_time']
def adjective_rate(self, **kwargs):
"""Extract the adjective rate.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
The adjective rate across all sentence objects
"""
return self._extract_lexico_semantic_features('adjective_rate', **kwargs)['adjective_rate']
def adposition_rate(self, **kwargs):
"""Extract the adposition rate.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
The adposition rate across all sentence objects
"""
return self._extract_lexico_semantic_features('adposition_rate', **kwargs)['adposition_rate']
def adverb_rate(self, **kwargs):
"""Extract the adverb rate.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
The adverb rate across all sentence objects
"""
return self._extract_lexico_semantic_features('adverb_rate', **kwargs)['adverb_rate']
def auxiliary_rate(self, **kwargs):
"""Extract the auxiliary rate.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
The auxiliary rate across all sentence objects
"""
return self._extract_lexico_semantic_features('auxiliary_rate', **kwargs)['auxiliary_rate']
def determiner_rate(self, **kwargs):
"""Extract the determiner rate.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
The determiner rate across all sentence objects
"""
return self._extract_lexico_semantic_features('determiner_rate', **kwargs)['determiner_rate']
def interjection_rate(self, **kwargs):
"""Extract the interjection rate.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
The interjection rate across all sentence objects
"""
return self._extract_lexico_semantic_features('interjection_rate', **kwargs)['interjection_rate']
def noun_rate(self, **kwargs):
"""Extract the noun rate.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
The noun rate across all sentence objects
"""
return self._extract_lexico_semantic_features('noun_rate', **kwargs)['noun_rate']
def numeral_rate(self, **kwargs):
"""Extract the numeral rate.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
The numeral rate across all sentence objects
"""
return self._extract_lexico_semantic_features('numeral_rate', **kwargs)['numeral_rate']
def particle_rate(self, **kwargs):
"""Extract the particle rate.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
The particle rate across all sentence objects
"""
return self._extract_lexico_semantic_features('particle_rate', **kwargs)['particle_rate']
def pronoun_rate(self, **kwargs):
"""Extract the pronoun rate.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
The pronoun rate across all sentence objects
"""
return self._extract_lexico_semantic_features('pronoun_rate', **kwargs)['pronoun_rate']
def proper_noun_rate(self, **kwargs):
"""Extract the proper_noun rate.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
The proper_noun rate across all sentence objects
"""
return self._extract_lexico_semantic_features('proper_noun_rate', **kwargs)['proper_noun_rate']
def punctuation_rate(self, **kwargs):
"""Extract the punctuation rate.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
The punctuation rate across all sentence objects
"""
return self._extract_lexico_semantic_features('punctuation_rate', **kwargs)['punctuation_rate']
def subordinating_conjunction_rate(self, **kwargs):
"""Extract the subordinating_conjunction rate.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
The subordinating_conjunction rate across all sentence objects
"""
return self._extract_lexico_semantic_features('subordinating_conjunction_rate', **kwargs)['subordinating_conjunction_rate']
def symbol_rate(self, **kwargs):
"""Extract the symbol rate.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
The symbol rate across all sentence objects
"""
return self._extract_lexico_semantic_features('symbol_rate', **kwargs)['symbol_rate']
def verb_rate(self, **kwargs):
"""Extract the verb rate.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
flaot: The verb rate across all sentence objects
"""
return self._extract_lexico_semantic_features('verb_rate', **kwargs)['verb_rate']
def demonstrative_rate(self, **kwargs):
"""Extract the demonstrative rate
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The demonstrative rate across all sentence objects
"""
return self._extract_lexico_semantic_features('demonstrative_rate', **kwargs)[
'demonstrative_rate'
]
def conjunction_rate(self, **kwargs):
"""Extract the conjunction rate.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The conjunction rate across all sentence objects
"""
return self._extract_lexico_semantic_features('conjunction_rate', **kwargs)[
'conjunction_rate'
]
def possessive_rate(self, **kwargs):
"""Extract the possesive rate.
Ref: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3642700/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The possesive rate across all sentence objects
"""
return self._extract_lexico_semantic_features('possessive_rate', **kwargs)[
'possessive_rate'
]
def noun_verb_ratio(self, **kwargs):
"""Extract the noun to verb rate.
Ref: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5337522/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The noun to verb rate across all sentence objects
"""
return self._extract_lexico_semantic_features('noun_verb_ratio', **kwargs)[
'noun_verb_ratio'
]
def noun_ratio(self, **kwargs):
"""Extract the noun ratio.
Ref:https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5337522/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The noun ratio across all sentence objects
"""
return self._extract_lexico_semantic_features('noun_ratio', **kwargs)[
'noun_ratio'
]
def pronoun_noun_ratio(self, **kwargs):
"""Extract the pronoun to noun ratio.
Ref: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5337522/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The pronoun to noun ratio across all sentence objects
"""
return self._extract_lexico_semantic_features('pronoun_noun_ratio', **kwargs)[
'pronoun_noun_ratio'
]
def total_dependency_distance(self, **kwargs):
"""Extract the total dependency distance.
Ref: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5337522/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The total dependency distance across all sentence objects
"""
return self._extract_lexico_semantic_features('total_dependency_distance', **kwargs)[
'total_dependency_distance'
]
def average_dependency_distance(self, **kwargs):
"""Extract the average dependency distance.
Ref: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5337522/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The average dependency distance across all sentence objects
"""
return self._extract_lexico_semantic_features('average_dependency_distance', **kwargs)[
'average_dependency_distance'
]
def total_dependencies(self, **kwargs):
"""Extract the number of unique dependency relations.
Ref: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5337522/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The total number of unique dependencies across all sentence objects
"""
return self._extract_lexico_semantic_features('total_dependencies', **kwargs)[
'total_dependencies'
]
def average_dependencies(self, **kwargs):
"""Extract the average number of unique dependency relations.
Ref: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5337522/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The average number of unique dependencies across all sentence objects
"""
return self._extract_lexico_semantic_features('average_dependencies', **kwargs)[
'average_dependencies'
]
def closed_class_word_rate(self, **kwargs):
"""Extract the proportion of closed class words.
Ref: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5337522/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The proportion of closed class words across all sentence objects
"""
return self._extract_lexico_semantic_features(
'closed_class_word_rate', **kwargs
)['closed_class_word_rate']
def open_class_word_rate(self, **kwargs):
"""Extract the prooportion of open class words.
Ref: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5337522/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The prooportion of open class words across all sentence objects
"""
return self._extract_lexico_semantic_features('open_class_word_rate', **kwargs)[
'open_class_word_rate'
]
def content_density(self, **kwargs):
"""Extract the content density
Ref: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5337522/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The content density across all sentence objects
"""
return self._extract_lexico_semantic_features('content_density', **kwargs)[
'content_density'
]
def idea_density(self, **kwargs):
"""Extract the idea density
Ref: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5337522/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The idea density across all sentence objects
"""
return self._extract_lexico_semantic_features('idea_density', **kwargs)[
'idea_density'
]
def honore_statistic(self, **kwargs):
"""Extract the honore statistic
Ref: https://www.aclweb.org/anthology/W16-1902.pdf
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The honore statistic across all sentence objects
"""
return self._extract_lexico_semantic_features('honore_statistic', **kwargs)[
'honore_statistic'
]
def brunet_index(self, **kwargs):
"""Extract the brunet's index.
Ref: https://www.aclweb.org/anthology/W16-1902.pdf
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The brunet's index across all sentence objects
"""
return self._extract_lexico_semantic_features('brunet_index', **kwargs)[
'brunet_index'
]
def type_token_ratio(self, **kwargs):
"""Extract the type to token ratio.
Ref https://www.tandfonline.com/doi/abs/10.1080/02687038.2017.1303441
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The type to token ratio across all sentence objects
"""
return self._extract_lexico_semantic_features('type_token_ratio', **kwargs)[
'type_token_ratio'
]
def word_length(self, **kwargs):
"""Extract the mean word length.
Ref: https://pubmed.ncbi.nlm.nih.gov/26484921/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The mean word length all sentence objects
"""
return self._extract_lexico_semantic_features('word_length', **kwargs)[
'word_length'
]
def prop_inflected_verbs(self, **kwargs):
"""Extract proportion of inflected verbs.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The proportion of inflected verbs across all sentence objects
"""
return self._extract_morpho_syntactic_features("prop_inflected_verbs", **kwargs)[
"prop_inflected_verbs"
]
def prop_auxiliary_verbs(self, **kwargs):
"""Extract the proportion of auxiliary verbs
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The proportion of auxiliary verbs across all sentence objects
"""
return self._extract_morpho_syntactic_features('prop_auxiliary_verbs', **kwargs)[
'prop_auxiliary_verbs'
]
def prop_gerund_verbs(self, **kwargs):
"""Extract the proportion of gerund verbs.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The the proportion of gerund verbs across all sentence objects
"""
return self._extract_morpho_syntactic_features('prop_gerund_verbs', **kwargs)[
'prop_gerund_verbs'
]
def prop_participles(self, **kwargs):
"""Extract the proportion of participle verbs.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The proportion of participle verbs across all sentence objects
"""
return self._extract_morpho_syntactic_features('prop_participles', **kwargs)[
'prop_participles'
]
def num_noun_phrases(self, **kwargs):
"""Extract the number of noun phrases.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The number of noun phrases across all sentence objects
"""
return self._extract_syntactic_features('num_noun_phrases', **kwargs)[
'num_noun_phrases'
]
def noun_phrase_rate(self, **kwargs):
"""Extract the number of noun phrases.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The noun phrase rate across all sentence objects
"""
return self._extract_syntactic_features('noun_phrase_rate', **kwargs)[
'noun_phrase_rate'
]
def num_verb_phrases(self, **kwargs):
"""Extract the number of verb phrases.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The number of verb phrases across all sentence objects
"""
return self._extract_syntactic_features('num_verb_phrases')['num_verb_phrases']
def verb_phrase_rate(self, **kwargs):
"""Extract the number of noun phrases.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The noun phrase rate across all sentence objects
"""
return self._extract_syntactic_features('verb_phrase_rate', **kwargs)[
'verb_phrase_rate'
]
def num_prepositional_phrases(self, **kwargs):
"""Extract the number of prepositional phrases.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The number of prepositional phrases across all sentence objects
"""
return self._extract_syntactic_features('num_prepositional_phrases', **kwargs)[
'num_prepositional_phrases'
]
def prepositional_phrase_rate(self, **kwargs):
"""Extract the number of prepositional phrases.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The prepositional phrases rate across all sentence objects
"""
return self._extract_syntactic_features('prepositional_phrase_rate', **kwargs)[
'prepositional_phrase_rate'
]
def num_clauses(self, **kwargs):
"""Extract the number of clauses.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The number of clauses across all sentence objects
"""
return self._extract_syntactic_features('num_clauses', **kwargs)['num_clauses']
def clause_rate(self, **kwargs):
"""Extract the number of clauses.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The number of clauses across all sentence objects
"""
return self._extract_syntactic_features('clause_rate', **kwargs)['clause_rate']
def num_infinitive_phrases(self, **kwargs):
"""Extract the number of infinitive phrases.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The number of infinitive phrases across all sentence objects
"""
return self._extract_syntactic_features('num_infinitive_phrases', **kwargs)[
'num_infinitive_phrases'
]
def infinitive_phrase_rate(self, **kwargs):
"""Extract the number of infinitive phrases.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The infinitive phrases rate across all sentence objects
"""
return self._extract_syntactic_features('infinitive_phrase_rate', **kwargs)[
'infinitive_phrase_rate'
]
def num_dependent_clauses(self, **kwargs):
"""Extract the number of dependent clauses.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
kwargs (list): Optional arguments for threshold values
Returns:
float: The number of dependent clauses across all sentence objects
"""
return self._extract_syntactic_features('num_dependent_clauses', **kwargs)[
'num_dependent_clauses'
]
def dependent_clause_rate(self, **kwargs):
"""Extract the dependent clauses rate.
Ref: https://pubmed.ncbi.nlm.nih.gov/28321196/
Args:
| python | Apache-2.0 | b89f1403f63033ad406d0606b7c7a45000b43481 | 2026-01-05T07:09:07.495102Z | true |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.