Apply the pipeline
```python
import chardet

doc_annotations = dict()
note_count = 0  # count the number of text notes to process

for i in files[:]:
    if ".txt" in i:
        doc_file = os.path.join(path, i)
        #note_count = note_count + 1
        #if note_count > 2:  # count the number of text notes
        ...
```
Apache-2.0
dimer_pipe.ipynb
asy1113/NLP_PE
Append annotation dataframes to annotation files
```python
for k in doc_annotations:  # dict of annotations
    k0 = k.split('.')[0]
    k1 = k0 + '.nlp'
    nlp_ann = ''
    for doc_ann in doc_annotations[k]:  # doc_ann is a line of mention annotation in the doc
        nlp_ann = nlp_ann + doc_ann.ann_id + '\t'
        nlp_ann = nlp_ann + doc_an...
```
doc classification
```python
doc_cls_results = {}
for k in doc_annotations:  # dict of annotations
    doc_cls_results[k] = 'low_ddimer'
    for doc_ann in doc_annotations[k]:
        if doc_ann.type == 'high_ddimer':
            doc_cls_results[k] = 'high_ddimer'

for k in doc_cls_results:
    print(k, '-----', doc_cls_results[k])
```
```
90688_292.txt ----- high_ddimer
65675_64.txt ----- high_ddimer
48640_63.txt ----- low_ddimer
86087_123.txt ----- high_ddimer
83838_106.txt ----- high_ddimer
72554_306.txt ----- high_ddimer
15899_182.txt ----- high_ddimer
13867_266.txt ----- high_ddimer
61180_73.txt ----- high_ddimer
32113_141.txt ----- high_ddimer
5938...
```
Select one doc annotation for mention level evaluation
```python
k = '90688_292.txt'
ann1 = doc_annotations[k]
print(ann1)
```
```
[<pipeUtils.Annotation object at 0x7fa074ff0ef0>, <pipeUtils.Annotation object at 0x7fa074ebe7b8>]
```
read annotation and convert to dataframe
```python
import numpy as np
import pandas as pd

nlp_list = []
for a in ann1:
    list1 = [a.ann_id, a.type, a.start_index, a.end_index, a.spanned_text]
    nlp_list.append(list1)

nlp_df = pd.DataFrame(nlp_list, columns=['markup_id', 'type', 'start', 'end', 'txt'])
nlp_df
```
convert df to annotation object, compare two annotations
```python
def df2ann(df=[], pdoc_type='', ndoc_type=''):
    from pipeUtils import Annotation
    from pipeUtils import Document
    Annotations1 = []
    for index, row in df.iterrows():
        if (pdoc_type == row['type'] or ndoc_type == row['type']):
            continue
        ann_obj = Annotation(...
```
Convert d-dimer nlp to annotation obj
```python
Annotations2 = df2ann(df=nlp_df, pdoc_type='positive_DOC', ndoc_type='negative_DOC')
for a in Annotations2:
    print(a.ann_id, a.type, a.start_index, a.end_index, a.spanned_text)
```
```
NLP_0 high_ddimer 398 414 elevated d-dimer
NLP_1 high_ddimer 2023 2039 elevated d-dimer
```
read manual ref_ann and convert to df
```python
import os
import numpy as np
import pandas as pd

unid = 'u0496358'
path = "/home/" + str(unid) + "/BRAT/" + str(unid) + "/Project_pe_test"
ann_file = '90688_292.ann'

# read ann and convert to df
annoList = []
with open(os.path.join(path, ann_file)) as f:
    ann_file = f.read()
ann_file = ann_file.split('\n')
for line in ann_...
```
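The `.ann` file read above is in BRAT standoff format, where each text-bound annotation is a tab-separated line such as `T1<TAB>high_ddimer 398 414<TAB>elevated d-dimer`. As a minimal, stdlib-only sketch of the parsing step (the helper name `parse_brat_lines` is hypothetical, and discontinuous spans joined with `;` are not handled):

```python
def parse_brat_lines(text):
    # Parse BRAT standoff "T" (text-bound) lines into rows of
    # [markup_id, type, start, end, txt]; skip events, attributes, and notes.
    rows = []
    for line in text.splitlines():
        if not line.startswith('T'):
            continue
        ann_id, type_and_span, spanned_text = line.split('\t')
        ann_type, start, end = type_and_span.split(' ')
        rows.append([ann_id, ann_type, int(start), int(end), spanned_text])
    return rows

sample = "T1\thigh_ddimer 398 414\televated d-dimer"
parse_brat_lines(sample)
# -> [['T1', 'high_ddimer', 398, 414, 'elevated d-dimer']]
```

The rows can then be passed to `pd.DataFrame(rows, columns=['markup_id', 'type', 'start', 'end', 'txt'])` to match the dataframe layout used for the NLP annotations.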
convert manual ref_ann df to annotation obj
```python
Annotations3 = df2ann(ann_df, pdoc_type='positive_DOC', ndoc_type='negative_DOC')
for a in Annotations3:
    print(a.ann_id, a.type, a.start_index, a.end_index, a.spanned_text)
```
Mention Level Evaluation
```python
tp, fp, fn, tp_list, fp_list, fn_list = compare2ann_types_by_span(
    ref_ann=Annotations2, sys_ann=Annotations3,
    ref_type='high_ddimer', sys_type='high_ddimer', exact=True)

print("tp, fp, fn")
print(tp, fp, fn)
print("-----fn_list-----")
for i in fn_list:
    print(i.ann_id, i.start_index, i.end_index)
```
```
tp, fp, fn
0 0 2
-----fn_list-----
NLP_0 398 414
NLP_1 2023 2039
```
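The mention-level metrics above can be sketched in plain Python. This is a hypothetical re-implementation of what a span-based comparison such as `compare2ann_types_by_span` might do, not the actual `pipeUtils` code:

```python
from collections import namedtuple

# Minimal stand-in for the Annotation objects used above (hypothetical).
Ann = namedtuple('Ann', ['ann_id', 'type', 'start', 'end'])

def compare_by_span(ref_ann, sys_ann, ref_type, sys_type, exact=True):
    # TP: a reference mention matched by an unused system mention on the same
    # (or overlapping) span; FN: unmatched references; FP: unmatched system mentions.
    refs = [a for a in ref_ann if a.type == ref_type]
    syss = [a for a in sys_ann if a.type == sys_type]
    matched = set()
    tp_list, fn_list = [], []
    for r in refs:
        hit = None
        for s in syss:
            if s.ann_id in matched:
                continue
            same = (s.start, s.end) == (r.start, r.end)
            overlap = s.start < r.end and r.start < s.end
            if (same if exact else overlap):
                hit = s
                break
        if hit is not None:
            matched.add(hit.ann_id)
            tp_list.append(r)
        else:
            fn_list.append(r)
    fp_list = [s for s in syss if s.ann_id not in matched]
    return len(tp_list), len(fp_list), len(fn_list), tp_list, fp_list, fn_list

ref = [Ann('NLP_0', 'high_ddimer', 398, 414), Ann('NLP_1', 'high_ddimer', 2023, 2039)]
hyp = [Ann('T1', 'high_ddimer', 398, 414)]
tp, fp, fn, _, _, fn_list = compare_by_span(ref, hyp, 'high_ddimer', 'high_ddimer')
# tp=1, fp=0, fn=1: the second reference mention has no matching system span
```

With `exact=False`, any character overlap counts as a match, which is a common relaxation for mention-level evaluation.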
Object recognition with deep learning

Load preliminary packages and launch a Spark connection
```python
from pyspark.sql import SparkSession
from pyspark.ml import feature
from pyspark.ml import classification
from pyspark.sql import functions as fn
from pyspark.ml import Pipeline
from pyspark.ml.evaluation import BinaryClassificationEvaluator, \
    MulticlassClassificationEvaluator, \
    RegressionEvaluator
from pyspa...
```
BSD-4-Clause-UC
notebooks/.ipynb_checkpoints/deep_learning_object_recognition-checkpoint.ipynb
daniel-acuna/ist718
Read the data
```python
caltech101_df = spark.read.parquet('datasets/caltech101_60_40_ubyte.parquet')
```
The dataframe contains images from the [Caltech 101 dataset](https://www.vision.caltech.edu/Image_Datasets/Caltech101/). These images have been downsized and transformed to gray scale by the professor. Let's look at the content of the dataset:
```python
caltech101_df.printSchema()
```
```
root
 |-- category: string (nullable = true)
 |-- filename: string (nullable = true)
 |-- raw_pixels: vector (nullable = true)
```
The column `raw_pixels` contains a flattened version of the 60 by 40 images. The dataset has 101 categories plus 1 distracting category:
```python
caltech101_df.select(fn.countDistinct('category')).show()
```
```
+------------------------+
|count(DISTINCT category)|
+------------------------+
|                     102|
+------------------------+
```
The number of examples for each category is not balanced. For example, the _airplanes_ and _motorbikes_ categories have nearly 10 times more examples than _kangaroo_ and _starfish_:
```python
caltech101_df.groupby('category').agg(fn.count('*').alias('n_images')).orderBy(fn.desc('n_images')).show()
```
```
+-----------------+--------+
|         category|n_images|
+-----------------+--------+
|        airplanes|     800|
|       Motorbikes|     798|
|BACKGROUND_Google|     468|
|       Faces_easy|     435|
|            Faces|     435|
|            watch|     239|
|         Leopards|     200|
|           bonsai|     128|
|...
```
We will use the helper function `display_first_as_img` to display some images on the notebook. This function takes the first row of the dataframe and displays the `raw_pixels` as an image.
```python
display_first_as_img(caltech101_df.where(fn.col('category') == "Motorbikes").sample(True, 0.1))
display_first_as_img(caltech101_df.where(fn.col('category') == "Faces_easy").sample(True, 0.5))
display_first_as_img(caltech101_df.where(fn.col('category') == "airplanes").sample(True, 1.))
```
Multilayer perceptron in SparkML

In this homework, we will use the multilayer perceptron as a learning model. Our idea is to take the raw pixels of an image and predict the category of such image. This is therefore a classification problem. A multilayer perceptron for classification is [available in Spark ML](http://s...
```python
training_df, validation_df, testing_df = caltech101_df.\
    where(fn.col('category').isin(['airplanes', 'Faces_easy', 'Motorbikes'])).\
    randomSplit([0.6, 0.2, 0.2], seed=0)

[training_df.count(), validation_df.count(), testing_df.count()]
```
It is important to check the distribution of categories in the validation and testing dataframes to see if they have a similar distribution as the training dataset:
```python
validation_df.groupBy('category').agg(fn.count('*')).show()
testing_df.groupBy('category').agg(fn.count('*')).show()
```
```
+----------+--------+
|  category|count(1)|
+----------+--------+
|Motorbikes|     152|
|Faces_easy|      87|
| airplanes|     166|
+----------+--------+

+----------+--------+
|  category|count(1)|
+----------+--------+
|Motorbikes|     151|
|Faces_easy|      83|
| airplanes|     163|
+----------+--------+
```
They are similar.

Transforming string labels into numerical labels

To use a multilayer perceptron, we first need to transform the input data to be fitted by Spark ML. One of the things that Spark ML needs is a numerical representation of the category (`label` as they call it). In our case, we only have a string repres...
```python
from pyspark.ml import feature

category_to_number_model = feature.StringIndexer(inputCol='category', outputCol='label').\
    fit(training_df)
```
Now, we can see how it transforms the category into a `label`:
```python
category_to_number_model.transform(training_df).show()
```
```
+----------+--------------+--------------------+-----+
|  category|      filename|          raw_pixels|label|
+----------+--------------+--------------------+-----+
|Faces_easy|image_0013.jpg|[254.0,221.0,110....|  2.0|
|Faces_easy|image_0034.jpg|[82.0,79.0,80.0,7...|  2.0|
|Faces_easy|image_0091.jpg|[125.0,125.0,125.....
```
These are the categories found by the estimator:
```python
list(enumerate(category_to_number_model.labels))
```
Multi-layer perceptron

The multi-layer perceptron will take the inputs as a flattened list of pixels and it will have three output neurons, each representing a label:
```python
from pyspark.ml import classification

mlp = classification.MultilayerPerceptronClassifier(seed=0).\
    setStepSize(0.2).\
    setMaxIter(200).\
    setFeaturesCol('raw_pixels')
```
The parameter `stepSize` is the learning rate for stochastic gradient descent, and we set the maximum number of stochastic gradient descent iterations to 200. Now, to define the layers, the multilayer perceptron needs to receive the number of neurons of each intermediate layer (hidden layers). As we saw in class, however, if we don...
```python
mlp = mlp.setLayers([60*40, 3])
```
Now, we are ready to fit this simple multi-class logistic regression to the data. We need to create a pipeline that will take the training data, transform the category column, and apply the perceptron.
```python
from pyspark.ml import Pipeline

mlp_simple_model = Pipeline(stages=[category_to_number_model, mlp]).fit(training_df)
```
Now we can apply the model to the validation data to do some simple tests:
```python
mlp_simple_model.transform(validation_df).show(10)
```
```
+----------+--------------+--------------------+-----+--------------------+--------------------+----------+
|  category|      filename|          raw_pixels|label|       rawPrediction|         probability|prediction|
+----------+--------------+--------------------+-----+--------------------+--------------------+--------...
```
As we can see, the label and prediction mostly coincide. We can be more systematic by computing the accuracy of the prediction:
```python
mlp_simple_model.transform(validation_df).select(fn.expr('avg(float(label=prediction))').alias('accuracy')).show()
```
```
+------------------+
|          accuracy|
+------------------+
|0.7728395061728395|
+------------------+
```
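The SQL expression `avg(float(label=prediction))` is just the mean of an indicator variable. A minimal pure-Python equivalent of the same computation:

```python
def accuracy(labels, predictions):
    # fraction of positions where label == prediction
    return sum(l == p for l, p in zip(labels, predictions)) / len(labels)

accuracy([0.0, 1.0, 2.0, 2.0], [0.0, 1.0, 1.0, 2.0])  # 3 of 4 correct -> 0.75
```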
Alternatively, we can take advantage of the [evaluators shipped with Spark ML](https://spark.apache.org/docs/latest/mllib-evaluation-metrics.html) as follows:
```python
from pyspark.ml import evaluation

evaluator = evaluation.MulticlassClassificationEvaluator(metricName="accuracy")
evaluator.evaluate(mlp_simple_model.transform(validation_df))
```
Which gives us the same number as before.

More layers (warning: this will take a long time)

Now, let's see if we add one hidden layer, which adds non-linearity and interactions to the input. The definition will be similar, but we need to define how many hidden layers and how many neurons per hidden layer. This is a fie...
```python
mlp2 = classification.MultilayerPerceptronClassifier(seed=0).\
    setStepSize(0.2).\
    setMaxIter(200).\
    setFeaturesCol('raw_pixels').\
    setLayers([60*40, 100, 3])
```
Now, __fitting this model will take significantly more time because we are adding 2 orders of magnitude more parameters than the previous model__:
```python
mlp2_model = Pipeline(stages=[category_to_number_model, mlp2]).fit(training_df)
```
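A quick way to see where the extra cost comes from is to count the weights and biases between consecutive layers (a back-of-the-envelope check; Spark's exact internal layout may differ slightly):

```python
def mlp_param_count(layers):
    # For each pair of consecutive layers: n_in * n_out weights plus n_out biases.
    return sum(n_in * n_out + n_out for n_in, n_out in zip(layers, layers[1:]))

mlp_param_count([60*40, 3])       # logistic-regression-like model: 7,203 parameters
mlp_param_count([60*40, 100, 3])  # one hidden layer of 100 neurons: 240,403 parameters
```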
But the model is significantly more powerful and fits the data better:
```python
evaluator.evaluate(mlp2_model.transform(validation_df))
```
Complexity of the model with more data (warning: this will take a long time)

We will evaluate how the multilayer perceptron learns. Let's evaluate how the performance changes with 1) the amount of training data and 2) the number of neurons in the hidden layer.
```python
evaluation_info = []
for training_size in [0.1, 0.5, 1.]:
    for n_neurons in [1, 3, 10, 20]:
        print("Training size: ", training_size, "; # Neurons: ", n_neurons)
        training_sample_df = training_df.sample(False, training_size, seed=0)
        mlp_template = classification.MultilayerPerceptronClassifier(s...
```
You will try to understand some trends based on these numbers and plots:
```python
import pandas as pd

evaluation_df = pd.DataFrame(evaluation_info)
evaluation_df

for training_size in sorted(evaluation_df.training_size.unique()):
    fig, ax = plt.subplots(1, 1);
    evaluation_df.query('training_size == ' + str(training_size)).groupby(['dataset']).\
        plot(x='n_neurons', y='accuracy', ax=ax);
    ...
```
Another way to look at it is by varying the training size
```python
for n_neurons in sorted(evaluation_df.n_neurons.unique()):
    fig, ax = plt.subplots(1, 1);
    evaluation_df.query('n_neurons == ' + str(n_neurons)).groupby(['dataset']).\
        plot(x='training_size', y='accuracy', ax=ax);
    plt.legend(['training', 'validation'], loc='upper left');
    plt.title('# Neurons: ' + ...
```
Predicting

We will load an image from the Internet:
```python
from PIL import Image
import requests
from io import BytesIO

# response = requests.get("http://images.all-free-download.com/images/graphicthumb/airplane_311727.jpg")
# response = requests.get("https://www.tugraz.at/uploads/pics/Alexander_by_Kanizaj_02.jpg")  # face
response = requests.get("https://www.sciencenewsforstu...
```
We need to transform this image to grayscale and shrink it to the size that the neural network expects. We will perform these steps using several packages.

Transform to grayscale:
```python
# convert to grayscale
gray_img = np.array(img.convert('P'))
plt.imshow(255-gray_img, 'gray');
```
Shrink it to 60 by 40:
```python
shrinked_img = np.array((img.resize([40, 60]).convert('P')))
plt.imshow(shrinked_img, 'gray');
```
Flatten it and put it in a Spark dataframe:
```python
# from pyspark.ml.linalg import Vectors
# new_image = shrinked_img.flatten()
# true_label = int(np.where(np.array(category_to_number_model.labels) == 'motorcyles')[0][0])
# new_img_df = spark.createDataFrame([[Vectors.dense(new_image), true_label]], ['raw_pixels', 'label'])

from pyspark.ml.linalg import Vectors

new_ima...
```
Now `new_img_df` has one image
```python
display(new_img_df)
```
Use the `mlp2_model` and `new_img_df` to predict the category that the new image belongs to. Show the code you use for such prediction.
```python
mlp_simple_model.transform(new_img_df).show()

from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorAssembler

df = spark.createDataFrame([
    (0, "a"),
    (1, "b"),
    (2, "c"),
    (3, "a"),
    (4, "a"),
    (5, "c")
], ["id", "category"])

stringIndexer = StringIndexer(inputCol="category", outputCol...
```
```
+---+--------+-------------+-------------+--------------------------------------------+
| id|category|categoryIndex|  categoryVec|VectorAssembler_44d095f846fbbd1b9186__output|
+---+--------+-------------+-------------+--------------------------------------------+
|  0|       a|          0.0|(2,[0],[1.0])|             ...
```
Table of Contents

1  Name
2  Search
2.1  Load Cached Results
2.2  Build Model From Google Images
3  Analysis
3.1  Gender cross validation
3.2  Face Sizes
3.3  Screen Time Across All Shows
3.4  Appearances on a Single Show
3.5  Oth...
```python
from esper.prelude import *
from esper.identity import *
from esper.plot_util import *
from esper.topics import *
from esper import embed_google_images
```
Apache-2.0
app/notebooks/labeled_identities/shooters/robert_lewis_dear.ipynb
scanner-research/esper-tv
Name

Please add the person's name and their expected gender below (Male/Female).
```python
name = 'Robert Lewis Dear Jr'
gender = 'Male'
```
Search

Load Cached Results

Reads the cached identity model from local disk. Run this if the person has been labelled before and you only wish to regenerate the graphs. Otherwise, if you have never created a model for this person, please see the next section.
```python
assert name != ''

results = FaceIdentityModel.load(name=name)  # or load_from_gcs(name=name)
imshow(tile_images([cv2.resize(x[1][0], (200, 200)) for x in results.model_params['images']], cols=10))
plt.show()
plot_precision_and_cdf(results)
```
Build Model From Google Images

Run this section if you do not have a cached model and precision curve estimates. This section will grab images using Google Image Search and score each of the faces in the dataset. We will interactively build the precision vs score curve. It is important that the images that you select a...
```python
assert name != ''

# Grab face images from Google
img_dir = embed_google_images.fetch_images(name)

# If the images returned are not satisfactory, rerun the above with extra params:
#   query_extras=''  # additional keywords to add to search
#   force=True      # ignore cached images

face_imgs = load_and_select_face...
```
Now we will validate which of the images in the dataset are of the target identity. __Hover over with mouse and press S to select a face. Press F to expand the frame.__
```python
show_reference_imgs()
print(('Mark all images that ARE NOT {}. Thumbnails are ordered by DESCENDING distance '
       'to your selected images. (The first page is more likely to have non "{}" images.) '
       'There are a total of {} frames. (CLICK THE DISABLE JUPYTER KEYBOARD BUTTON '
       'BEFORE PROCEEDING.)').fo...
```
Run the following cell after labelling to compute the precision curve. Do not forget to re-enable jupyter shortcuts.
```python
# Compute the precision from the selections
lower_precision = precision_model.compute_precision_for_lower_buckets(lower_widget.selected)
upper_precision = precision_model.compute_precision_for_upper_buckets(upper_widget.selected)
precision_by_bucket = {**lower_precision, **upper_precision}

results = FaceIdentityModel(...
```
The next cell persists the model locally.
```python
results.save()
```
Analysis

Gender cross validation

Situations where the identity model disagrees with the gender classifier may be cause for alarm. We would like to check that instances of the person have the expected gender as a sanity check. This section shows the breakdown of the identity instances and their labels from the gender c...
```python
gender_breakdown = compute_gender_breakdown(results)

print('Expected counts by gender:')
for k, v in gender_breakdown.items():
    print('  {} : {}'.format(k, int(v)))
print()

print('Percentage by gender:')
denominator = sum(v for v in gender_breakdown.values())
for k, v in gender_breakdown.items():
    print('  {} :...
```
Situations where the identity detector returns high confidence, but where the gender is not the expected gender indicate either an error on the part of the identity detector or the gender detector. The following visualization shows randomly sampled images, where the identity detector returns high confidence, grouped by...
```python
high_probability_threshold = 0.8
show_gender_examples(results, high_probability_threshold)
```
Face Sizes

Faces shown on-screen vary in size. For a person such as a host, they may be shown in a full body shot or as a face in a box. Faces in the background or those part of side graphics might be smaller than the rest. When calculating screentime for a person, we would like to know whether the results represent t...
```python
plot_histogram_of_face_sizes(results)
```
The histogram above shows the distribution of face sizes, but not how those sizes occur in the dataset. For instance, one might ask why some faces are so large or whether the small faces are actually errors. The following cell groups example faces, which are of the target identity with high probability, by their sizes in t...
```python
high_probability_threshold = 0.8
show_faces_by_size(results, high_probability_threshold, n=10)
```
Screen Time Across All Shows

One question that we might ask about a person is whether they received a significantly different amount of screentime on different shows. The following section visualizes the amount of screentime by show in total minutes and also in proportion of the show's total time. For a celebrity or po...
```python
screen_time_by_show = get_screen_time_by_show(results)
plot_screen_time_by_show(name, screen_time_by_show)
```
We might also wish to validate these findings by checking whether the person's name is mentioned in the subtitles. This might be helpful in determining whether extra or lack of screentime for a person may be due to a show's aesthetic choices. The following plots compare the screen time with the number of c...
```python
caption_mentions_by_show = get_caption_mentions_by_show([name.upper()])
plot_screen_time_and_other_by_show(name, screen_time_by_show, caption_mentions_by_show,
                                   'Number of caption mentions', 'Count')
```
Appearances on a Single Show

For people such as hosts, we would like to examine in greater detail the screen time allotted for a single show. First, fill in a show below.
```python
show_name = 'INSERT-SHOW-HERE'

# Compute the screen time for each video of the show
screen_time_by_video_id = get_screen_time_by_video(results, show_name)
```
One question we might ask about a host is how long they are shown on screen in an episode. Likewise, we might also ask for how many episodes the host is not present due to being on vacation or on assignment elsewhere. The following cell plots a histogram of the distribution of the length of the person's appearances i...
```python
plot_histogram_of_screen_times_by_video(name, show_name, screen_time_by_video_id)
```
For a host, we expect screentime over time to be consistent as long as the person remains a host. For figures such as Hillary Clinton, we expect the screentime to track events in the real world, such as the lead-up to the 2016 election, and then to drop afterwards. The following cell plots a time series of the person's screen...
```python
plot_screentime_over_time(name, show_name, screen_time_by_video_id)
```
We hypothesized that a host is more likely to appear at the beginning of a video and then also appear throughout the video. The following plot visualizes the distribution of shot beginning times for videos of the show.
```python
plot_distribution_of_appearance_times_by_video(results, show_name)
```
In section 3.3, we see that some shows may have much larger variance in the screen time estimates than others. This may be because a host or frequent guest appears similar to the target identity. Alternatively, the images of the identity may be consistently low quality, leading to lower scores. The next cell plots ...
```python
plot_distribution_of_identity_probabilities(results, show_name)
```
Other People Who Are On Screen

For some people, we are interested in who they are often portrayed on screen with. For instance, the White House press secretary might routinely be shown with the same group of political pundits. A host of a show might be expected to be on screen with their co-host most of the time. The ...
```python
get_other_people_who_are_on_screen(results, k=25, precision_thresh=0.8)
```
Persist to Cloud

The remaining code in this notebook uploads the built identity model to Google Cloud Storage and adds the FaceIdentity labels to the database.

Save Model to Google Cloud Storage
```python
gcs_model_path = results.save_to_gcs()
```
To ensure that the model stored to Google Cloud is valid, we load it and print the precision and cdf curve below.
```python
gcs_results = FaceIdentityModel.load_from_gcs(name=name)
imshow(tile_images([cv2.resize(x[1][0], (200, 200)) for x in gcs_results.model_params['images']], cols=10))
plt.show()
plot_precision_and_cdf(gcs_results)
```
Save Labels to DB

If you are satisfied with the model, we can commit the labels to the database.
```python
from django.core.exceptions import ObjectDoesNotExist

def standardize_name(name):
    return name.lower()

person_type = ThingType.objects.get(name='person')

try:
    person = Thing.objects.get(name=standardize_name(name), type=person_type)
    print('Found person:', person.name)
except ObjectDoesNotExist:
    person...
```
Commit the person and labeler

The labeler and person have been created but not yet saved to the database. If a person was created, please make sure that the name is correct before saving.
```python
person.save()
labeler.save()
```
Commit the FaceIdentity labels

Now, we are ready to add the labels to the database. We will create a FaceIdentity for each face whose probability exceeds the minimum threshold.
```python
commit_face_identities_to_db(results, person, labeler, min_threshold=0.001)
print('Committed {} labels to the db'.format(
    FaceIdentity.objects.filter(labeler=labeler).count()))
```
Python graphics: Matplotlib fundamentals

We illustrate three approaches to graphing data with Python's Matplotlib package:

* Approach 1: Apply a `plot()` method to a dataframe
* Approach 2: Use the `plot(x,y)` function
* Approach 3: Create a figure object and apply methods to it

The last one is the least intuitive...
```python
import sys                       # system module
import pandas as pd              # data package
import matplotlib as mpl         # graphics package
import matplotlib.pyplot as plt  # graphics module
import datetime as dt           # date and time module

# check versions (overkill, bu...
```
```
Python version: 3.5.1 |Anaconda 2.5.0 (x86_64)| (default, Dec  7 2015, 11:24:55) [GCC 4.2.1 (Apple Inc. build 5577)]
Pandas version: 0.17.1
Matplotlib version: 1.5.1
Today: 2016-03-10
```
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
**Comment.** When you run the code cell above, its output appears below it. **Exercise.** Enter `pd.read_csv?` in the empty cell below. Run the cell (Cell at the top, or shift-enter). Do you see the documentation? This is the Jupyter version of help in Spyder's IPython console.
```python
# This is an IPython command. It puts plots here in the notebook, rather than a separate window.
%matplotlib inline
```
Create dataframes to play with:

* US GDP and consumption
* World Bank GDP per capita for several countries
* Fama-French equity returns
```python
# US GDP and consumption
gdp = [13271.1, 13773.5, 14234.2, 14613.8, 14873.7, 14830.4, 14418.7,
       14783.8, 15020.6, 15369.2, 15710.3]
pce = [8867.6, 9208.2, 9531.8, 9821.7, 10041.6, 10007.2, 9847.0,
       10036.3, 10263.5, 10449.7, 10699.7]
year = list(range(2003, 2014))  # use range for years 2003-2013
...
```
**Comment.** In the previous cell, we used the `print()` function to produce output. Here we just put the name of the dataframe. The latter displays the dataframe -- and formats it nicely -- **if it's the last statement in the cell**.
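A minimal sketch of the difference (the numbers are made up to stand in for the `us` dataframe):

```python
import pandas as pd

df = pd.DataFrame({'gdp': [13271.1, 13773.5]}, index=[2003, 2004])

# print() emits plain text; in a notebook, a bare `df` as the cell's
# last statement triggers the rich (HTML table) display instead.
print(df)

# What Jupyter renders for a DataFrame is roughly its HTML form:
html = df.to_html()
```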
# Fama-French import pandas.io.data as web # read annual data from website and rename variables ff = web.DataReader('F-F_Research_Data_factors', 'famafrench')[1] ff.columns = ['xsm', 'smb', 'hml', 'rf'] ff['rm'] = ff['xsm'] + ff['rf'] ff = ff[['rm', 'rf']] # extract rm and rf (return on market, riskfree rate, pe...
/Users/sglyon/anaconda3/lib/python3.5/site-packages/pandas/io/data.py:33: FutureWarning: The pandas.io.data module is moved to a separate package (pandas-datareader) and will be removed from pandas in a future version. After installing the pandas-datareader package (https://github.com/pydata/pandas-datareader), you ca...
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
**Comment.** The warning in pink tells us that the Pandas DataReader will be spun off into a separate package in the near future. **Exercise.** What kind of object is `wbdf`? What are its column and row labels? **Exercise.** What is `ff.index`? What does that tell us?
# This is an IPython command: it puts plots here in the notebook, rather than a separate window.
%matplotlib inline
_____no_output_____
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
Digression: Graphing in Excel

Remind yourself that we need to choose:

* Data. Typically a block of cells in a spreadsheet.
* Chart type. Lines, bars, scatter, or something else.
* x and y variables. What is the x axis? What is y?

We'll see the same in Matplotlib.

Approach 1: Apply `plot()` method to da...
# try this with US GDP
us.plot()

# do GDP alone
us['gdp'].plot()

# bar chart
us.plot(kind='bar')
_____no_output_____
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
**Exercise.** Show that we get the same output from `us.plot.bar()`.
# scatter plot
# we need to be explicit about the x and y variables: x = 'gdp', y = 'pce'
us.plot.scatter('gdp', 'pce')
_____no_output_____
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
**Exercise.** Enter `us.plot(kind='bar')` and `us.plot.bar()` in separate cells. Show that they produce the same bar chart.

**Exercise.** Add each of these arguments, one at a time, to `us.plot()`:

* `kind='area'`
* `subplots=True`
* `sharey=True`
* `figsize=(3,6)`
* `ylim=(0,16000)`

What do they do?

**Exercise.** Type `...
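If you want to check your answers outside the notebook, here is a sketch using the off-screen Agg backend, with made-up numbers standing in for the `us` dataframe:

```python
import matplotlib
matplotlib.use('Agg')  # off-screen backend so this runs without a display
import pandas as pd

# Stand-in for the `us` dataframe built earlier in the notebook
us = pd.DataFrame({'gdp': [13271.1, 13773.5, 14234.2],
                   'pce': [8867.6, 9208.2, 9531.8]},
                  index=[2003, 2004, 2005])

# subplots=True draws one panel per column and returns one Axes each;
# figsize and ylim control the canvas size and the y-axis range.
axes = us.plot(subplots=True, figsize=(3, 6), ylim=(0, 16000))
print(len(axes))  # one Axes per column
```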
# now try a few things with the Fama-French data
ff.plot()
_____no_output_____
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
**Exercise.** What do each of the arguments do in the code below?
# histograms
ff.plot(kind='hist', bins=20, subplots=True)

# "smoothed" histograms (kernel density estimates)
ff.plot(kind='kde', subplots=True, sharex=True)
_____no_output_____
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
**Exercise.** Let's see if we can dress up the histogram a little. Try adding, one at a time, the arguments `title='Fama-French returns'`, `grid=True`, and `legend=False`. What does the documentation say about them? What do they do? **Exercise.** What do the histograms tell us about the two returns? How do they ...
# import pyplot module of Matplotlib
import matplotlib.pyplot as plt

plt.plot(us.index, us['gdp'])
_____no_output_____
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
**Exercise.** What is the `x` variable here? The `y` variable?
# we can do two lines together
plt.plot(us.index, us['gdp'])
plt.plot(us.index, us['pce'])

# or a bar chart
plt.bar(us.index, us['gdp'], align='center')
_____no_output_____
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
**Exercise.** Experiment with

```python
plt.bar(us.index, us['gdp'], align='center', alpha=0.65,
        color='red', edgecolor='green')
```

Play with the arguments one by one to see what they do. Or use `plt.bar?` to look them up. Add comments to remind yourself. *Bonus points:* Can you make th...
# we can also add things to plots plt.plot(us.index, us['gdp']) plt.plot(us.index, us['pce']) plt.title('US GDP', fontsize=14, loc='left') # add title plt.ylabel('Billions of 2009 USD') # y axis label plt.xlim(2002.5, 2013.5) # shrink x axis limits plt.tick_params(labelcolor='red') ...
_____no_output_____
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
**Comment.** All of these statements must be in the same cell for this to work. **Comment.** This is overkill -- it looks horrible -- but it makes the point that we control everything in the plot. We recommend you do very little of this until you're more comfortable with the basics. **Exercise.** Add a `plt.ylim()...
# create fig and ax objects
fig, ax = plt.subplots()
_____no_output_____
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
**Exercise.** What do we have here? What `type` are `fig` and `ax`?

We say `fig` is a **figure object** and `ax` is an **axis object**. This means:

* `fig` is a blank canvas for creating a figure.
* `ax` is everything in it: axes, labels, lines or bars, and so on.

**Exercise.** Use tab completion to see what m...
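A quick way to inspect the two objects outside the notebook (off-screen Agg backend):

```python
import matplotlib
matplotlib.use('Agg')  # off-screen backend
import matplotlib.pyplot as plt
from matplotlib.figure import Figure
from matplotlib.axes import Axes

fig, ax = plt.subplots()

# The concrete class name of `ax` varies across Matplotlib versions
# (AxesSubplot vs Axes), so check with isinstance instead of the name.
print(isinstance(fig, Figure), isinstance(ax, Axes))
```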
# let's try that again, this time with content
# create objects
fig, ax = plt.subplots()
# add things by applying methods to ax
us.plot(ax=ax)
_____no_output_____
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
**Comment.** Both of these statements must be in the same cell.
# Fama-French example
fig, ax = plt.subplots()
ff.plot(ax=ax, kind='line',        # line plot
        color=['blue', 'magenta'], # line color
        title='Fama-French market and riskfree returns')
_____no_output_____
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
**Exercise.** Let's see if we can teach ourselves the rest:

* Add the argument `kind='bar'` to convert this into a bar chart.
* Add the argument `alpha=0.65` to the bar chart. What does it do?
* What would you change in the bar chart to make it look better? Use the help facility to find options that might help. ...
fig, ax = plt.subplots() us.plot(ax=ax) ax.set_title('US GDP and Consumption', fontsize=14, loc='left') ax.set_ylabel('Billions of 2013 USD') ax.legend(['Real GDP', 'Consumption'], loc=0) # more descriptive variable names ax.set_xlim(2002.5, 2013.5) # expand x axis limits ax.tick_params(lab...
_____no_output_____
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
(Your results may differ, but we really enjoyed that.)

**Exercise.** Use the `set_xlabel()` method to add an x-axis label. What would you choose? Or would you prefer to leave it empty?

**Exercise.** Enter `ax.legend?` to access the documentation for the `legend` method. What options appeal to you?

**Exerc...
# this creates a 2-dimensional ax fig, ax = plt.subplots(nrows=2, ncols=1, sharex=True) print('Object ax has dimension', len(ax)) # now add some content fig, ax = plt.subplots(nrows=2, ncols=1, sharex=True, sharey=True) us['gdp'].plot(ax=ax[0], color='green') # first plot us['pce'].plot(ax=ax[1], color='red') ...
_____no_output_____
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
Examples

We conclude with examples that take the data from the previous chapter and make better graphs with it.

Student test scores (PISA)

These international test scores are often used to compare the quality of education across countries.
# data input import pandas as pd url = 'http://dx.doi.org/10.1787/888932937035' pisa = pd.read_excel(url, skiprows=18, # skip the first 18 rows skipfooter=7, # skip the last 7 parse_cols=[0,1,9,13], # select columns index_co...
_____no_output_____
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
**Comment.** Yikes! That's horrible! What can we do about it? Let's make the figure taller. The `figsize` argument has the form `(width, height)`. The default is `(6, 4)`. We want a tall figure, so we need to increase the height setting.
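The `figsize` setting is easy to verify in isolation; a minimal sketch (off-screen Agg backend):

```python
import matplotlib
matplotlib.use('Agg')  # off-screen backend
import matplotlib.pyplot as plt

# figsize is (width, height) in inches; the default is (6, 4), so a
# tall figure just needs a larger second number.
fig, ax = plt.subplots(figsize=(4, 13))
print(tuple(fig.get_size_inches()))
```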
# make the plot taller
fig, ax = plt.subplots(figsize=(4, 13))  # note figsize
pisa['Math'].plot(kind='barh', ax=ax)
ax.set_title('PISA Math Score', loc='left')
_____no_output_____
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
**Comment.** What if we wanted to make the US bar red? This is ridiculously complicated, but we used our Google fu and found [a solution](http://stackoverflow.com/questions/18973404/setting-different-bar-color-in-matplotlib-python). Remember: The solution to many problems is Google fu + patience.
fig, ax = plt.subplots()
pisa['Math'].plot(kind='barh', ax=ax, figsize=(4,13))
ax.set_title('PISA Math Score', loc='left')
ax.get_children()[36].set_color('r')
_____no_output_____
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
**Exercise.** Create the same graph for the Reading score.

World Bank data

We'll use World Bank data for GDP, GDP per capita, and life expectancy to produce a few graphs and illustrate some methods we haven't seen yet.

* Bar charts of GDP and GDP per capita
* Scatter plot (bubble plot) of life expectancy v GDP per ...
# load packages (redundancy is ok) import pandas as pd # data management tools from pandas.io import wb # World Bank api import matplotlib.pyplot as plt # plotting tools # variable list (GDP, GDP per capita, life expectancy) var = ['NY.GDP.PCAP.PP.KD', 'NY.GDP.MKTP.PP.KD', 'SP.DYN....
_____no_output_____
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
And just because it's fun, here's an example of Tufte-like axes from [Matplotlib examples](http://matplotlib.org/examples/ticks_and_spines/spines_demo_dropped.html). If you want to do this yourself, copy the last six lines and prepare yourself to sink some time into it.
# ditto for GDP per capita (per person) fig, ax = plt.subplots() df['gdppc'].plot(ax=ax, kind='barh', color='b', alpha=0.5) ax.set_title('GDP Per Capita', loc='left', fontsize=14) ax.set_xlabel('Thousands of US Dollars') ax.set_ylabel('') # Tufte-like axes ax.spines['left'].set_position(('outward', 7)) ax.spines['bott...
_____no_output_____
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
**Exercise (challenging).** Make the ticks point out.
# scatterplot of life expectancy vs gdp per capita fig, ax = plt.subplots() ax.scatter(df['gdppc'], df['life'], # x,y variables s=df['pop']/10**6, # size of bubbles alpha=0.5) ax.set_title('Life expectancy vs. GDP per capita', loc='left', fontsize=14) ax.set_xlabel('GDP Per Capit...
_____no_output_____
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
**Exercise.** Make the bubbles a little larger.

Styles (optional)

Graph settings you might like.
# We'll look at this chart under a variety of styles. # Let's make a function so we don't have to repeat the # code to create def gdp_bar(): fig, ax = plt.subplots() df['gdp'].plot(ax=ax, kind='barh', alpha=0.5) ax.set_title('Real GDP', loc='left', fontsize=14) ax.set_xlabel('Trillions of US Dollars') ...
_____no_output_____
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
**Exercise.** Create the same graph with this statement at the top:

```python
plt.style.use('fivethirtyeight')
```

(Once we execute this statement, it stays executed.)

**Comment.** We can get a list of available styles from `plt.style.available`.
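Since `plt.style.use()` stays executed, Matplotlib's context-manager form is handy when you want a style for a single figure only; a sketch:

```python
import matplotlib
matplotlib.use('Agg')  # off-screen backend
import matplotlib.pyplot as plt

# 'ggplot' ships with Matplotlib, so it should appear in the list.
print('ggplot' in plt.style.available)

# Scope a style to one figure instead of changing the global state.
with plt.style.context('ggplot'):
    fig, ax = plt.subplots()
```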
plt.style.available
_____no_output_____
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
**Comment.** Ignore the seaborn styles; that's a package we don't have yet. **Exercise.** Try another one by editing the code below.
plt.style.use('fivethirtyeight')
gdp_bar()
_____no_output_____
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
**Comment.** For aficionados, the always tasteful [xkcd style](http://xkcd.com/1235/).
plt.style.use('ggplot')
gdp_bar()

plt.xkcd()
gdp_bar()
_____no_output_____
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
**Comment.** We reset the style with these two lines:
mpl.rcParams.update(mpl.rcParamsDefault)
%matplotlib inline
_____no_output_____
MIT
Code/IPython/bootcamp_graphics.ipynb
ljsun88/data_bootcamp_nyu
Load audience overlap edges for level 1
level = 1 audience_overlap_sites = load_level_data(os.path.join(_ALEXA_DATA_PATH, 'corpus_2018_audience_overlap_sites_scrapping_result.json'), level=level) audience_overlap_sites_NODES = create_audience_overlap_nodes(audience_overlap_sites) print(audience_overlap_sites_NODES[:5]) edge_df = pd.DataFrame(audience_overla...
_____no_output_____
MIT
notebooks/graph_algos/node2vec/corpus 2018 audience_overlap lvl data 1.ipynb
Panayot9/News-Media-Peers
Create Graph
import stellargraph as sg

G = sg.StellarGraph(edges=edge_df)
print(G.info())
StellarGraph: Undirected multigraph Nodes: 11865, Edges: 20399 Node types: default: [11865] Features: none Edge types: default-default->default Edge types: default-default->default: [20399] Weights: all 1 (default) Features: none
MIT
notebooks/graph_algos/node2vec/corpus 2018 audience_overlap lvl data 1.ipynb
Panayot9/News-Media-Peers
Create Node2Vec models
models = create_node2vec_model(G, dimensions=[64, 128, 256, 512, 1024], is_weighted=False, prefix='corpus_2018_audience_overlap_lvl_one')
Start creating random walks Number of random walks: 118650
MIT
notebooks/graph_algos/node2vec/corpus 2018 audience_overlap lvl data 1.ipynb
Panayot9/News-Media-Peers
Export embeddings as feature
for model_name, model in models.items(): print(f'Processing model: {model_name}') embeddings_wv = {site: model.wv.get_vector(site).tolist() for site in G.nodes()} export_model_as_feature(embeddings_wv, model_name, data_year='2018') run_experiment(features=model_name, dataset='emnlp2018', task='fact') ...
Processing model: corpus_2018_audience_overlap_lvl_one_unweighted_64D.model +------+---------------------+---------------+--------------------+-----------------------------------------------------------+ | task | classification_mode | type_training | normalize_features | features ...
MIT
notebooks/graph_algos/node2vec/corpus 2018 audience_overlap lvl data 1.ipynb
Panayot9/News-Media-Peers
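`export_model_as_feature` and `run_experiment` above are helpers from this repository; as a rough, hypothetical stand-in, persisting the site-to-vector dict as JSON might look like this (the filename and directory layout are assumptions, not the project's actual convention):

```python
import json
import os
import tempfile

def export_embeddings_as_json(embeddings, feature_name, data_year):
    """Hypothetical stand-in: write a {node: vector} mapping to
    features_<year>/<name>.json under a temp directory."""
    out_dir = os.path.join(tempfile.gettempdir(), f'features_{data_year}')
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, f'{feature_name}.json')
    with open(path, 'w') as f:
        json.dump(embeddings, f)
    return path

path = export_embeddings_as_json({'example.com': [0.1, 0.2]}, 'demo_feature', '2018')
with open(path) as f:
    restored = json.load(f)
print(restored['example.com'])
```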
MVTS Data Toolkit: A Toolkit for Pre-processing Multivariate Time Series Data

* **Title:** MVTS Data Toolkit: A Toolkit for Pre-processing Multivariate Time Series Data
* **Journal:** SoftwareX Journal [$\triangleright$](https://www.journals.elsevier.com/softwarex) (Elsevier)
* **Authors:** Azim Ahmadzadeh [$\oplus$](https...
import os
import yaml
import urllib.request

from mvtsdatatoolkit.data.data_retriever import DataRetriever  # for downloading data

import CONSTANTS as CONST

%matplotlib inline
_____no_output_____
MIT
demo.ipynb
skad00sh/gsudmlab-mvtsdata_toolkit
1. Download the Dataset

In this demo I use an example dataset. In the following cells, it will be automatically downloaded. But in case something goes wrong, here is the direct link: https://bitbucket.org/gsudmlab/mvtsdata_toolkit/downloads/petdataset_01.zip. Please note that here I am using the class `DataRetriever` th...
dr = DataRetriever(1)
dr.print_info()
URL: https://bitbucket.org/gsudmlab/mvtsdata_toolkit/downloads/petdataset_01.zip NAME: petdataset_01.zip TYPE: application/zip SIZE: 32M
MIT
demo.ipynb
skad00sh/gsudmlab-mvtsdata_toolkit
Ready to download? This may take a few seconds, depending on your internet bandwidth. Wait for the progress bar.
where_to = './temp/'  # Don't change this path.
dr.retrieve(target_path=where_to)
Extracting: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2001/2001 [00:01<00:00, 1869.24it/s]
MIT
demo.ipynb
skad00sh/gsudmlab-mvtsdata_toolkit
OK. Let's see how many files are available to us now.
dr.get_total_number_of_files()
_____no_output_____
MIT
demo.ipynb
skad00sh/gsudmlab-mvtsdata_toolkit
2. Setup Configurations

For tasks such as feature extraction (module: `features.feature_extractor`) and data analysis (modules: `data_analysis.mvts_data_analysis` and `data_analysis.extracted_features_analysis`) a configuration file must be provided by the user. This configuration file is a `yml` file, specifically define...
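A minimal sketch of loading such a configuration with PyYAML (already imported in this demo); the keys below are illustrative only, not the toolkit's actual schema:

```python
import yaml

# Hypothetical keys for illustration; consult mvtsdatatoolkit's docs
# for the actual configuration schema it expects.
config_text = """
PATH_TO_MVTS: ./temp/petdataset_01/
META_DATA_TAGS: [id, lab, st, et]
"""
config = yaml.safe_load(config_text)
print(config['PATH_TO_MVTS'])
```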
conf_url = 'https://bitbucket.org/gsudmlab/mvtsdata_toolkit/downloads/demo_configs.yml'
path_to_config = './demo_configs.yml'
urllib.request.urlretrieve(conf_url, path_to_config)
_____no_output_____
MIT
demo.ipynb
skad00sh/gsudmlab-mvtsdata_toolkit