The data contains: a link to the Wikipedia article, the name of the person, and the text of the article.
people.head()
len(people)
machine-learning/foundations/document-retrieval-assignment.ipynb
scoaste/showcase
mit
Explore the dataset and check out the text it contains. Exploring the entry for President Obama:
obama = people[people['name'] == 'Barack Obama']
obama
obama['text']
Exploring the entry for actor George Clooney
clooney = people[people['name'] == 'George Clooney']
clooney['text']
Get the word counts for the Obama article
obama['word_count'] = graphlab.text_analytics.count_words(obama['text'])
print obama['word_count']
Sort the word counts for the Obama article. Turning the dictionary of word counts into a table:
obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count'])
Sorting the word counts to show most common words at the top
obama_word_count_table.head()
obama_word_count_table.sort('count', ascending=False)
The most common words include uninformative words like "the", "in", "and", ... Compute TF-IDF for the corpus. To give more weight to informative words, we weight them by their TF-IDF scores.
people['word_count'] = graphlab.text_analytics.count_words(people['text'])
people.head()
tfidf = graphlab.text_analytics.tf_idf(people['word_count'])
# Earlier versions of GraphLab Create returned an SFrame rather than a single SArray
# This notebook was created using GraphLab Create version 1.7.1
if graphlab.version...
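GraphLab's exact TF-IDF weighting isn't reproduced here, but the idea can be sketched in plain Python. The toy corpus and the simple `tf * log(N / df)` weighting below are made up for illustration only:

```python
import math
from collections import Counter

# Toy corpus standing in for the Wikipedia articles (made-up text)
docs = ["obama was president", "clinton was president", "beckham played football"]
counts = [Counter(d.split()) for d in docs]
n_docs = len(docs)

def tf_idf(word_counts):
    # tf = raw count; idf = log(N / number of docs containing the word)
    out = {}
    for word, tf in word_counts.items():
        df = sum(1 for c in counts if word in c)
        out[word] = tf * math.log(n_docs / df)
    return out

tfidf0 = tf_idf(counts[0])
# "president" and "was" appear in 2 of 3 docs, so they get less weight than "obama"
print(sorted(tfidf0.items(), key=lambda kv: -kv[1]))
```

Words shared across many documents get a small idf factor, which is exactly why the uninformative "the"/"in"/"and" sink in a real corpus.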
Examine the TF-IDF for the Obama article
obama = people[people['name'] == 'Barack Obama']
obama[['tfidf']].stack('tfidf', new_column_name=['word','tfidf']).sort('tfidf', ascending=False)
Words with the highest TF-IDF are much more informative. Manually compute distances between a few people. Let's manually compare the distances between the articles for a few famous people.
clinton = people[people['name'] == 'Bill Clinton']
beckham = people[people['name'] == 'David Beckham']
Is Obama closer to Clinton than to Beckham? We will use cosine distance, which is given by (1 - cosine_similarity), and find that the article about President Obama is closer to the one about former President Clinton than to that of footballer David Beckham.
graphlab.distances.cosine(obama['tfidf'][0], clinton['tfidf'][0])
graphlab.distances.cosine(obama['tfidf'][0], beckham['tfidf'][0])
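The GraphLab call isn't needed to see the idea; a minimal numpy sketch of cosine distance = 1 - cosine similarity:

```python
import numpy as np

def cosine_distance(u, v):
    """Cosine distance = 1 - cosine similarity of two vectors."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Identical vectors -> distance 0; orthogonal vectors -> distance 1
print(cosine_distance([1, 2, 3], [1, 2, 3]))  # ~0.0
print(cosine_distance([1, 0], [0, 1]))        # 1.0
```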
Build a nearest neighbor model for document retrieval We now create a nearest-neighbors model and apply it to document retrieval.
knn_model = graphlab.nearest_neighbors.create(people, features=['tfidf'], label='name')
knn_model.summary()
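A brute-force sketch of what the retrieval model does: compute the cosine distance from a query row to every row of a tf-idf matrix and return the closest names. The names and matrix values below are made up for illustration, not the GraphLab model:

```python
import numpy as np

# Toy tf-idf matrix: one row per "document" (made-up values)
names = ["Obama", "Clinton", "Beckham"]
X = np.array([[2.0, 1.0, 0.0],
              [1.8, 0.9, 0.1],
              [0.0, 0.2, 3.0]])

def nearest(query_row, X, names, k=2):
    # Normalize rows, then cosine distance is 1 - dot product
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    q = query_row / np.linalg.norm(query_row)
    dist = 1.0 - Xn @ q
    order = np.argsort(dist)[:k]
    return [(names[i], float(dist[i])) for i in order]

print(nearest(X[0], X, names))  # Obama itself first, then Clinton
```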
Applying the nearest-neighbors model for retrieval Who is closest to Obama?
knn_model.query(obama)
As we can see, President Obama's article is closest to the one about his Vice President Biden, and to those of other politicians. Other examples of document retrieval:
swift = people[people['name'] == 'Taylor Swift']
knn_model.query(swift)
jolie = people[people['name'] == 'Angelina Jolie']
knn_model.query(jolie)
arnold = people[people['name'] == 'Arnold Schwarzenegger']
knn_model.query(arnold)
elton = people[people['name'] == 'Elton John']
elton
elton[['word_count']].stack('w...
Sparsity preserving clustering Keras example
! pip install -q tensorflow-model-optimization
import tensorflow as tf
import numpy as np
import tempfile
import zipfile
import os
tensorflow_model_optimization/g3doc/guide/combine/sparse_clustering_example.ipynb
tensorflow/model-optimization
apache-2.0
Train a tf.keras model for MNIST to be pruned and clustered
# Load MNIST dataset
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0

model = tf.keras.Sequential([
    tf.keras.laye...
Evaluate the baseline model and save it for later usage
_, baseline_model_accuracy = model.evaluate(
    test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)

_, keras_file = tempfile.mkstemp('.h5')
print('Saving model to: ', keras_file)
tf.keras.models.save_model(model, keras_file, include_optimizer=False)
Prune and fine-tune the model to 50% sparsity Apply the prune_low_magnitude() API to prune the whole pre-trained model to achieve the model that is to be clustered in the next step. For how best to use the API to achieve the best compression rate while maintaining your target accuracy, refer to the pruning comprehensiv...
import tensorflow_model_optimization as tfmot

prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude

pruning_params = {
    'pruning_schedule': tfmot.sparsity.keras.ConstantSparsity(0.5, begin_step=0, frequency=100)
}

callbacks = [
    tfmot.sparsity.keras.UpdatePruningStep()
]

pruned_model = prune_low_ma...
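The ConstantSparsity schedule above reaches 50% sparsity by zeroing the smallest-magnitude weights. A minimal numpy sketch of that idea (an illustration only, not the tfmot implementation):

```python
import numpy as np

def prune_low_magnitude_np(weights, sparsity=0.5):
    """Zero out the smallest-magnitude entries until `sparsity` is reached."""
    w = weights.copy()
    k = int(round(sparsity * w.size))
    if k:
        # Threshold = k-th smallest absolute value (ties may overshoot slightly)
        threshold = np.sort(np.abs(w), axis=None)[k - 1]
        w[np.abs(w) <= threshold] = 0.0
    return w

w = np.array([[0.1, -0.9], [0.5, -0.05]])
pruned = prune_low_magnitude_np(w, 0.5)
print(pruned)                  # the two smallest-magnitude entries become 0
print(np.mean(pruned == 0.0))  # 0.5 sparsity
```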
Fine-tune the model, check sparsity, and evaluate the accuracy against baseline Fine-tune the model with pruning for 3 epochs.
# Fine-tune model
pruned_model.fit(
    train_images,
    train_labels,
    epochs=3,
    validation_split=0.1,
    callbacks=callbacks)
Define helper functions to calculate and print the sparsity of the model.
def print_model_weights_sparsity(model):
    for layer in model.layers:
        if isinstance(layer, tf.keras.layers.Wrapper):
            weights = layer.trainable_weights
        else:
            weights = layer.weights
        for weight in weights:
            if "kernel" not in weight.name or "centroid" in weigh...
Check that the model kernels were correctly pruned. We need to strip the pruning wrapper first. We also create a deep copy of the model to be used in the next step.
stripped_pruned_model = tfmot.sparsity.keras.strip_pruning(pruned_model)
print_model_weights_sparsity(stripped_pruned_model)

stripped_pruned_model_copy = tf.keras.models.clone_model(stripped_pruned_model)
stripped_pruned_model_copy.set_weights(stripped_pruned_model.get_weights())
Apply clustering and sparsity preserving clustering and check its effect on model sparsity in both cases Next, we apply both clustering and sparsity preserving clustering on the pruned model and observe that the latter preserves sparsity on your pruned model. Note that we stripped pruning wrappers from the pruned model...
# Clustering
cluster_weights = tfmot.clustering.keras.cluster_weights
CentroidInitialization = tfmot.clustering.keras.CentroidInitialization

clustering_params = {
    'number_of_clusters': 8,
    'cluster_centroids_init': CentroidInitialization.KMEANS_PLUS_PLUS
}

clustered_model = cluster_weights(stripped_pruned_model, *...
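Conceptually, sparsity preserving clustering runs k-means over the weight values while holding the pruned zeros fixed, so the sparsity survives. A rough numpy sketch of that idea (a toy illustration, not the tfmot algorithm):

```python
import numpy as np

def cluster_weights_np(w, n_clusters=4, preserve_zeros=True, n_iter=20, seed=0):
    """Tiny 1-D k-means on the weight values; optionally keep zeros fixed."""
    rng = np.random.default_rng(seed)
    flat = w.ravel()
    mask = flat != 0.0 if preserve_zeros else np.ones_like(flat, bool)
    vals = flat[mask]
    centroids = rng.choice(vals, size=n_clusters, replace=False)
    for _ in range(n_iter):
        # Assign each value to its nearest centroid, then recompute centroids
        assign = np.argmin(np.abs(vals[:, None] - centroids[None, :]), axis=1)
        for c in range(n_clusters):
            if np.any(assign == c):
                centroids[c] = vals[assign == c].mean()
    out = flat.copy()
    out[mask] = centroids[assign]
    return out.reshape(w.shape)

w = np.array([0.0, 0.0, 0.11, 0.09, 0.52, 0.48, -0.3, -0.31])
cw = cluster_weights_np(w, n_clusters=3)
print(cw)                  # zeros untouched, other values snapped to 3 centroids
print(np.mean(cw == 0.0))  # sparsity preserved: 0.25
```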
Check sparsity for both models.
print("Clustered Model sparsity:\n")
print_model_weights_sparsity(clustered_model)

print("\nSparsity preserved clustered Model sparsity:\n")
print_model_weights_sparsity(sparsity_clustered_model)
Create 1.6x smaller models from clustering. Define a helper function to get the zipped model file.
def get_gzipped_model_size(file):
    # It returns the size of the gzipped model in kilobytes.
    _, zipped_file = tempfile.mkstemp('.zip')
    with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
        f.write(file)
    return os.path.getsize(zipped_file)/1000

# Clustered model
clustered_model_file...
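The same compression effect can be demonstrated with plain numpy arrays: an array with many zeroed entries gzips far better than a dense random one. The threshold and file handling below are made up for the demo:

```python
import gzip
import os
import tempfile
import numpy as np

def get_gzipped_size_kb(path):
    """Size of the gzip-compressed file, in kilobytes."""
    gz_path = path + ".gz"
    with open(path, "rb") as src, gzip.open(gz_path, "wb") as dst:
        dst.write(src.read())
    return os.path.getsize(gz_path) / 1000

rng = np.random.default_rng(0)
dense = rng.normal(size=10_000).astype(np.float32)
sparse = dense.copy()
sparse[np.abs(sparse) < 0.7] = 0.0  # ~50% zeros compress much better

sizes = {}
for name, arr in [("dense", dense), ("sparse", sparse)]:
    fd, path = tempfile.mkstemp(".npy")
    os.close(fd)
    np.save(path, arr)
    sizes[name] = get_gzipped_size_kb(path)
print(sizes)  # the sparse file is noticeably smaller after gzip
```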
Create a TFLite model by combining sparsity preserving weight clustering and post-training quantization. Strip the clustering wrappers and convert to TFLite.
stripped_sparsity_clustered_model = tfmot.clustering.keras.strip_clustering(sparsity_clustered_model)

converter = tf.lite.TFLiteConverter.from_keras_model(stripped_sparsity_clustered_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
sparsity_clustered_quant_model = converter.convert()

_, pruned_and_clustere...
See the persistence of accuracy from TF to TFLite
def eval_model(interpreter):
    input_index = interpreter.get_input_details()[0]["index"]
    output_index = interpreter.get_output_details()[0]["index"]
    # Run predictions on every image in the "test" dataset.
    prediction_digits = []
    for i, test_image in enumerate(test_images):
        if i % 1000 == 0:
            print(f"Ev...
Evaluate the model, which has been pruned, clustered and quantized, and see that the accuracy from TensorFlow persists in the TFLite backend.
# Keras model evaluation
stripped_sparsity_clustered_model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])
_, sparsity_clustered_keras_accuracy = stripped_sparsity_clustered_model.evaluate(
    test_images, test_labels, ve...
Transform EEG data using current source density (CSD). This script shows an example of how to use CSD :footcite:`PerrinEtAl1987,PerrinEtAl1989,Cohen2014,KayserTenke2015`. CSD takes the spatial Laplacian of the sensor signal (derivative in both x and y). It does what a planar gradiometer does in MEG. Computing these spatia...
# Authors: Alex Rockhill <aprockhill@mailbox.org>
#
# License: BSD-3-Clause

import numpy as np
import matplotlib.pyplot as plt

import mne
from mne.datasets import sample

print(__doc__)

data_path = sample.data_path()
0.24/_downloads/1537c1215a3e40187a4513e0b5f1d03d/eeg_csd.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
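The spatial Laplacian that CSD is built on can be illustrated with a plain 5-point finite-difference stencil on a toy 2-D grid. This is a sketch of the operator only, not MNE's spline-based implementation:

```python
import numpy as np

def laplacian_2d(z):
    """Discrete 5-point Laplacian (second spatial derivative in x and y)."""
    lap = np.zeros_like(z, dtype=float)
    lap[1:-1, 1:-1] = (z[2:, 1:-1] + z[:-2, 1:-1] +
                       z[1:-1, 2:] + z[1:-1, :-2] - 4 * z[1:-1, 1:-1])
    return lap

# A broad Gaussian "bump" standing in for a smeared scalp potential
x = np.linspace(-2, 2, 41)
bump = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2))
lap = laplacian_2d(bump)
print(lap[20, 20] < 0)  # negative at the peak (a local maximum)
```

Because the Laplacian is large where the field curves sharply, it emphasizes local sources and reduces the spatial smearing introduced by the skull.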
Load sample subject data
raw = mne.io.read_raw_fif(data_path + '/MEG/sample/sample_audvis_raw.fif')
raw = raw.pick_types(meg=False, eeg=True, eog=True, ecg=True, stim=True,
                     exclude=raw.info['bads']).load_data()
events = mne.find_events(raw)
raw.set_eeg_reference(projection=True).apply_proj()
Plot the raw data and CSD-transformed raw data:
raw_csd = mne.preprocessing.compute_current_source_density(raw)
raw.plot()
raw_csd.plot()
Also look at the power spectral densities:
raw.plot_psd()
raw_csd.plot_psd()
CSD can also be computed on Evoked (averaged) data. Here we epoch and average the data so we can demonstrate that.
event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
            'visual/right': 4, 'smiley': 5, 'button': 32}
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=.5,
                    preload=True)
evoked = epochs['auditory'].average()
First let's look at how CSD affects scalp topography:
times = np.array([-0.1, 0., 0.05, 0.1, 0.15])
evoked_csd = mne.preprocessing.compute_current_source_density(evoked)
evoked.plot_joint(title='Average Reference', show=False)
evoked_csd.plot_joint(title='Current Source Density')
CSD has parameters stiffness and lambda2 affecting smoothing and spline flexibility, respectively. Let's see how they affect the solution:
fig, ax = plt.subplots(4, 4)
fig.subplots_adjust(hspace=0.5)
fig.set_size_inches(10, 10)
for i, lambda2 in enumerate([0, 1e-7, 1e-5, 1e-3]):
    for j, m in enumerate([5, 4, 3, 2]):
        this_evoked_csd = mne.preprocessing.compute_current_source_density(
            evoked, stiffness=m, lambda2=lambda2)
        this...
Subsetting data. Subset variables (columns). For a DataFrame, basic indexing selects the columns (cf. the dictionaries of pure Python). Selecting a single column:
countries['area'] # single []
notebooks/pandas_03a_selecting_data.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
Remember that the same syntax can also be used to add a new column: df['new'] = .... We can also select multiple columns by passing a list of column names into []:
countries[['area', 'population']] # double [[]]
Subset observations (rows). Using [], slicing or boolean indexing accesses the rows. Slicing:
countries[0:4]
Boolean indexing (filtering). Often, you want to select rows based on a certain condition. This can be done with 'boolean indexing' (like a WHERE clause in SQL), and is comparable to NumPy. The indexer (or boolean mask) should be 1-dimensional and the same length as the thing being indexed.
countries['area'] > 100000
countries[countries['area'] > 100000]
countries[countries['population'] > 50]
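A self-contained sketch of the boolean indexing above, with a small made-up `countries` table (the values are illustrative only):

```python
import pandas as pd

# A small made-up table standing in for the `countries` DataFrame
countries = pd.DataFrame({
    'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
    'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London'],
    'area': [30510, 671308, 357050, 41526, 244820],  # km^2
    'population': [11.3, 64.3, 81.3, 16.9, 64.9],    # millions
})

mask = countries['area'] > 100000           # 1-D boolean Series, same length
print(countries[mask]['country'].tolist())  # ['France', 'Germany', 'United Kingdom']
```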
An overview of the possible comparison operations:

Operator | Description
-------- | -----------
`==` | Equal
`!=` | Not equal
`>` | Greater than
`>=` | Greater than or equal
`<` | Less than
`<=` | Less than or equal

and to combine multiple conditions:

Operator | Description
-------- | -----...
s = countries['capital']
s.isin?
s.isin(['Berlin', 'London'])
This can then be used to filter the dataframe with boolean indexing:
countries[countries['capital'].isin(['Berlin', 'London'])]
Let's say we want to select all data for which the capital starts with a 'B'. In Python, for a string, we could use the startswith method:
string = 'Berlin'
string.startswith('B')
In pandas, these are available on a Series through the str namespace:
countries['capital'].str.startswith('B')
For an overview of all string methods, see: https://pandas.pydata.org/pandas-docs/stable/reference/series.html#string-handling Exercises using the Titanic dataset
df = pd.read_csv("data/titanic.csv")
df.head()
<div class="alert alert-success"> <b>EXERCISE 1</b>: <ul> <li>Select all rows for male passengers and calculate the mean age of those passengers. Do the same for the female passengers.</li> </ul> </div>
# %load _solutions/pandas_03a_selecting_data1.py
# %load _solutions/pandas_03a_selecting_data2.py
# %load _solutions/pandas_03a_selecting_data3.py
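One possible solution sketch for the exercise above, shown on a tiny made-up sample instead of the real Titanic file:

```python
import pandas as pd

# Tiny made-up sample of the Titanic columns used here
df = pd.DataFrame({'Sex': ['male', 'female', 'male', 'female'],
                   'Age': [22.0, 38.0, 35.0, 27.0]})

# Boolean indexing on the condition, then the mean of the Age column
mean_age_male = df[df['Sex'] == 'male']['Age'].mean()
mean_age_female = df[df['Sex'] == 'female']['Age'].mean()
print(mean_age_male, mean_age_female)  # 28.5 32.5
```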
We will later see an easier way to calculate both averages at the same time with groupby. <div class="alert alert-success"> <b>EXERCISE 2</b>: <ul> <li>How many passengers older than 70 were on the Titanic?</li> </ul> </div>
# %load _solutions/pandas_03a_selecting_data4.py
# %load _solutions/pandas_03a_selecting_data5.py
<div class="alert alert-success"> <b>EXERCISE 3</b>: <ul> <li>Select the passengers that are between 30 and 40 years old.</li> </ul> </div>
# %load _solutions/pandas_03a_selecting_data6.py
<div class="alert alert-success"> <b>EXERCISE 4</b>: For a single string `name = 'Braund, Mr. Owen Harris'`, split this string (check the `split()` method of a string) and get the first element of the resulting list. <details><summary>Hints</summary> - No Pandas in this exercise, just standard Python. - The `split(...
name = 'Braund, Mr. Owen Harris'
# %load _solutions/pandas_03a_selecting_data7.py
<div class="alert alert-success"> <b>EXERCISE 5</b>: Convert the solution of the previous exercise to all strings of the `Name` column at once. Split the 'Name' column on the `,`, extract the first part (the surname), and add this as new column 'Surname'. <details><summary>Hints</summary> - Pandas uses the `str` a...
# %load _solutions/pandas_03a_selecting_data8.py
<div class="alert alert-success"> <b>EXERCISE 6</b>: <ul> <li>Select all passengers that have a surname starting with 'Williams'.</li> </ul> </div>
# %load _solutions/pandas_03a_selecting_data9.py
<div class="alert alert-success"> <b>EXERCISE 7</b>: <ul> <li>Select all rows for the passengers with a surname of more than 15 characters.</li> </ul> </div>
# %load _solutions/pandas_03a_selecting_data10.py
[OPTIONAL] more exercises. For the quick ones among you, here are some more exercises with larger dataframes of film data. These exercises are based on the PyCon tutorial of Brandon Rhodes (so all credit to him!) and the datasets he prepared for that. You can download these data from here: titles.csv and cast.csv ...
cast = pd.read_csv('data/cast.csv')
cast.head()
titles = pd.read_csv('data/titles.csv')
titles.head()
<div class="alert alert-success"> <b>EXERCISE 8</b>: <ul> <li>How many movies are listed in the titles dataframe?</li> </ul> </div>
# %load _solutions/pandas_03a_selecting_data11.py
<div class="alert alert-success"> <b>EXERCISE 9</b>: <ul> <li>What are the earliest two films listed in the titles dataframe?</li> </ul> </div>
# %load _solutions/pandas_03a_selecting_data12.py
<div class="alert alert-success"> <b>EXERCISE 10</b>: <ul> <li>How many movies have the title "Hamlet"?</li> </ul> </div>
# %load _solutions/pandas_03a_selecting_data13.py
<div class="alert alert-success"> <b>EXERCISE 11</b>: <ul> <li>List all of the "Treasure Island" movies from earliest to most recent.</li> </ul> </div>
# %load _solutions/pandas_03a_selecting_data14.py
<div class="alert alert-success"> <b>EXERCISE 12</b>: <ul> <li>How many movies were made from 1950 through 1959?</li> </ul> </div>
# %load _solutions/pandas_03a_selecting_data15.py
# %load _solutions/pandas_03a_selecting_data16.py
<div class="alert alert-success"> <b>EXERCISE 13</b>: <ul> <li>How many roles in the movie "Inception" are NOT ranked by an "n" value?</li> </ul> </div>
# %load _solutions/pandas_03a_selecting_data17.py
# %load _solutions/pandas_03a_selecting_data18.py
# %load _solutions/pandas_03a_selecting_data19.py
<div class="alert alert-success"> <b>EXERCISE 14</b>: <ul> <li>But how many roles in the movie "Inception" did receive an "n" value?</li> </ul> </div>
# %load _solutions/pandas_03a_selecting_data20.py
<div class="alert alert-success"> <b>EXERCISE 15</b>: <ul> <li>Display the cast of the "Titanic" (the most famous 1997 one) in their correct "n"-value order, ignoring roles that did not earn a numeric "n" value.</li> </ul> </div>
# %load _solutions/pandas_03a_selecting_data21.py
<div class="alert alert-success"> <b>EXERCISE 16</b>: <ul> <li>List the supporting roles (having n=2) played by Brad Pitt in the 1990s, in order by year.</li> </ul> </div>
# %load _solutions/pandas_03a_selecting_data22.py
(1) EX. Using the list of words you produced by splitting 'new_string', create a new list that contains only the words whose last letter is "y". We can combine list comprehension and the string method .endswith(), both of which we learned about on Wednesday, to create a new list that keeps only the element...
word_list_y = [word for word in new_string_list if word.endswith('y')]
#print the new list
word_list_y
01-IntroToPython/00-PythonBasics_ExerciseSolutions.ipynb
lknelson/text-analysis-2017
bsd-3-clause
(2) EX. Create a new list that contains the first letter of each word. We can again use list comprehension, combined with string slicing, to produce a new list that contains only the first letter of each word. Remember in Python counting starts at 0.
word_list_firstletter = [word[0] for word in new_string_list]
#print our new list
word_list_firstletter
(3) EX. Create a new list that contains only words longer than two letters. We can, again, use list comprehension, the 'len' function, and the greater-than comparison operator '>' to filter and keep words longer than two letters. Note that '>' is strictly greater than. If we wanted to include words with 2 letters ...
word_list_long = [n for n in new_string_list if len(n) > 2]
#print new list
word_list_long
Accessing Data You can access DataFrame data using familiar Python dict/list operations:
cities = pd.DataFrame({'City Name': city_names, 'Population': population})

print(type(cities['City Name']))
cities['City Name']

print(type(cities["City Name"][1]))
cities["City Name"][1]

print(type(cities[0:2]))
cities[0:2]
machine learning/google-ml-crash-course/pandas/pandas_intro.ipynb
yavuzovski/playground
gpl-3.0
Manipulating Data You may apply Python's basic arithmetic operations to Series. For example:
population / 1000

import numpy as np
np.log(population)

cities['Area square miles'] = pd.Series([46.87, 176.53, 97.92])
cities['Population density'] = cities['Population'] / cities['Area square miles']
cities

population.apply(lambda val: val > 1000000)
Exercise #1 Modify the cities table by adding a new boolean column that is True if and only if both of the following are True: The city is named after a saint. The city has an area greater than 50 square miles. Note: Boolean Series are combined using the bitwise, rather than the traditional boolean, operators. For ex...
cities['is saint and wide'] = ((cities['Area square miles'] > 50) &
                               cities['City Name'].apply(lambda name: name.startswith("San")))
cities
Indexes Both Series and DataFrame objects also define an index property that assigns an identifier value to each Series item or DataFrame row. By default, at construction, pandas assigns index values that reflect the ordering of the source data. Once created, the index values are stable; that is, they do not change wh...
city_names.index
cities.index
cities.reindex([2, 0, 1])
Reindexing is a great way to shuffle (randomize) a DataFrame. In the example below, we take the index, which is array-like, and pass it to NumPy's random.permutation function, which returns a shuffled copy of its values. Calling reindex with this shuffled array causes the DataFrame rows to be shuffled in the same way.
cities.reindex(np.random.permutation(cities.index))
Exercise #2 The reindex method allows index values that are not in the original DataFrame's index values. Try it and see what happens if you use such values! Why do you think this is allowed?
cities.reindex([4, 2, 1, 3, 0])
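A minimal sketch of what happens with an unknown index label (the toy city data below is made up for the demo): pandas does not raise, it fills the missing row with NaN, which is convenient when reindexing against an external list of labels that may not all be present.

```python
import pandas as pd

cities = pd.DataFrame({'City Name': ['San Francisco', 'San Jose', 'Sacramento'],
                       'Population': [852469, 1015785, 485199]})

# Index label 3 does not exist in the original frame: pandas fills the row with NaN
out = cities.reindex([2, 0, 3])
print(out)
print(out.loc[3].isna().all())  # True: the unknown label yields a missing row
```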
1. Creating vectors. The simplest way to create a vector in NumPy is to specify it explicitly with numpy.array(list, dtype=None, ...). The list parameter takes an iterable object from which the vector can be created; for example, a list of numbers. The dtype parameter sets the type of the vector's values, for example...
a = np.array([1, 2, 3, 4])
print 'Vector:\n', a

b = np.array([1, 2, 3, 4, 5], dtype=float)
print 'Float vector:\n', b

c = np.array([True, False, True], dtype=bool)
print 'Boolean vector:\n', c
1 Mathematics and Python/Lectures notebooks/9 vector operations/vector_operations.ipynb
maxis42/ML-DA-Coursera-Yandex-MIPT
mit
The type of a vector's values can be inspected with numpy.ndarray.dtype:
print 'Type of the boolean vector:\n', c.dtype
Another way to create a vector is the function numpy.arange([start, ]stop, [step, ]...), which produces a sequence of numbers of the given type from the interval [start, stop) with step step:
d = np.arange(start=10, stop=20, step=2)  # the last value is not included!
print 'Vector of numbers from 10 to 20 with step 2:\n', d

f = np.arange(start=0, stop=1, step=0.3, dtype=float)
print 'Float vector of numbers from 0 to 1 with step 0.3:\n', f
In essence, a vector in NumPy is a one-dimensional array, which matches the intuitive definition of a vector:
print c.ndim  # number of dimensions
print c.shape  # shape effectively gives the length of the vector
Note: a vector and a one-dimensional array are identical concepts in NumPy. Beyond that, there are also the notions of a column vector and a row vector, which, although they describe the same mathematical object, are two-dimensional arrays and have a different value of the shape field (in this case the field consists of two...
a = np.array([1, 2, 3])
b = np.array([6, 5, 4])
k = 2

print 'Vector a:', a
print 'Vector b:', b
print 'Number k:', k

print 'Sum of a and b:\n', a + b
print 'Difference of a and b:\n', a - b
print 'Elementwise product of a and b:\n', a * b
print 'Multiplying a vector by a number (done elementwise):\n', k * a
3. Vector norms. Let us recall some norms that can be defined on the space $\mathbb{R}^{n}$ and look at which libraries and functions compute them in NumPy. p-norm. The p-norm (Hölder norm) of a vector $x = (x_{1}, \dots, x_{n}) \in \mathbb{R}^{n}$ is computed by the formula: $$ \left\Vert x \right\Vert_...
from numpy.linalg import norm
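For a concrete check, a p-norm computed directly from the Hölder formula can be compared with numpy.linalg.norm (a small sketch, here with p = 3):

```python
import numpy as np
from numpy.linalg import norm

x = np.array([1.0, 2.0, -3.0])
p = 3
# Hölder p-norm straight from the formula: (sum |x_i|^p)^(1/p)
manual = np.sum(np.abs(x) ** p) ** (1.0 / p)
print(manual, norm(x, ord=p))  # the two values agree
```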
$\ell_{1}$ norm. The $\ell_{1}$ norm (also known as the Manhattan distance) of a vector $x = (x_{1}, \dots, x_{n}) \in \mathbb{R}^{n}$ is computed by the formula: $$ \left\Vert x \right\Vert_{1} = \sum_{i=1}^n \left| x_{i} \right|. $$ In the function numpy.linalg.norm(x, ord=None, ...) it corresponds to the parameter ord=1.
a = np.array([1, 2, -3])
print 'Vector a:', a
print 'L1 norm of vector a:\n', norm(a, ord=1)
$\ell_{2}$ norm. The $\ell_{2}$ norm (also known as the Euclidean norm) of a vector $x = (x_{1}, \dots, x_{n}) \in \mathbb{R}^{n}$ is computed by the formula: $$ \left\Vert x \right\Vert_{2} = \sqrt{\sum_{i=1}^n \left( x_{i} \right)^2}. $$ In the function numpy.linalg.norm(x, ord=None, ...) it corresponds to the parameter ord=2.
a = np.array([1, 2, -3])
print 'Vector a:', a
print 'L2 norm of vector a:\n', norm(a, ord=2)
For more details on which other norms (including matrix norms) can be computed, see the documentation. 4. Distances between vectors. For two vectors $x = (x_{1}, \dots, x_{n}) \in \mathbb{R}^{n}$ and $y = (y_{1}, \dots, y_{n}) \in \mathbb{R}^{n}$, the $\ell_{1}$ and $\ell_{2}$ distances are computed by the following formulas, respectively...
a = np.array([1, 2, -3])
b = np.array([-4, 3, 8])

print 'Vector a:', a
print 'Vector b:', b

print 'L1 distance between vectors a and b:\n', norm(a - b, ord=1)
print 'L2 distance between vectors a and b:\n', norm(a - b, ord=2)
The distance between vectors can also be computed with the function scipy.spatial.distance.cdist(XA, XB, metric='euclidean', p=2, ...) from SciPy, a library intended for scientific and engineering computations.
from scipy.spatial.distance import cdist
1 Mathematics and Python/Lectures notebooks/9 vector operations/vector_operations.ipynb
maxis42/ML-DA-Coursera-Yandex-MIPT
mit
scipy.spatial.distance.cdist(...) requires XA and XB to be at least two-dimensional. For this reason, to use this function we need to convert the vectors considered in this notebook into row vectors using the approaches discussed below. The parameters XA and XB are the original vector...
a = np.array([6, 3, -5])
b = np.array([-1, 0, 7])
print 'Vector a:', a
print 'Its shape:', a.shape
print 'Vector b:', b
print 'Its shape:', b.shape
a = a.reshape((1, 3))
b = b.reshape((1, 3))
print 'After applying the reshape method:\n'
print 'Row vector a:', a
print 'Its shape:', a.shape
print 'Row vec...
1 Mathematics and Python/Lectures notebooks/9 vector operations/vector_operations.ipynb
maxis42/ML-DA-Coursera-Yandex-MIPT
mit
Note that after applying this method, the shape of the resulting row vectors equals shape. The following method performs the same conversion but does not change the shape of the original vector. In NumPy, dummy axes can be added to an object's dimensions with np.newaxis. To understand how this...
d = np.array([3, 0, 8, 9, -10])
print 'Vector d:', d
print 'Its shape:', d.shape
print 'Vector d with newaxis --> row vector:\n', d[np.newaxis, :]
print 'Resulting shape:', d[np.newaxis, :].shape
print 'Vector d with newaxis --> column vector:\n', d[:, np.newaxis]
print 'Resulting shape:', d[:, np.n...
1 Mathematics and Python/Lectures notebooks/9 vector operations/vector_operations.ipynb
maxis42/ML-DA-Coursera-Yandex-MIPT
mit
Importantly, np.newaxis adds an axis of length 1 to the shape (which makes sense, since the number of elements must be preserved). Thus, the new axis should be inserted wherever a 1 is needed in the shape. Now let us compute distances with scipy.spatial.distance.cdist(...), using np.newaxis to conv...
a = np.array([6, 3, -5])
b = np.array([-1, 0, 7])
print 'Euclidean distance between a and b (via cdist):', cdist(a[np.newaxis, :], b[np.newaxis, :], metric='euclidean')
1 Mathematics and Python/Lectures notebooks/9 vector operations/vector_operations.ipynb
maxis42/ML-DA-Coursera-Yandex-MIPT
mit
This function also allows computing pairwise distances between sets of vectors. For example, suppose we have a matrix of size $m_{A} \times n$. We can view it as a description of $m_{A}$ observations in an $n$-dimensional space. Suppose we also have another similar matrix of size $m_{B} \times n$, ...
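For instance (a small sketch with made-up matrices), cdist applied to a $2 \times 3$ and a $3 \times 3$ matrix returns a $2 \times 3$ matrix of pairwise Euclidean distances:

```python
import numpy as np
from scipy.spatial.distance import cdist

# Two made-up sets of observations in 3-dimensional space
XA = np.array([[0., 5., -1.],
               [1., 1., 1.]])       # m_A = 2 observations
XB = np.array([[-4., 9., 3.],
               [0., 0., 0.],
               [2., 2., 2.]])       # m_B = 3 observations

# Entry (i, j) is the distance between row i of XA and row j of XB
D = cdist(XA, XB, metric='euclidean')
print(D.shape)  # (2, 3)
```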
a = np.array([0, 5, -1])
b = np.array([-4, 9, 3])
print 'Vector a:', a
print 'Vector b:', b
1 Mathematics and Python/Lectures notebooks/9 vector operations/vector_operations.ipynb
maxis42/ML-DA-Coursera-Yandex-MIPT
mit
The dot product in the space $\mathbb{R}^{n}$ for two vectors $x = (x_{1}, \dots, x_{n})$ and $y = (y_{1}, \dots, y_{n})$ is defined as: $$ \langle x, y \rangle = \sum_{i=1}^n x_{i} y_{i}. $$ The dot product of two vectors can be computed with the function numpy.dot(a, b, ...) or the method vec1.dot(vec2...
print 'Dot product of a and b (via function):', np.dot(a, b)
print 'Dot product of a and b (via method):', a.dot(b)
1 Mathematics and Python/Lectures notebooks/9 vector operations/vector_operations.ipynb
maxis42/ML-DA-Coursera-Yandex-MIPT
mit
The length of a vector $x = (x_{1}, \dots, x_{n}) \in \mathbb{R}^{n}$ is defined as the square root of its dot product with itself, that is, the length equals the Euclidean norm of the vector: $$ \left| x \right| = \sqrt{\langle x, x \rangle} = \sqrt{\sum_{i=1}^n x_{i}^2} = \left\Vert x \right\Vert_{2}. $$ Now that we know the distance between...
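This identity can be verified numerically (a minimal sketch, reusing the vector $a = (0, 5, -1)$ from above):

```python
import numpy as np
from numpy.linalg import norm

x = np.array([0, 5, -1])

# sqrt(<x, x>) coincides with the Euclidean (L2) norm of x
length = np.sqrt(np.dot(x, x))  # sqrt(0 + 25 + 1) = sqrt(26)

assert abs(length - norm(x)) < 1e-12
```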
cos_angle = np.dot(a, b) / norm(a) / norm(b)
print 'Cosine of the angle between a and b:', cos_angle
print 'The angle itself:', np.arccos(cos_angle)
1 Mathematics and Python/Lectures notebooks/9 vector operations/vector_operations.ipynb
maxis42/ML-DA-Coursera-Yandex-MIPT
mit
Initialize Path of files Parameters for ilastik Parameters for cell detection Parameters for blood vessel segmentation
# Set folder where data is stored
current_dir = os.path.abspath(os.getcwd())
folder = next(os.walk('.'))[1][0]
#print(os.path.exists(folder))  # testing

# ilastik parameters
classifier_file = folder + '/xbrain_vessel_seg_v7.ilp'
#print(classifier_file)  # testing
#print(os.path.exists(classifier_file)) #...
Demo/xbrain-demo-dvid.ipynb
nerdslab/xbrain
apache-2.0
Load in the data
# load data
dvid = DVIDRemote({
    "protocol": "http",
    "host": "172.19.248.41:8000/",
})
chan = dvid.get_channel('a3afee0bf807466c9b7c3b0bbfd1acbd', 'grayscale')
print(chan)
input_data = dvid.get_cutout(chan, 0, [0, 2], [0, 64], [390, 454])  # DVID here -- np.load(image_file)

# plot the 50th s...
Demo/xbrain-demo-dvid.ipynb
nerdslab/xbrain
apache-2.0
Ingest data and classifier into ilastik
# Compute time required for processing
start = time.time()

# Process the data to probability maps
probability_maps = xbrain.classify_pixel(input_data, classifier_file, threads=no_of_threads, ram=ram_size)

end = time.time()
print("\nElapsed time: %f minutes" % ((end - start)/60))
Demo/xbrain-demo-dvid.ipynb
nerdslab/xbrain
apache-2.0
Display the results of ilastik
# pull down the corresponding matrices
cell_prob_map = probability_maps[:, :, :, 2]
vessel_prob_map = probability_maps[:, :, :, 1]

print("cell_prob_map shape", cell_prob_map.shape)
ndp.plot(cell_prob_map, slice=50, cmap1='jet', alpha=0.5)

print("vessel_prob_map shape", vessel_prob_map.shape)
ndp.plot(vessel_prob_ma...
Demo/xbrain-demo-dvid.ipynb
nerdslab/xbrain
apache-2.0
Running a different package to test the new algorithm
# reload packages for testing new algorithms
# import importlib
# importlib.reload(xbrain)

# Compute time required for processing
start = time.time()

# cell detection
centroids, cell_map = xbrain.detect_cells(cell_prob_map, cell_probability_threshold, stopping_criterion, initial_template_size, dilation_size, max_no_c...
Demo/xbrain-demo-dvid.ipynb
nerdslab/xbrain
apache-2.0
Display results of new algorithm
# show results
print("Vessel Segmentation")
ndp.plot(input_data, vessel_map, slice=50, alpha=0.5)

print("Cell Segmentation")
ndp.plot(input_data, cell_map, slice=50, alpha=0.5)
Demo/xbrain-demo-dvid.ipynb
nerdslab/xbrain
apache-2.0
The directory where we store the data files:
basedir = "~/DataOceano/MyOcean/INSITU_GLO_NRT_OBSERVATIONS_013_030/monthly/" + str(year) + str(month).zfill(2) + '/'
basedir = os.path.expanduser(basedir)
PythonNotebooks/PlatformPlots/Read_drifter_data_2.ipynb
CopernicusMarineInsitu/INSTACTraining
mit
Simple plot Configuration We start by defining some options for the scatter plot:
* the range of temperature that will be shown;
* the colormap;
* the ticks to be put on the colorbar.
tempmin, tempmax = 5., 30.
cmaptemp = plt.cm.RdYlBu_r
normtemp = colors.Normalize(vmin=tempmin, vmax=tempmax)
tempticks = np.arange(tempmin, tempmax + 0.1, 2.5)
PythonNotebooks/PlatformPlots/Read_drifter_data_2.ipynb
CopernicusMarineInsitu/INSTACTraining
mit
Loop on the files We create a loop over the netCDF files located in our directory. Longitude, latitude and depth are read from every file, while the temperature is not always available. For the plot, we only take the data that have a depth dimension equal to 1.
fig = plt.figure(figsize=(12, 8))
nfiles_notemp = 0
filelist = sorted(glob.glob(basedir + '*.nc'))
for datafiles in filelist:
    with netCDF4.Dataset(datafiles) as nc:
        lon = nc.variables['LONGITUDE'][:]
        lat = nc.variables['LATITUDE'][:]
        depth = nc.variables['DEPH'][:]
        try:
            ...
PythonNotebooks/PlatformPlots/Read_drifter_data_2.ipynb
CopernicusMarineInsitu/INSTACTraining
mit
We also counted how many files don't have the temperature variable:
print 'Number of files: ' + str(len(filelist))
print 'Number of files without temperature: ' + str(nfiles_notemp)
PythonNotebooks/PlatformPlots/Read_drifter_data_2.ipynb
CopernicusMarineInsitu/INSTACTraining
mit
Plot on a map Configuration of the projection We choose a Mollweide projection centered on 0ºE, with a crude ('c') resolution for the coastline.
m = Basemap(projection='moll', lon_0=0, resolution='c')
PythonNotebooks/PlatformPlots/Read_drifter_data_2.ipynb
CopernicusMarineInsitu/INSTACTraining
mit
The rest of the configuration of the plot can be kept as it was. Loop on the files We can copy the part of the code used before. We need to add a line for the projection of the coordinates: lon, lat = m(lon, lat). After the loop we can add the coastline and the continents.
fig = plt.figure(figsize=(12, 8))
nfiles_notemp = 0
filelist = sorted(glob.glob(basedir + '*.nc'))
for datafiles in filelist:
    with netCDF4.Dataset(datafiles) as nc:
        lon = nc.variables['LONGITUDE'][:]
        lat = nc.variables['LATITUDE'][:]
        depth = nc.variables['DEPH'][:]
        lon, lat ...
PythonNotebooks/PlatformPlots/Read_drifter_data_2.ipynb
CopernicusMarineInsitu/INSTACTraining
mit
Basic workflow <a class="anchor" id="Basic-workflow"></a> The pyro.contrib.epidemiology module provides a modeling language for a class of stochastic discrete-time discrete-count compartmental models, together with a number of black box inference algorithms to perform joint inference on global parameters and latent v...
class SimpleSIRModel(CompartmentalModel):
    def __init__(self, population, recovery_time, data):
        compartments = ("S", "I")  # R is implicit.
        duration = len(data)
        super().__init__(compartments, duration, population)
        assert isinstance(recovery_time, float)
        assert recovery_time > ...
tutorial/source/epi_intro.ipynb
uber/pyro
apache-2.0
Note that we've stored data in the model. These models have a scikit-learn like interface: we instantiate a model class with data, then call a .fit_*() method to train, then call .predict() on a trained model. Note also that we've taken special care so that t can be either an integer or a slice. Under the hood, t is an...
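The integer-or-slice behavior of t can be mimicked with ordinary Python indexing (a generic sketch to illustrate the idea only; the names here are made up and this is not pyro's actual implementation):

```python
# A toy "time series" standing in for a latent state sequence.
series = list(range(10))

def read(series, t):
    # The same indexing expression serves both cases: an int t selects
    # one time step, a slice t selects a whole window of steps.
    return series[t]

assert read(series, 3) == 3                    # single step
assert read(series, slice(0, 3)) == [0, 1, 2]  # a window of steps
```

Writing model code against this dual interface lets the same transition function run one step at a time during generation and over whole windows during vectorized inference.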
population = 10000
recovery_time = 10.
empty_data = [None] * 90
model = SimpleSIRModel(population, recovery_time, empty_data)

# We'll repeatedly generate data until a desired number of infections is found.
pyro.set_rng_seed(20200709)
for attempt in range(100):
    synth_data = model.generate({"R0": 2.0})
    total_inf...
tutorial/source/epi_intro.ipynb
uber/pyro
apache-2.0
The generated data contains both global variables and time series, packed into tensors.
for key, value in sorted(synth_data.items()):
    print("{}.shape = {}".format(key, tuple(value.shape)))

plt.figure(figsize=(8, 4))
for name, value in sorted(synth_data.items()):
    if value.dim():
        plt.plot(value, label=name)
plt.xlim(0, len(empty_data) - 1)
plt.ylim(0.8, None)
plt.xlabel("time step")
plt.ylab...
tutorial/source/epi_intro.ipynb
uber/pyro
apache-2.0
Inference <a class="anchor" id="Inference"></a> Next let's recover estimates of the latent variables given only observations obs. To do this we'll create a new model instance from the synthetic observations.
obs = synth_data["obs"]
model = SimpleSIRModel(population, recovery_time, obs)
tutorial/source/epi_intro.ipynb
uber/pyro
apache-2.0