Hyperparameters Tune the following parameters: * Set epochs to the number of iterations until the network stops learning or starts overfitting * Set batch_size to the largest size your machine has memory for. Most people use common power-of-two sizes: * 64 * 128 * 256 * ... * Set keep_probability to the probability of keeping a node when using dropout
# TODO: Tune Parameters
epochs = 15
batch_size = 512
keep_probability = 0.75
image-classification/dlnd_image_classification.ipynb
kvr777/deep-learning
mit
Remove color fill
import numpy as np
num_categories = df.Category.unique().size
p = sns.countplot(data=df, y='Category', hue='islong', saturation=1,
                  xerr=7*np.arange(num_categories), edgecolor=(0, 0, 0),
                  linewidth=2, fill=False)

import numpy as np
num_categories = df.Category.unique().size
p = sns.countplot(data=df, y='Category', hue='islong', saturation=1,
                  xerr=7*np.arange(num_categories), edgecolor=(0, 0, 0),
                  linewidth=2)

sns.set(font_scale=1.25)
num_categories = df.Category.unique().size
p = sns.countplot(data=df, y='Category', hue='islong', saturation=1,
                  xerr=3*np.arange(num_categories), edgecolor=(0, 0, 0),
                  linewidth=2)

help(sns.set)

plt.rcParams['font.family'] = "cursive"
#sns.set(style="white", font_scale=1.25)
num_categories = df.Category.unique().size
p = sns.countplot(data=df, y='Category', hue='islong', saturation=1,
                  xerr=3*np.arange(num_categories), edgecolor=(0, 0, 0),
                  linewidth=2)

plt.rcParams['font.family'] = 'Times New Roman'
#sns.set_style({'font.family': 'Helvetica'})
sns.set(style="white", font_scale=1.25)
num_categories = df.Category.unique().size
p = sns.countplot(data=df, y='Category', hue='islong', saturation=1,
                  xerr=3*np.arange(num_categories), edgecolor=(0, 0, 0),
                  linewidth=2)

bg_color = (0.25, 0.25, 0.25)
sns.set(rc={"font.style": "normal",
            "axes.facecolor": bg_color,
            "figure.facecolor": bg_color,
            "text.color": "black",
            "xtick.color": "black",
            "ytick.color": "black",
            "axes.labelcolor": "black"})
#sns.set_style({'font.family': 'Helvetica'})
#sns.set(style="white", font_scale=1.25)
num_categories = df.Category.unique().size
p = sns.countplot(data=df, y='Category', hue='islong', saturation=1,
                  xerr=3*np.arange(num_categories), edgecolor=(0, 0, 0),
                  linewidth=2, palette="Dark2")
leg = p.get_legend()
leg.set_title("Duration")
labs = leg.texts
labs[0].set_text("Short")
labs[1].set_text("Long")
leg.get_title().set_color('white')
for lab in labs:
    lab.set_color('white')
p.axes.xaxis.label.set_text("Counts")
plt.text(0, 0, "Count Plot", fontsize=95, color='white', fontstyle='italic')
p.get_figure().savefig('../../figures/countplot.png')
visualizations/seaborn/notebooks/.ipynb_checkpoints/countplot-checkpoint.ipynb
apryor6/apryor6.github.io
mit
Model initialization The chromosphere model doesn't take any special initialization arguments, so the initialization is straightforward.
tm = ChromosphereModel()
notebooks/example_chromosphere_model.ipynb
hpparvi/PyTransit
gpl-2.0
Model use Evaluation The transit model can be evaluated using either a set of scalar parameters, a parameter vector (1D ndarray), or a parameter vector array (2D ndarray). The model flux is returned as a 1D ndarray in the first two cases, and a 2D ndarray in the last (one model per parameter vector). tm.evaluate_ps(k, t0, p, a, i, e=0, w=0) evaluates the model for a set of scalar parameters, where k is the radius ratio, t0 the zero epoch, p the orbital period, a the semi-major axis divided by the stellar radius, i the inclination in radians, e the eccentricity, and w the argument of periastron. Eccentricity and argument of periastron are optional, and omitting them defaults to a circular orbit. tm.evaluate_pv(pv) evaluates the model for a 1D parameter vector, or a 2D array of parameter vectors. In the first case, the parameter vector should be array-like with elements [k, t0, p, a, i, e, w]. In the second case, the parameter vectors should be stored in a 2D ndarray with shape (npv, 7) as [[k1, t01, p1, a1, i1, e1, w1], [k2, t02, p2, a2, i2, e2, w2], ... [kn, t0n, pn, an, in, en, wn]] The reason for the different options is that the model implementations may have optimisations that make the model evaluation for a set of parameter vectors much faster than computing them separately. This is especially the case for the OpenCL models.
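As a concrete sketch of the shapes involved, a parameter vector array can be built with NumPy; the values below are hypothetical and only illustrate the (npv, 7) layout described above:

```python
import numpy as np

# Hypothetical transit parameters in the order [k, t0, p, a, i, e, w].
pv = np.array([0.1, 0.0, 2.5, 8.0, 0.5 * np.pi, 0.0, 0.0])  # one vector, shape (7,)

# A parameter vector array: five copies with the radius ratio k varied.
pvp = np.tile(pv, (5, 1))               # shape (npv, 7) = (5, 7)
pvp[:, 0] = np.linspace(0.08, 0.12, 5)  # vary k across the vectors
```

Evaluating such an array with tm.evaluate_pv(pvp) would then return one model flux per row.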
def plot_transits(tm, fmt='k'):
    fig, axs = subplots(1, 3, figsize=(13, 3), constrained_layout=True, sharey=True)

    flux = tm.evaluate_ps(k, t0, p, a, i, e, w)
    axs[0].plot(tm.time, flux, fmt)
    axs[0].set_title('Individual parameters')

    flux = tm.evaluate_pv(pvp[0])
    axs[1].plot(tm.time, flux, fmt)
    axs[1].set_title('Parameter vector')

    flux = tm.evaluate_pv(pvp)
    axs[2].plot(tm.time, flux.T, 'k', alpha=0.2)
    axs[2].set_title('Parameter vector array')

    setp(axs[0], ylabel='Normalised flux')
    setp(axs, xlabel='Time [days]', xlim=tm.time[[0, -1]])

tm.set_data(times_sc)
plot_transits(tm)
notebooks/example_chromosphere_model.ipynb
hpparvi/PyTransit
gpl-2.0
Supersampling The transit model can be supersampled by setting the nsamples and exptimes arguments in set_data.
tm.set_data(times_lc, nsamples=10, exptimes=0.01)
plot_transits(tm)
notebooks/example_chromosphere_model.ipynb
hpparvi/PyTransit
gpl-2.0
Heterogeneous time series PyTransit allows for heterogeneous time series, that is, a single time series can contain several individual light curves (with, e.g., different time cadences and required supersampling rates) observed (possibly) in different passbands. If a time series contains several light curves, it also needs the light curve indices for each exposure. These are given through the lcids argument, which should be an array of integers. If the time series contains light curves observed in different passbands, the passband indices need to be given through the pbids argument as an integer array, one per light curve. Supersampling can also be defined on a per-light-curve basis by giving nsamples and exptimes as arrays with one value per light curve. For example, a set of three light curves, two observed in one passband and the third in another passband times_1 (lc = 0, pb = 0, sc) = [1, 2, 3, 4] times_2 (lc = 1, pb = 0, lc) = [3, 4] times_3 (lc = 2, pb = 1, sc) = [1, 5, 6] would be set up as tm.set_data(time = [1, 2, 3, 4, 3, 4, 1, 5, 6], lcids = [0, 0, 0, 0, 1, 1, 2, 2, 2], pbids = [0, 0, 1], nsamples = [ 1, 10, 1], exptimes = [0.1, 1.0, 0.1]) Example: two light curves with different cadences
times_1 = linspace(0.85, 1.0, 500)
times_2 = linspace(1.0, 1.15, 10)
times = concatenate([times_1, times_2])
lcids = concatenate([full(times_1.size, 0, 'int'), full(times_2.size, 1, 'int')])
nsamples = [1, 10]
exptimes = [0, 0.0167]

tm.set_data(times, lcids, nsamples=nsamples, exptimes=exptimes)
plot_transits(tm, 'k.-')
notebooks/example_chromosphere_model.ipynb
hpparvi/PyTransit
gpl-2.0
Now we can use interact to run plot_sample_stats with different values of n. Note: xlim sets the limits of the x-axis so the figure doesn't get rescaled as we vary n.
def sample_stat(sample):
    return sample.min()

slider = widgets.IntSlider(min=10, max=1000, value=100)
interact(plot_sample_stats, n=slider, xlim=fixed([55, 95]))
None
pycon2016/tutorials/computation_statistics/sampling.ipynb
rawrgulmuffins/presentation_notes
mit
Other sample statistics This framework works with any other quantity we want to estimate. By changing sample_stat, you can compute the SE and CI for any sample statistic. Exercise 1: Fill in sample_stat below with any of these statistics: Standard deviation of the sample. Coefficient of variation, which is the sample standard deviation divided by the sample mean. Min or Max Median (which is the 50th percentile) 10th or 90th percentile. Interquartile range (IQR), which is the difference between the 75th and 25th percentiles. NumPy array methods you might find useful include std, min, max, and percentile. Depending on the results, you might want to adjust xlim.
def sample_stat(sample):
    # TODO: replace the following line with another sample statistic
    #return sample.std()
    return numpy.percentile(sample, 50)

slider = widgets.IntSlider(min=10, max=1000, value=100)
interact(plot_sample_stats, n=slider, xlim=fixed([10, 90]))
None
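Another of the listed statistics, the interquartile range, can be written with numpy.percentile; the sample used for the sanity check below is made up for illustration:

```python
import numpy

def sample_stat(sample):
    # Interquartile range: difference between the 75th and 25th percentiles.
    q75, q25 = numpy.percentile(sample, [75, 25])
    return q75 - q25

# Quick sanity check on the integers 1..100 (IQR = 75.25 - 25.75 = 49.5).
iqr = sample_stat(numpy.arange(1, 101))
```

Swapping this definition into the cell above (and widening xlim if needed) lets interact explore the sampling distribution of the IQR.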
pycon2016/tutorials/computation_statistics/sampling.ipynb
rawrgulmuffins/presentation_notes
mit
What values change, and what remains the same? When running the one-sample t-test, p is always higher than 0.05, meaning that at the 5% significance level we fail to reject the null hypothesis that the sample mean equals the calculated population mean in all cases (for both populations). Regarding the samples: the means change, and the fewer the data points, the less accurate the representation of the population. T-values increase with sample size while p-values tend to zero, showing that the difference in means is due to the difference between the populations and not just to sampling variability.
#Change the population value for pop1 to 0.3
pop1 = np.random.binomial(10, 0.3, 10000)
pop2 = np.random.binomial(10, 0.5, 10000)

plt.hist(pop1, alpha=0.5, label='Population 1')
plt.hist(pop2, alpha=0.5, label='Population 2')
plt.legend(loc='upper right')
plt.show()

print(pop1.mean())
print(pop2.mean())
print(pop1.std())
print(pop2.std())

#Samples of the new pop1 and pop2
sample5 = np.random.choice(pop1, 100, replace=True)
sample6 = np.random.choice(pop2, 100, replace=True)

print(sample5.mean())
print(sample6.mean())
print(sample5.std())
print(sample6.std())

plt.hist(sample5, alpha=0.5, label='sample 5')
plt.hist(sample6, alpha=0.5, label='sample 6')
plt.legend(loc='upper right')
plt.show()

#Compare samples: calculate t-value & p-value.
from scipy.stats import ttest_ind
from scipy.stats import ttest_1samp

mean1 = pop1.mean()
mean2 = pop2.mean()
print(ttest_1samp(sample5, mean1))
print(ttest_1samp(sample6, mean2))
print(ttest_ind(sample6, sample5, equal_var=False))

#Then change the population value p for group 1 to 0.4, and do it again
pop1 = np.random.binomial(10, 0.4, 10000)
pop2 = np.random.binomial(10, 0.5, 10000)

plt.hist(pop1, alpha=0.5, label='Population 1')
plt.hist(pop2, alpha=0.5, label='Population 2')
plt.legend(loc='upper right')
plt.show()

print(pop1.mean())
print(pop2.mean())
print(pop1.std())
print(pop2.std())

#Samples of the new pop1 and pop2
sample5 = np.random.choice(pop1, 100, replace=True)
sample6 = np.random.choice(pop2, 100, replace=True)

print(sample5.mean())
print(sample6.mean())
print(sample5.std())
print(sample6.std())

plt.hist(sample5, alpha=0.5, label='sample 5')
plt.hist(sample6, alpha=0.5, label='sample 6')
plt.legend(loc='upper right')
plt.show()

#Compare samples: calculate t-value & p-value.
mean1 = pop1.mean()
mean2 = pop2.mean()
print(ttest_1samp(sample5, mean1))
print(ttest_1samp(sample6, mean2))
print(ttest_ind(sample6, sample5, equal_var=False))
Drill Central Limit Theorem.ipynb
borja876/Thinkful-DataScience-Borja
mit
What changes, and why? The t-value decreases in the second case (when p1=0.4, p2=0.5) and the p-value is much higher. The t-value decreases because the difference between the means is smaller, and the increase in the p-value shows that sampling variability accounts for a growing share of the observed difference.
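The dependence of t on the gap between the means can be read directly off the Welch t-statistic formula; the means and standard deviations below are illustrative numbers, not the drill's actual samples:

```python
import math

def welch_t(mean1, mean2, sd1, sd2, n1, n2):
    """Welch's t-statistic for two independent samples."""
    return (mean1 - mean2) / math.sqrt(sd1**2 / n1 + sd2**2 / n2)

# All else equal, a smaller difference in means gives a smaller |t|.
t_far = welch_t(3.0, 5.0, 1.5, 1.6, 100, 100)   # means two apart
t_near = welch_t(4.0, 5.0, 1.5, 1.6, 100, 100)  # means one apart
```

With the denominator fixed, halving the mean difference halves |t|, which is exactly the behavior observed when p1 moves from 0.3 to 0.4.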
#Change the distribution of your populations from binomial to a distribution of your choice
pop3 = np.random.standard_t(25, 10000)
pop4 = np.random.logistic(9, 2, 10000)

plt.hist(pop3, alpha=0.5, label='Population 3')
plt.hist(pop4, alpha=0.5, label='Population 4')
plt.legend(loc='upper right')
plt.show()

print(pop3.mean())
print(pop4.mean())
print(pop3.std())
print(pop4.std())

#Samples of the new pop3 and pop4.
sample7 = np.random.choice(pop3, 100, replace=True)
sample8 = np.random.choice(pop4, 100, replace=True)

print(sample7.mean())
print(sample8.mean())
print(sample7.std())
print(sample8.std())

plt.hist(sample7, alpha=0.5, label='sample 7')
plt.hist(sample8, alpha=0.5, label='sample 8')
plt.legend(loc='upper right')
plt.show()

#Compare samples: calculate t-value & p-value.
from scipy.stats import ttest_ind
from scipy.stats import ttest_1samp

mean1 = pop3.mean()
mean2 = pop4.mean()
print(ttest_1samp(sample7, mean1))
print(ttest_1samp(sample8, mean2))
print(ttest_ind(sample8, sample7, equal_var=False))
Drill Central Limit Theorem.ipynb
borja876/Thinkful-DataScience-Borja
mit
BERT Preprocessing with TF Text <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/text/guide/bert_preprocessing_guide"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/guide/bert_preprocessing_guide.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/guide/bert_preprocessing_guide.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/text/docs/guide/bert_preprocessing_guide.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Overview Text preprocessing is the end-to-end transformation of raw text into a model’s integer inputs. NLP models are often accompanied by several hundred (if not thousands of) lines of Python code for preprocessing text. Text preprocessing is often a challenge for models because: Training-serving skew. It becomes increasingly difficult to ensure that the preprocessing logic of the model's inputs is consistent at all stages of model development (e.g. pretraining, fine-tuning, evaluation, inference). Using different hyperparameters, tokenization, string preprocessing algorithms or simply packaging model inputs inconsistently at different stages could yield hard-to-debug and disastrous effects on the model. Efficiency and flexibility. While preprocessing can be done offline (e.g. by writing out processed outputs to files on disk and then reconsuming said preprocessed data in the input pipeline), this method incurs an additional file read and write cost. 
Preprocessing offline is also inconvenient if there are preprocessing decisions that need to happen dynamically. Experimenting with a different option would require regenerating the dataset. Complex model interface. Text models are much more understandable when their inputs are pure text. It's hard to understand a model when its inputs require an extra, indirect encoding step. Reducing the preprocessing complexity is especially appreciated for model debugging, serving, and evaluation. Additionally, simpler model interfaces also make it more convenient to try the model (e.g. inference or training) on different, unexplored datasets. Text preprocessing with TF.Text Using TF.Text's text preprocessing APIs, we can construct a preprocessing function that can transform a user's text dataset into the model's integer inputs. Users can package preprocessing directly as part of their model to alleviate the above-mentioned problems. This tutorial will show how to use TF.Text preprocessing ops to transform text data into inputs for the BERT model and inputs for the language masking pretraining task described in "Masked LM and Masking Procedure" of BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. The process involves tokenizing text into subword units, combining sentences, trimming content to a fixed size and extracting labels for the masked language modeling task. Setup Let's import the packages and libraries we need first.
!pip install -q -U tensorflow-text

import tensorflow as tf
import tensorflow_text as text
import functools
third_party/tensorflow-text/src/docs/guide/bert_preprocessing_guide.ipynb
nwjs/chromium.src
bsd-3-clause
Our data contains two text features and we can create an example tf.data.Dataset. Our goal is to create a function that we can supply to Dataset.map() for use in training.
examples = {
    "text_a": [
        b"Sponge bob Squarepants is an Avenger",
        b"Marvel Avengers"
    ],
    "text_b": [
        b"Barack Obama is the President.",
        b"President is the highest office"
    ],
}

dataset = tf.data.Dataset.from_tensor_slices(examples)
next(iter(dataset))
third_party/tensorflow-text/src/docs/guide/bert_preprocessing_guide.ipynb
nwjs/chromium.src
bsd-3-clause
Content Trimming The main input to BERT is a concatenation of two sentences. However, BERT requires inputs of a fixed size and shape, and our content may exceed that budget. We can tackle this by using a text.Trimmer to trim our content down to a predetermined size (once concatenated along the last axis). There are different text.Trimmer types which select content to preserve using different algorithms. text.RoundRobinTrimmer, for example, will allocate quota equally for each segment but may trim the ends of sentences. text.WaterfallTrimmer will trim starting from the end of the last sentence. For our example, we will use RoundRobinTrimmer, which selects items from each segment in a left-to-right manner.
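To build intuition for the round-robin strategy, here is a toy pure-Python sketch of equal quota allocation across segments (this is an illustrative re-implementation, not the TF Text code, which operates on batched RaggedTensors):

```python
def round_robin_trim(segments, max_total):
    """Toy round-robin trimming: hand out one slot per segment per pass,
    left to right, until the total budget is spent."""
    keep = [0] * len(segments)
    remaining = max_total
    while remaining > 0:
        progressed = False
        for idx, seg in enumerate(segments):
            if remaining == 0:
                break
            if keep[idx] < len(seg):
                keep[idx] += 1
                remaining -= 1
                progressed = True
        if not progressed:  # every segment exhausted
            break
    return [seg[:n] for seg, n in zip(segments, keep)]
```

For a budget of 5 over segments of lengths 4 and 2, this keeps 3 items from the first segment and both items from the second, mirroring how quota is shared equally before the longer segment absorbs the remainder.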
trimmer = text.RoundRobinTrimmer(max_seq_length=[_MAX_SEQ_LEN])
trimmed = trimmer.trim([segment_a, segment_b])
trimmed
third_party/tensorflow-text/src/docs/guide/bert_preprocessing_guide.ipynb
nwjs/chromium.src
bsd-3-clause
trimmed now contains the segments where the number of elements across a batch is 8 elements (when concatenated along axis=-1). Combining segments Now that we have the segments trimmed, we can combine them to get a single RaggedTensor. BERT uses special tokens to indicate the beginning ([CLS]) and end of a segment ([SEP]). We also need a RaggedTensor indicating which items in the combined Tensor belong to which segment. We can use text.combine_segments() to get both of these Tensors with special tokens inserted.
segments_combined, segments_ids = text.combine_segments(
    [segment_a, segment_b],
    start_of_sequence_id=_START_TOKEN,
    end_of_segment_id=_END_TOKEN)
segments_combined, segments_ids
third_party/tensorflow-text/src/docs/guide/bert_preprocessing_guide.ipynb
nwjs/chromium.src
bsd-3-clause
Choosing the Masked Value The methodology described in the original BERT paper for choosing the masked value is as follows: For mask_token_rate of the time, replace the item with the [MASK] token: "my dog is hairy" -> "my dog is [MASK]" For random_token_rate of the time, replace the item with a random word: "my dog is hairy" -> "my dog is apple" For 1 - mask_token_rate - random_token_rate of the time, keep the item unchanged: "my dog is hairy" -> "my dog is hairy." text.MaskValuesChooser encapsulates this logic and can be used for our preprocessing function. Here's an example of what MaskValuesChooser returns given a mask_token_rate of 80% and default random_token_rate:
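The three-way choice above can be sketched in plain Python to make the probabilities concrete (a toy illustration only, not the TF Text implementation; the vocabulary size and mask token id are made up):

```python
import random

def choose_masked_value(token, vocab_size, mask_token,
                        mask_token_rate=0.8, random_token_rate=0.1):
    """Toy version of the BERT masked-value choice for a single token."""
    r = random.random()
    if r < mask_token_rate:
        return mask_token                        # replace with [MASK]
    if r < mask_token_rate + random_token_rate:
        return random.randrange(vocab_size)      # replace with a random token
    return token                                 # keep the original token

tokens = [19, 7, 21, 20, 9, 8]
masked = [choose_masked_value(t, vocab_size=100, mask_token=99) for t in tokens]
```

With the default rates, roughly 80% of selected positions become the mask token, 10% become a random token, and 10% stay unchanged.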
input_ids = tf.ragged.constant([[19, 7, 21, 20, 9, 8], [13, 4, 16, 5], [15, 10, 12, 11, 6]])
mask_values_chooser = text.MaskValuesChooser(_VOCAB_SIZE, _MASK_TOKEN, 0.8)
mask_values_chooser.get_mask_values(input_ids)
third_party/tensorflow-text/src/docs/guide/bert_preprocessing_guide.ipynb
nwjs/chromium.src
bsd-3-clause
Padding Model Inputs Now that we have all the inputs for our model, the last step in our preprocessing is to package them into fixed 2-dimensional Tensors with padding and also generate a mask Tensor indicating which values are padding. We can use text.pad_model_inputs() to help us with this task.
# Prepare and pad combined segment inputs
input_word_ids, input_mask = text.pad_model_inputs(
    masked_token_ids, max_seq_length=_MAX_SEQ_LEN)
input_type_ids, _ = text.pad_model_inputs(
    segments_ids, max_seq_length=_MAX_SEQ_LEN)

# Prepare and pad masking task inputs
masked_lm_positions, masked_lm_weights = text.pad_model_inputs(
    masked_lm_positions, max_seq_length=_MAX_PREDICTIONS_PER_BATCH)
masked_lm_ids, _ = text.pad_model_inputs(
    masked_lm_ids, max_seq_length=_MAX_PREDICTIONS_PER_BATCH)

model_inputs = {
    "input_word_ids": input_word_ids,
    "input_mask": input_mask,
    "input_type_ids": input_type_ids,
    "masked_lm_ids": masked_lm_ids,
    "masked_lm_positions": masked_lm_positions,
    "masked_lm_weights": masked_lm_weights,
}
model_inputs
third_party/tensorflow-text/src/docs/guide/bert_preprocessing_guide.ipynb
nwjs/chromium.src
bsd-3-clause
Review Let's review what we have so far and assemble our preprocessing function. Here's what we have:
def bert_pretrain_preprocess(vocab_table, features):
    # Input is a string Tensor of documents, shape [batch, 1].
    text_a = features["text_a"]
    text_b = features["text_b"]

    # Tokenize segments to shape [num_sentences, (num_words)] each.
    tokenizer = text.BertTokenizer(
        vocab_table,
        token_out_type=tf.int64)
    segments = [tokenizer.tokenize(text).merge_dims(1, -1)
                for text in (text_a, text_b)]

    # Truncate inputs to a maximum length.
    trimmer = text.RoundRobinTrimmer(max_seq_length=6)
    trimmed_segments = trimmer.trim(segments)

    # Combine segments, get segment ids and add special tokens.
    segments_combined, segment_ids = text.combine_segments(
        trimmed_segments,
        start_of_sequence_id=_START_TOKEN,
        end_of_segment_id=_END_TOKEN)

    # Apply dynamic masking task.
    masked_input_ids, masked_lm_positions, masked_lm_ids = (
        text.mask_language_model(
            segments_combined,
            random_selector,
            mask_values_chooser,
        )
    )

    # Prepare and pad combined segment inputs
    input_word_ids, input_mask = text.pad_model_inputs(
        masked_input_ids, max_seq_length=_MAX_SEQ_LEN)
    input_type_ids, _ = text.pad_model_inputs(
        segment_ids, max_seq_length=_MAX_SEQ_LEN)

    # Prepare and pad masking task inputs
    masked_lm_positions, masked_lm_weights = text.pad_model_inputs(
        masked_lm_positions, max_seq_length=_MAX_PREDICTIONS_PER_BATCH)
    masked_lm_ids, _ = text.pad_model_inputs(
        masked_lm_ids, max_seq_length=_MAX_PREDICTIONS_PER_BATCH)

    model_inputs = {
        "input_word_ids": input_word_ids,
        "input_mask": input_mask,
        "input_type_ids": input_type_ids,
        "masked_lm_ids": masked_lm_ids,
        "masked_lm_positions": masked_lm_positions,
        "masked_lm_weights": masked_lm_weights,
    }
    return model_inputs
third_party/tensorflow-text/src/docs/guide/bert_preprocessing_guide.ipynb
nwjs/chromium.src
bsd-3-clause
We previously constructed a tf.data.Dataset and we can now use our assembled preprocessing function bert_pretrain_preprocess() in Dataset.map(). This allows us to create an input pipeline that transforms our raw string data into integer inputs and feeds them directly into our model.
dataset = tf.data.Dataset.from_tensors(examples)
dataset = dataset.map(functools.partial(
    bert_pretrain_preprocess, lookup_table))
next(iter(dataset))
third_party/tensorflow-text/src/docs/guide/bert_preprocessing_guide.ipynb
nwjs/chromium.src
bsd-3-clause
Applying a pretrained model In this tutorial, you will learn how to apply pyannote.audio models on an audio file, whose manual annotation is depicted below
# clone pyannote-audio Github repository and update ROOT_DIR accordingly
ROOT_DIR = "/Users/bredin/Development/pyannote/pyannote-audio"
AUDIO_FILE = f"{ROOT_DIR}/tutorials/assets/sample.wav"

from pyannote.database.util import load_rttm
REFERENCE = f"{ROOT_DIR}/tutorials/assets/sample.rttm"
reference = load_rttm(REFERENCE)["sample"]
reference
tutorials/applying_a_model.ipynb
pyannote/pyannote-audio
mit
Loading models from 🤗 hub Pretrained models are available on 🤗 Huggingface model hub and can be listed by looking for the pyannote-audio-model tag.
from huggingface_hub import HfApi
available_models = [m.modelId for m in HfApi().list_models(filter="pyannote-audio-model")]
available_models
tutorials/applying_a_model.ipynb
pyannote/pyannote-audio
mit
Let's load the speaker segmentation model...
from pyannote.audio import Model
model = Model.from_pretrained("pyannote/segmentation")
tutorials/applying_a_model.ipynb
pyannote/pyannote-audio
mit
... which consists of SincNet feature extraction (sincnet), LSTM sequence modeling (lstm), a few feed-forward layers (linear), and a final multi-label classifier:
model.summarize()
tutorials/applying_a_model.ipynb
pyannote/pyannote-audio
mit
More details about the model are provided by its specifications...
specs = model.specifications
specs
tutorials/applying_a_model.ipynb
pyannote/pyannote-audio
mit
... which can be understood as follows: duration = 5.0: the model ingests 5s-long audio chunks. Resolution.FRAME and len(classes) == 4: the model outputs a sequence of frame-wise 4-dimensional scores. Problem.MULTI_LABEL_CLASSIFICATION: for each frame, more than one speaker can be active at once. To apply the model on the audio file, we wrap it into an Inference instance:
from pyannote.audio import Inference
inference = Inference(model, step=2.5)
output = inference(AUDIO_FILE)
output
tutorials/applying_a_model.ipynb
pyannote/pyannote-audio
mit
For each of the 11 positions of the 5s window, the model outputs a 4-dimensional vector every 16ms (293 frames for 5 seconds), corresponding to the probabilities that each of (up to) 4 speakers is active.
output.data.shape
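The frame period implied by those numbers is simple arithmetic on the window length and frame count reported above (the variable names here are just for illustration):

```python
# 5-second windows, 293 frames per window (numbers quoted above).
window_duration = 5.0   # seconds
frames_per_window = 293
frame_step = window_duration / frames_per_window  # ~0.017 s per frame
```

This works out to roughly 17 ms per frame, consistent with the order of magnitude quoted in the text.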
tutorials/applying_a_model.ipynb
pyannote/pyannote-audio
mit
Processing a file from memory In case the audio file is not stored on disk, pipelines can also process audio provided as a {"waveform": ..., "sample_rate": ...} dictionary.
import torchaudio
waveform, sample_rate = torchaudio.load(AUDIO_FILE)
print(f"{type(waveform)=}")
print(f"{waveform.shape=}")
print(f"{waveform.dtype=}")

audio_in_memory = {"waveform": waveform, "sample_rate": sample_rate}
output = inference(audio_in_memory)
output
tutorials/applying_a_model.ipynb
pyannote/pyannote-audio
mit
Processing part of a file If needed, Inference can be used to process only part of a file:
from pyannote.core import Segment
output = inference.crop(AUDIO_FILE, Segment(10, 20))
output
tutorials/applying_a_model.ipynb
pyannote/pyannote-audio
mit
Mesoporous pore size distribution Let's start by analysing the mesoporous size distribution of some of our nitrogen physisorption samples. The MCM-41 sample should have a very well defined, singular pore size in the mesopore range, with the pores as open-ended cylinders. We can use a common method, relying on a description of the adsorbate in the pores based on the Kelvin equation and the thickness of the adsorbed layer. These methods are derivatives of the BJH (Barrett, Joyner and Halenda) method.
isotherm = next(i for i in isotherms_n2_77k if i.material == 'MCM-41')
print(isotherm.material)
results = pgc.psd_mesoporous(
    isotherm,
    pore_geometry='cylinder',
    verbose=True,
)
docs/examples/psd.ipynb
pauliacomi/pyGAPS
mit
The distribution is what we expected, a single narrow peak. Since we asked for extra verbosity, the function has generated a graph. The graph automatically sets a minimum limit of 1.5 angstrom, where the Kelvin equation methods break down. The result dictionary returned contains the x and y points of the graph. Depending on the sample, the distribution can be well defined or broad, single or multimodal, or, in the case of adsorbents without mesopores, not relevant at all. For example, using the Takeda 5A carbon, and specifying a slit pore geometry:
isotherm = next(i for i in isotherms_n2_77k if i.material == 'Takeda 5A')
print(isotherm.material)
result_dict_meso = pgc.psd_mesoporous(
    isotherm,
    psd_model='pygaps-DH',
    pore_geometry='slit',
    verbose=True,
)
docs/examples/psd.ipynb
pauliacomi/pyGAPS
mit
Now let's break down the available settings of the mesoporous PSD function. A psd_model parameter to select specific implementations of the methods, such as the BJH method or the DH method. A pore_geometry parameter can be used to specify the known pore geometry of the pore. The Kelvin equation parameters change appropriately. Classical models are commonly applied on the desorption branch of the isotherm. This is also the default, although the user can specify the adsorption branch to be used with the branch parameter. The function used for evaluating the layer thickness can be specified by the thickness_model parameter. Either a named internal model ('Halsey', 'Harkins/Jura', etc.) or a custom user function which takes pressure as an argument is accepted. If the user wants to use a custom Kelvin model, they can do so through the kelvin_model parameter. This must be a name of an internal model ('Kelvin', 'Kelvin-KJS', etc.) or a custom function. Below we use the adsorption branch and the Halsey thickness curve to look at the MCM-41 pores. We use the Kruk-Jaroniec-Sayari correction of the Kelvin model.
isotherm = next(i for i in isotherms_n2_77k if i.material == 'MCM-41')
print(isotherm.material)
result_dict = pgc.psd_mesoporous(
    isotherm,
    psd_model='DH',
    pore_geometry='cylinder',
    branch='ads',
    thickness_model='Halsey',
    kelvin_model='Kelvin-KJS',
    verbose=True,
)
docs/examples/psd.ipynb
pauliacomi/pyGAPS
mit
<div class="alert alert-info"> **Note:** If the user wants to customise the standard plots which are displayed, they are available for use in the `pygaps.graphing.calc_graphs` module </div> Microporous pore size distribution For microporous samples, we can use the psd_microporous function. The available model is an implementation of the Horvath-Kawazoe (HK) method. The HK model uses a list of parameters which describe the interaction between the adsorbate and the adsorbent. These should be selected on a per case basis by using the adsorbate_model and adsorbent_model keywords. If they are not specified, the function assumes a carbon model for the sample surface and takes the required adsorbate properties (magnetic susceptibility, polarizability, molecular diameter, surface density, liquid density and molar mass) from the isotherm adsorbate. The pore geometry is also assumed to be slit-like. Let's look at using the function on the carbon sample:
isotherm = next(i for i in isotherms_n2_77k if i.material == 'Takeda 5A')
print(isotherm.material)
result_dict_micro = pgc.psd_microporous(
    isotherm,
    psd_model='HK',
    verbose=True,
)
docs/examples/psd.ipynb
pauliacomi/pyGAPS
mit
We see that we could have a peak around 0.7 nm, but could use more adsorption data at low pressure for better resolution. It should be noted that the model breaks down with pores bigger than around 3 nm. The framework comes with other models for the surface, such as the Saito-Foley-derived oxide-ion model. Below is an attempt to use the HK method with these parameters for the UiO-66 sample and some user-specified parameters for the adsorbate interaction. We should not expect the results to be very accurate, due to the different surface properties and heterogeneity of the MOF.
adsorbate_params = {
    'molecular_diameter': 0.3,
    'polarizability': 1.76e-3,
    'magnetic_susceptibility': 3.6e-8,
    'surface_density': 6.71e+18,
    'liquid_density': 0.806,
    'adsorbate_molar_mass': 28.0134,
}

isotherm = next(i for i in isotherms_n2_77k if i.material == 'UiO-66(Zr)')
print(isotherm.material)
result_dict = pgc.psd_microporous(
    isotherm,
    psd_model='HK',
    material_model='AlSiOxideIon',
    adsorbate_model=adsorbate_params,
    verbose=True,
)
docs/examples/psd.ipynb
pauliacomi/pyGAPS
mit
Finally, other types of H-K modified models are also available, like the Rege-Yang adapted model (RY), or a Cheng-Yang modification of both H-K (HK-CY) and R-Y (RY-CY) models.
isotherm = next(i for i in isotherms_n2_77k if i.material == 'Takeda 5A')
print(isotherm.material)
result_dict_micro = pgc.psd_microporous(
    isotherm,
    psd_model='RY',
    verbose=True,
)
docs/examples/psd.ipynb
pauliacomi/pyGAPS
mit
Kernel fit pore size distribution The kernel fitting method is currently the most powerful method for pore size distribution calculations. It requires a DFT kernel, or a collection of previously simulated adsorption isotherms, which covers the entire pore range that we want to investigate. The calculation of the DFT kernel is currently not in the scope of this framework. The user can specify their own kernel, in a CSV format, which will be used for the isotherm fitting in the psd_dft function. A common DFT kernel is included with the framework, which is simulated with nitrogen on a carbon material and slit-like pores in the range of 0.4-10 nanometres. Let's run the fitting of this internal kernel on the carbon sample:
isotherm = next(i for i in isotherms_n2_77k if i.material == 'Takeda 5A')
result_dict_dft = {}
result_dict_dft = pgc.psd_dft(
    isotherm,
    branch='ads',
    kernel='DFT-N2-77K-carbon-slit',
    verbose=True,
    p_limits=None,
)
docs/examples/psd.ipynb
pauliacomi/pyGAPS
mit
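To build intuition for what the kernel fit does, here is a toy, pure-Python sketch (all names and numbers below are invented for illustration): the measured isotherm is approximated as a non-negative weighted sum of pre-computed single-pore isotherms, and the recovered weights form the pore size distribution. pyGAPS solves this with a proper least-squares fit over a full kernel; the brute-force grid search here is only illustrative.

```python
# Toy illustration of kernel fitting: the "measured" isotherm is assumed to be
# a weighted sum of pre-simulated single-pore isotherms, one per pore size.
# All data below is invented; real kernels come from DFT simulations.

# Loadings of two hypothetical single-pore isotherms at five pressure points
kernel_small_pore = [0.9, 1.0, 1.0, 1.0, 1.0]   # fills at very low pressure
kernel_large_pore = [0.1, 0.3, 0.6, 0.9, 1.0]   # fills gradually

# "Measured" isotherm built from known weights 0.7 and 0.3
measured = [0.7 * s + 0.3 * l
            for s, l in zip(kernel_small_pore, kernel_large_pore)]

def misfit(w_small, w_large):
    """Sum of squared residuals between the weighted kernels and the data."""
    return sum((w_small * s + w_large * l - m) ** 2
               for s, l, m in zip(kernel_small_pore, kernel_large_pore, measured))

# Brute-force search over non-negative weights on a coarse grid
best = min(((misfit(a / 100, b / 100), a / 100, b / 100)
            for a in range(101) for b in range(101)),
           key=lambda t: t[0])
print(best[1], best[2])  # -> 0.7 0.3, i.e. the recovered (tiny) distribution
```

The grid search recovers exactly the weights used to build the data, since the two kernel isotherms are linearly independent.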
The output is automatically smoothed using a b-spline method. Further (or less) smoothing can be specified by the bspline_order parameter. The higher the order, the more smoothing is applied. Specify "0" to return the data as-fitted.
isotherm = next(i for i in isotherms_n2_77k if i.material == 'Takeda 5A')
result_dict_dft = pgc.psd_dft(
    isotherm,
    bspline_order=5,
    verbose=True,
)
docs/examples/psd.ipynb
pauliacomi/pyGAPS
mit
Comparing all the PSD methods For comparison purposes, we will compare the pore size distributions obtained through all the methods above. The sample on which all methods are applicable is the Takeda carbon. We will first plot the data using the existing function plot, then use the graph returned to plot the remaining results.
from pygaps.graphing.calc_graphs import psd_plot

ax = psd_plot(
    result_dict_dft['pore_widths'],
    result_dict_dft['pore_distribution'],
    method='comparison',
    labeldiff='DFT',
    labelcum=None,
    left=0.4,
    right=8,
)
ax.plot(
    result_dict_micro['pore_widths'],
    result_dict_micro['pore_distribution'],
    label='microporous',
)
ax.plot(
    result_dict_meso['pore_widths'],
    result_dict_meso['pore_distribution'],
    label='mesoporous',
)
ax.legend(loc='best')
docs/examples/psd.ipynb
pauliacomi/pyGAPS
mit
Let's create a minimalist class that behaves as CoTeDe expects. It is essentially a dictionary of relevant variables, plus a property attrs holding some metadata.
class DummyDataset(object):
    """Minimalist data object that contains data and attributes"""

    def __init__(self):
        """Two dictionaries to store the data and attributes"""
        self.attrs = {}
        self.data = {}

    def __getitem__(self, key):
        """Return the requested item from the data"""
        return self.data[key]

    def keys(self):
        """Show the available variables in data"""
        return self.data.keys()

    @property
    def attributes(self):
        """Temporary requirement while Gui is refactoring CoTeDe.
        This will soon be unnecessary.
        """
        return self.attrs
docs/notebooks/GenericDataObject.ipynb
castelao/CoTeDe
bsd-3-clause
Let's create an empty data object.
mydata = DummyDataset()
docs/notebooks/GenericDataObject.ipynb
castelao/CoTeDe
bsd-3-clause
Let's define some metadata, such as the position and time at which the profile was measured.
mydata.attrs['datetime'] = datetime(2016, 6, 4)
mydata.attrs['latitude'] = 15
mydata.attrs['longitude'] = -38
print(mydata.attrs)
docs/notebooks/GenericDataObject.ipynb
castelao/CoTeDe
bsd-3-clause
Now let's create some data. Here I'll create pressure, temperature, and salinity. I'm using masked arrays, but plain arrays would also work. Here I'm making up these values, but in a real-world case we would be reading them from a netCDF file, an ASCII file, an SQL query, or whatever your data source is.
mydata.data['PRES'] = ma.fix_invalid([2, 6, 10, 21, 44, 79, 100, 150, 200,
                                      400, 410, 650, 1000, 2000, 5000])
mydata.data['TEMP'] = ma.fix_invalid([25.32, 25.34, 25.34, 25.31, 24.99,
                                      23.46, 21.85, 17.95, 15.39, 11.08,
                                      6.93, 7.93, 5.71, 3.58, np.nan])
mydata.data['PSAL'] = ma.fix_invalid([36.49, 36.51, 36.52, 36.53, 36.59,
                                      36.76, 36.81, 36.39, 35.98, 35.30,
                                      35.28, 34.93, 34.86, np.nan, np.nan])
docs/notebooks/GenericDataObject.ipynb
castelao/CoTeDe
bsd-3-clause
Let's check the available variables
mydata.keys()
docs/notebooks/GenericDataObject.ipynb
castelao/CoTeDe
bsd-3-clause
Let's check one of the variables, temperature:
mydata['TEMP']
docs/notebooks/GenericDataObject.ipynb
castelao/CoTeDe
bsd-3-clause
Now that we have our data and metadata in this object, CoTeDe can do its job. In this example, let's evaluate this fictitious profile using the EuroGOOS recommended QC tests. For that we can use ProfileQC() like:
pqced = ProfileQC(mydata, cfg='eurogoos')
docs/notebooks/GenericDataObject.ipynb
castelao/CoTeDe
bsd-3-clause
The returned object (pqced) has the same content as the original mydata. Let's check the variables again,
pqced.keys()
docs/notebooks/GenericDataObject.ipynb
castelao/CoTeDe
bsd-3-clause
But now there is a new property named 'flags', which is a dictionary with all the tests applied and the resulting flags. Those flags are grouped by variable.
pqced.flags.keys()
docs/notebooks/GenericDataObject.ipynb
castelao/CoTeDe
bsd-3-clause
Let's see which flags are available for temperature,
pqced.flags['TEMP'].keys()
docs/notebooks/GenericDataObject.ipynb
castelao/CoTeDe
bsd-3-clause
Let's check the flags for the test gradient conditional to the depth, as defined by EuroGOOS
pqced.flags['TEMP']['gradient_depthconditional']
docs/notebooks/GenericDataObject.ipynb
castelao/CoTeDe
bsd-3-clause
One means that the measurement was approved by this test. Nine means that the data was not available or not valid at that level. And zero means no QC was applied. For the gradient test it is not possible to evaluate the first or the last value (check the manual), so those measurements exist but were flagged zero. The overall flag is a special one that combines all other flags by taking the most critical assessment. If a single test identifies a problem and flags a measurement as 4 (bad data), the overall flag for that measurement will be 4 even if it passed all other tests. Therefore, a measurement with flag 1 (good value) was approved by all tests.
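The combination rule just described can be sketched in a few lines. The sketch treats a higher flag number as more critical, which matches the examples in the text; the real CoTeDe logic may handle missing data (flag 9) specially, so the test names and values below are only illustrative.

```python
# Sketch of deriving an "overall" flag from individual test flags.
# Flag convention (per the text): 0 = no QC, 1 = good, 4 = bad, 9 = missing.
# The overall flag keeps the most critical assessment at each level.

flags = {
    'global_range': [1, 1, 1, 1, 9],
    'gradient':     [0, 1, 4, 1, 0],
}

overall = [max(values) for values in zip(*flags.values())]
print(overall)  # -> [1, 1, 4, 1, 9]
```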
pqced.flags['PSAL']['overall']
docs/notebooks/GenericDataObject.ipynb
castelao/CoTeDe
bsd-3-clause
Job Queues Job Queues are one of the key classes of the library. You place jobs in them, and they run the jobs and retrieve the data. You do not have to worry about where exactly things are run or how they are retrieved; everything is abstracted away and already adapted to the specific clusters that we are using. In one line you can switch clusters or execute everything locally instead. Adapting the library to a new cluster should take very little time (~10 lines of code). Defining several job queue configs: local, multiprocess local, and several clusters. NB: <span style="color:red">SSH usage: </span>To use plafrim, you <b>must have a working entry 'plafrim-ext'</b> in your .ssh/config. For the other clusters (avakas or anyone), if you don't have a corresponding entry, you should provide your username. You will then be asked for your password and whether you want to create a key and export it to the cluster to further automate the connection.
jq_cfg_local = {'jq_type': 'local'}

virtualenv = 'test_py3'  # by default root python. ex: virtualenv = 'test_xp_man' for venv in ~/virtualenvs/test_xp_man

jq_cfg_plafrim = {'jq_type': 'plafrim',
                  'modules': ['slurm', 'language/python/3.5.2'],
                  'virtual_env': virtualenv,
                  'requirements': [pip_arg_xp_man],
                  #'username': 'schuelle',
                  }

jq_cfg_avakas = {'jq_type': 'avakas',
                 'modules': ['torque', 'maui', 'python3/3.6.0'],
                 'without_epilogue': True,
                 #'username': 'wschueller',
                 'virtual_env': virtualenv,
                 #'requirements': [pip_arg_xp_man],
                 # IMPORTANT: installing on avakas through github and https is
                 # broken due to the git version being too old. You have to
                 # install manually and via SSH...
                 }

jq_cfg_anyone = {'jq_type': 'anyone',
                 'modules': [],
                 'virtual_env': 'test_279',
                 #'requirements': [pip_arg_xp_man],
                 'hostname': 'cluster_roma',
                 }

jq_cfg_docker = {'jq_type': 'slurm',
                 'modules': [],
                 #'virtual_env': virtualenv,
                 #'requirements': [pip_arg_xp_man],
                 'ssh_cfg': {'username': 'root',
                             'hostname': '172.19.0.2',
                             'password': 'dockerslurm'},
                 }

jq_cfg_local_multiprocess = {'jq_type': 'local_multiprocess',
                             #'nb_process': 4,  # default value: number of CPUs on the local machine
                             }
notebook/JobQueues.ipynb
wschuell/experiment_manager
agpl-3.0
The requirements section tells the job queue to install a version of the library on the cluster if it does not exist there yet. You can add other libraries, or add them for specific jobs. By default, virtual_env is set to None, meaning that everything runs and requirements are installed in the root Python interpreter. If you provide a < name > for the virtual_env attribute, it will search for a virtualenv in ~/virtualenvs/< name > . The pip_arg_xp_man refers here to the pip command necessary to install the library on the clusters, which is needed to run the jobs. You can use the same syntax to automatically update your own software to a given commit or branch of your own git repository. You can choose below which one of the job queue configurations you want to use. The job queue object is initialized under the variable name jq.
jq_cfg = jq_cfg_local_multiprocess
jq = xp_man.job_queue.get_jobqueue(**jq_cfg)
print(jq.get_status_string())
notebook/JobQueues.ipynb
wschuell/experiment_manager
agpl-3.0
Jobs Jobs are the objects that need to be executed. Here we will use a simple type of job, ExampleJob. It goes through a loop of 24 steps, prints the value of the counter variable, waits a random time between 1 and 2 seconds between steps, and at the end saves the value in a file < job.descr >data.dat Other types of jobs, and defining your own job classes as subclasses of the root Job class, will be explained in another notebook. However, there is a documented template provided with the library, found in experiment_manager/job/template_job.py We define job configurations, create the job objects, and add them to the job queue jq.
job_cfg = {
    'estimated_time': 120,  # in seconds
    #'virtual_env': 'test',
    #'requirements': [],
    # ...,
}
job = xp_man.job.ExampleJob(**job_cfg)
jq.add_job(job)  # of course, you can add as many jobs as you want, like in the next cell
print(jq.get_status_string())

for i in range(20):
    job_cfg_2 = {
        'descr': str(i),  # a description for the example job
        'estimated_time': 120,
    }
    job = xp_man.job.ExampleJob(**job_cfg_2)
    jq.add_job(job)
print(jq.get_status_string())
notebook/JobQueues.ipynb
wschuell/experiment_manager
agpl-3.0
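The behavior of ExampleJob, as described above, can be sketched stand-alone. This is NOT the actual class from experiment_manager; the function name and signature below are invented, and the random wait is made configurable so the sketch runs instantly.

```python
import os
import random
import tempfile

# Stand-alone sketch of what the text says ExampleJob does: loop over 24
# steps, wait a random time between steps, then save the final counter
# value to a file named <descr>data.dat.

def run_example_job(descr, out_dir, n_steps=24, max_wait=0.0):
    counter = 0
    for step in range(n_steps):
        counter += 1
        # a real job would print(counter) and time.sleep(random.uniform(1, 2));
        # max_wait=0.0 keeps this sketch fast
        _ = random.uniform(0, max_wait)
    path = os.path.join(out_dir, descr + 'data.dat')
    with open(path, 'w') as f:
        f.write(str(counter))
    return path

path = run_example_job('0', tempfile.mkdtemp())
print(open(path).read())  # -> 24
```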
The last step is to update the queue. One update will check the current status of each job attached to jq and process its next step, whether that is sending it to the cluster, retrieving it, unpacking it, etc.
#jq.ssh_session.reconnect()
jq.update_queue()
notebook/JobQueues.ipynb
wschuell/experiment_manager
agpl-3.0
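Conceptually, each queue update advances a small per-job state machine. The sketch below is purely illustrative (the state names and transition table are invented); the real JobQueue classes additionally handle SSH transfer, schedulers, and error states.

```python
# Minimal sketch of the state machine behind update_queue: each update
# moves every unfinished job one step closer to completion.

NEXT_STATE = {
    'pending':   'submitted',   # send the job to the cluster
    'submitted': 'running',     # scheduler picked it up
    'running':   'retrieved',   # fetch the results
    'retrieved': 'done',        # unpack and finalize
}

def update_queue(jobs):
    """Advance every job that is not yet done by one step."""
    for job in jobs:
        if job['state'] != 'done':
            job['state'] = NEXT_STATE[job['state']]

jobs = [{'state': 'pending'}, {'state': 'running'}]
update_queue(jobs)
print([j['state'] for j in jobs])  # -> ['submitted', 'retrieved']
```

Repeating updates until every job reaches 'done' is what auto_finish_queue (shown next) automates.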
You can tell jq to automatically do updates until all jobs are done or in error status:
jq.auto_finish_queue()
notebook/JobQueues.ipynb
wschuell/experiment_manager
agpl-3.0
Define the non-orthogonal curvilinear coordinates. Here psi=(u, v) are the new coordinates and rv is the position vector $$ \vec{r} = u \mathbf{i} + v\left(1- \frac{\sin 2u}{10} \right) \mathbf{j} $$ where $\mathbf{i}$ and $\mathbf{j}$ are the Cartesian unit vectors in $x$- and $y$-directions, respectively.
u = sp.Symbol('x', real=True, positive=True)
v = sp.Symbol('y', real=True)
psi = (u, v)
rv = (u, v*(1 - sp.sin(2*u)/10))
binder/non-orthogonal-poisson.ipynb
spectralDNS/shenfun
bsd-2-clause
Now choose basis functions and create the tensor product space. Notice that one has to use a complex Fourier space and not a real one, because the integral measure is a function of u.
N = 20
#B0 = FunctionSpace(N, 'C', bc=(0, 0), domain=(0, 2*np.pi))
B0 = FunctionSpace(N, 'F', dtype='D', domain=(0, 2*np.pi))
B1 = FunctionSpace(N, 'L', bc=(0, 0), domain=(-1, 1))
T = TensorProductSpace(comm, (B0, B1), dtype='D',
                       coordinates=(psi, rv,
                                    sp.Q.negative(sp.sin(2*u) - 10) & sp.Q.negative(sp.sin(2*u)/10 - 1)))
p = TrialFunction(T)
q = TestFunction(T)
b = T.coors.get_covariant_basis()
T.coors.sg
sp.Matrix(T.coors.get_covariant_metric_tensor())
Math(T.coors.latex_basis_vectors(symbol_names={u: 'u', v: 'v'}))
binder/non-orthogonal-poisson.ipynb
spectralDNS/shenfun
bsd-2-clause
Plot the mesh to see the domain.
mesh = T.local_cartesian_mesh()
x, y = mesh
plt.figure(figsize=(10, 4))
for i, (xi, yi) in enumerate(zip(x, y)):
    plt.plot(xi, yi, 'b')
    plt.plot(x[:, i], y[:, i], 'r')
binder/non-orthogonal-poisson.ipynb
spectralDNS/shenfun
bsd-2-clause
Print the Laplace operator in curvilinear coordinates. We use replace to simplify the expression.
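For reference, what is being evaluated here is the standard textbook form of the Laplacian in general coordinates $(u^1, u^2)$, with $g$ the determinant of the covariant metric tensor and $g^{ij}$ the contravariant metric (summation over repeated indices):

```latex
\nabla^2 p = \frac{1}{\sqrt{g}}\,
             \frac{\partial}{\partial u^{i}}
             \left( \sqrt{g}\, g^{ij}\,
             \frac{\partial p}{\partial u^{j}} \right)
```

For the wavy coordinates defined above, $\sqrt{g} = 1 - \sin(2u)/10$, which is why the expression printed by the code is simplified via replacements involving that factor.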
dp = div(grad(p))
g = sp.Symbol('g', real=True, positive=True)
replace = [(1 - sp.sin(2*u)/10, sp.sqrt(g)),
           (sp.sin(2*u) - 10, -10*sp.sqrt(g)),
           (5*sp.sin(2*u) - 50, -50*sp.sqrt(g))]
Math((dp*T.coors.sg**2).tolatex(funcname='p', symbol_names={u: 'u', v: 'v'}, replace=replace))
binder/non-orthogonal-poisson.ipynb
spectralDNS/shenfun
bsd-2-clause
Solve Poisson's equation. First define a manufactured solution and assemble the right hand side
ue = sp.sin(2*u)*(1 - v**2)
f = (div(grad(p))).tosympy(basis=ue, psi=psi)
fj = Array(T, buffer=f*T.coors.sg)
f_hat = Function(T)
f_hat = inner(q, fj, output_array=f_hat)
binder/non-orthogonal-poisson.ipynb
spectralDNS/shenfun
bsd-2-clause
Then assemble the left hand side and solve using a generic 2D solver
M = inner(q, div(grad(p))*T.coors.sg)
#M = inner(grad(q*T.coors.sg), -grad(p))
u_hat = Function(T)
Sol1 = Solver2D(M)
u_hat = Sol1(f_hat, u_hat)
uj = u_hat.backward()
uq = Array(T, buffer=ue)
print('Error =', np.linalg.norm(uj - uq))
for i in range(len(M)):
    print(len(M[i].mats[0].keys()), len(M[i].mats[1].keys()),
          M[i].mats[0].measure, M[i].mats[1].measure)
binder/non-orthogonal-poisson.ipynb
spectralDNS/shenfun
bsd-2-clause
Plot the solution in the wavy Cartesian domain
xx, yy = T.local_cartesian_mesh()
plt.figure(figsize=(12, 4))
plt.contourf(xx, yy, uj.real)
plt.colorbar()
binder/non-orthogonal-poisson.ipynb
spectralDNS/shenfun
bsd-2-clause
Inspect the sparsity pattern of the generated matrix on the left hand side
from matplotlib.pyplot import spy

plt.figure()
spy(Sol1.mat, markersize=0.1)

from scipy.sparse.linalg import eigs

mats = inner(q*T.coors.sg, -div(grad(p)))
Sol1 = Solver2D(mats)
BB = inner(p, q*T.coors.sg)
Sol2 = Solver2D([BB])
f = eigs(Sol1.mat, k=20, M=Sol2.mat, which='LM', sigma=0)
mats
l = 10
u_hat = Function(T)
u_hat[:, :-2] = f[1][:, l].reshape(T.dims())
plt.contourf(xx, yy, u_hat.backward().real, 100)
plt.colorbar()
binder/non-orthogonal-poisson.ipynb
spectralDNS/shenfun
bsd-2-clause
We've successfully loaded our data, but there are still a couple of preprocessing steps to go through first. Specifically, we're going to: Change the row labels from dcids to names for readability. Change the column name "dc/e9gftzl2hm8h9" to the more human-readable "Commute_Time" The raw commute time values from Data Commons show the total number of minutes spent commuting by everyone in the city. Let's instead look at the average commute time for a single person, which we'll get by dividing the raw commute time (Commute_Time) by population size (Count_Person) Similarly, we'll get a Percent_NoHealthInsurance by dividing the count of people without health insurance (Count_Person_NoHealthInsurance) by population size. To perform classification, we need to convert our obesity rate data into labels. In this lesson, we'll look at binary classification, and will split our cities into "Low obesity rate" (label 0, obesity% < 30%) and "High obesity rate" (label 1, obesity% >= 30%) categories.
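On made-up numbers for a single hypothetical city, the arithmetic of these steps looks like this (all values below are invented; the real computation, shown next, operates on the full DataFrame):

```python
# Sketch of the preprocessing rules on invented numbers for one city.

total_commute_minutes = 2_500_000   # raw value: total minutes for everyone
count_person = 100_000
count_no_insurance = 12_000
percent_person_obesity = 31.5

avg_commute = total_commute_minutes / count_person      # minutes per person
pct_no_insurance = count_no_insurance / count_person    # fraction uninsured
label = int(percent_person_obesity >= 30.0)             # 1 = high obesity rate

print(avg_commute, pct_no_insurance, label)  # -> 25.0 0.12 1
```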
# Make Row Names More Readable
# --- First, we'll copy the dcids into their own column
# --- Next, we'll get the "name" property of each dcid
# --- Then add the returned dictionary to our data frame as a new column
# --- Finally, we'll set this column as the new index
df = raw_features_df.copy(deep=True)
df['DCID'] = df.index
city_name_dict = datacommons.get_property_values(city_dcids, 'name')
city_name_dict = {key: value[0] for key, value in city_name_dict.items()}
df['City'] = pd.Series(city_name_dict)
df.set_index('City', inplace=True)

# Rename column "dc/e9gftzl2hm8h9" to "Commute_Time"
df.rename(columns={"dc/e9gftzl2hm8h9": "Commute_Time"}, inplace=True)

# Convert commute time to an average per person
avg_commute_time = df["Commute_Time"]/df["Count_Person"]
df["Commute_Time"] = avg_commute_time

# Convert Count of No Health Insurance to Percentage
percent_noHealthInsurance = df["Count_Person_NoHealthInsurance"]/df["Count_Person"]
df["Percent_NoHealthInsurance"] = percent_noHealthInsurance

# Create labels based on the Obesity rate of each city
# --- Percent_Person_Obesity < 30 will be Label 0
# --- Percent_Person_Obesity >= 30 will be Label 1
df["Label"] = df['Percent_Person_Obesity'] >= 30.0
df["Label"] = df["Label"].astype(int)

# Display results
display(df)
notebooks/intro_data_science/Classification_and_Model_Evaluation.ipynb
datacommonsorg/api-python
apache-2.0
Now that we have our features and labels set, it's time to start modeling! 1) Model Selection The results of our models are only good if our models are correct in the first place. "Good" here can mean different things depending on your application -- we'll talk more about that later in this assignment. What's important to know for now is that the conclusions we draw are subject to the assumptions and limitations of our underlying model. Thus, making sure we choose the right models to analyze is important! But how does one choose the right model? 1.1) Building Intuition -- Which model do you think is best? To build some intuition, let's start off with a simple example. We'll use a subset of our data. For ease of visualization, we'll start with just two features, and just 10 cities.
# For ease of visualization, we'll focus on just a few cities
subset_city_dcids = ["geoId/0667000",  # San Francisco, CA
                     "geoId/3651000",  # NYC, NY
                     "geoId/1304000",  # Atlanta, GA
                     "geoId/2404000",  # Baltimore, MD
                     "geoId/3050200",  # Missoula, MT
                     "geoId/4835000",  # Houston, TX
                     "geoId/2622000",  # Detroit, MI
                     "geoId/5363000",  # Seattle, WA
                     "geoId/2938000",  # Kansas City, MO
                     "geoId/4752006",  # Nashville, TN
                     ]

# Create a subset data frame with just those cities
subset_df = df.loc[df['DCID'].isin(subset_city_dcids)]

# We'll just use 2 features for ease of visualization
X = subset_df[["Percent_Person_PhysicalInactivity",
               "Percent_Person_SleepLessThan7Hours"]]
Y = subset_df[['Label']]

# Visualize the data
colors = ['#1f77b4', '#ff7f0e']
markers = ['s', '^']
fig, ax = plt.subplots()
ax.set_title('Original Data')
ax.set_ylabel('Percent_Person_SleepLessThan7Hours')
ax.set_xlabel('Percent_Person_PhysicalInactivity')
for i in range(X.shape[0]):
    ax.scatter(X["Percent_Person_PhysicalInactivity"][i],
               X["Percent_Person_SleepLessThan7Hours"][i],
               c=colors[Y['Label'][i]],
               marker=markers[Y['Label'][i]])
ax.legend([0, 1])
plt.show()
notebooks/intro_data_science/Classification_and_Model_Evaluation.ipynb
datacommonsorg/api-python
apache-2.0
The following blocks of code will generate 2 candidate classifiers for labeling the datapoints as either label 0 (low obesity rate) or label 1 (high obesity rate). The code will also output an accuracy score, which for this section is defined as: $\text{Accuracy} = \frac{\text{# correctly labeled}}{\text{# total datapoints}}$
# Classifier 1
classifier1 = svm.SVC()
classifier1.fit(X, Y["Label"])

fig, ax = plt.subplots()
ax.set_title('Classifier 1')
ax.set_ylabel('Percent_Person_SleepLessThan7Hours')
ax.set_xlabel('Percent_Person_PhysicalInactivity')
plot_decision_regions(X.to_numpy(), Y["Label"].to_numpy(), clf=classifier1, legend=2)
plt.show()
print('Accuracy of this classifier is:', classifier1.score(X, Y["Label"]))

# Classifier 2
classifier2 = tree.DecisionTreeClassifier(random_state=0)
classifier2.fit(X, Y["Label"])

fig, ax = plt.subplots()
ax.set_title('Classifier 2')
ax.set_ylabel('Percent_Person_SleepLessThan7Hours')
ax.set_xlabel('Percent_Person_PhysicalInactivity')
plot_decision_regions(X.to_numpy(), Y["Label"].to_numpy(), clf=classifier2, legend=2)
plt.show()
print('Accuracy of this classifier is:', classifier2.score(X, Y["Label"]))
notebooks/intro_data_science/Classification_and_Model_Evaluation.ipynb
datacommonsorg/api-python
apache-2.0
1A) Which model do you think is better, Classifier 1 or Classifier 2? Explain your reasoning. 1B) Classifier 2 has a higher accuracy than Classifier 1, but a more complicated decision boundary. Which do you think would generalize better to new data? 1.2) The Importance of Generalizability So, how did we do? Let's see what happens when we add back the rest of the cities (we'll keep using just the 2 features for ease of visualization).
# Original Data
X_full = df[["Percent_Person_PhysicalInactivity",
             "Percent_Person_SleepLessThan7Hours"]]
Y_full = df[['Label']]

# Visualize the data
cCycle = ['#1f77b4', '#ff7f0e']
mCycle = ['s', '^']
fig, ax = plt.subplots()
ax.set_title('Original Data')
ax.set_ylabel('Percent_Person_SleepLessThan7Hours')
ax.set_xlabel('Percent_Person_PhysicalInactivity')
for i in range(X_full.shape[0]):
    ax.scatter(X_full["Percent_Person_PhysicalInactivity"][i],
               X_full["Percent_Person_SleepLessThan7Hours"][i],
               c=cCycle[Y_full['Label'][i]],
               marker=mCycle[Y_full['Label'][i]])
ax.legend([0, 1])
plt.show()

# Classifier 1
fig, ax = plt.subplots()
ax.set_title('Classifier 1')
ax.set_ylabel('Percent_Person_SleepLessThan7Hours')
ax.set_xlabel('Percent_Person_PhysicalInactivity')
plot_decision_regions(X_full.to_numpy(), Y_full["Label"].to_numpy(), clf=classifier1, legend=2)
plt.show()
print('Accuracy of this classifier is: %.2f' % classifier1.score(X_full, Y_full["Label"]))

# Classifier 2
fig, ax = plt.subplots()
ax.set_title('Classifier 2')
ax.set_ylabel('Percent_Person_SleepLessThan7Hours')
ax.set_xlabel('Percent_Person_PhysicalInactivity')
plot_decision_regions(X_full.to_numpy(), Y_full["Label"].to_numpy(), clf=classifier2, legend=2)
plt.show()
print('Accuracy of this classifier is: %.2f' % classifier2.score(X_full, Y_full["Label"]))
notebooks/intro_data_science/Classification_and_Model_Evaluation.ipynb
datacommonsorg/api-python
apache-2.0
2A) In light of all the new datapoints, which classifier do you think is better now, Classifier 1 or Classifier 2? Explain your reasoning. 2B) In Question 1, Classifier 1 had a lower accuracy than Classifier 2. After adding more datapoints, we now see the reverse, with Classifier 1 having a higher accuracy than Classifier 2. What happened? Give an explanation (or at least your best guess) for why this is. 2) Evaluation Metrics In question 1, we were able to visualize how well our models performed by plotting our data and decision boundaries. However, this was only possible because we limited ourselves to just 2 features. Unfortunately for us, humans are only good at visualizing up to 3 dimensions. As you increase the number of features and/or the complexity of your models, creating meaningful visualizations quickly becomes intractable. Thus, we'll need other methods to measure how well our models perform. In this section, we'll cover some common strategies for evaluating models. To start, let's finally fit a model to all of our available data (e.g. 500 cities and 8 features). Because the features have different scales, we'll also take care to standardize their values.
# Use all features that aren't obesity
X_large = df.dropna()[[
    "Median_Income_Person",
    "Percent_NoHealthInsurance",
    "Percent_Person_PhysicalInactivity",
    "Percent_Person_SleepLessThan7Hours",
    "Percent_Person_WithHighBloodPressure",
    "Percent_Person_WithMentalHealthNotGood",
    "Percent_Person_WithHighCholesterol",
    "Commute_Time",
]]
Y_large = df.dropna()["Label"]

# Standardize the data
scaler = StandardScaler().fit(X_large)
X_large = scaler.transform(X_large)

# Create a model
large_model = linear_model.Perceptron()
large_model.fit(X_large, Y_large)
notebooks/intro_data_science/Classification_and_Model_Evaluation.ipynb
datacommonsorg/api-python
apache-2.0
2.1) Accuracy 2.1.1) Classification Accuracy We've seen an example of an evaluation metric already -- accuracy! The accuracy score used in question 1 is more commonly known as classification accuracy, and is the most common metric used in classification problems. As a refresher, the classification accuracy is the ratio of the number of correct predictions to the total number of datapoints. classification accuracy: $Accuracy = \frac{\text{# correctly labeled}}{\text{# total datapoints}}$ Note that sometimes the classification accuracy can be misleading! Consider the following scenario: There are two classes, A and B. We have 100 data points in our dataset. Of these 100 data points, 99 points are labeled class A, while only 1 of the data points is labeled class B. 2.1A) Consider a model that always predicts class A. What is the accuracy of this always-A model? 2.1B) How well do you expect the always-A model to perform on new, previously unseen data? Assume the new data follows the same distribution as the original 100 data points. 2.1C) Run the following code block to calculate the classification accuracy of our large model. Is the accuracy higher or lower than you expected?
print('Accuracy of the large model is: %.2f' % large_model.score(X_large,Y_large))
notebooks/intro_data_science/Classification_and_Model_Evaluation.ipynb
datacommonsorg/api-python
apache-2.0
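The always-A scenario from 2.1A can be checked directly. The three lines below show why a 99%-accurate model can still be useless: it never detects class B.

```python
# 99 of 100 points are class 'A'; the model ignores its input and
# predicts 'A' every time.
labels = ['A'] * 99 + ['B']
predictions = ['A'] * 100

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(accuracy)  # -> 0.99, despite the model never detecting class B
```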
2.2) Train/Test Splits The ability of a model to perform well on new, previously unseen data (drawn from the same distribution as the data used to create the model) is called Generalization. For most applications, we prefer models that generalize well over those that don't. One way to check the generalizability of a model is to perform an analysis similar to what we did in question 1. We'll take our data and randomly split it into two subsets: a training set that we'll use to build our model, and a test set, which we'll hold out until the model is complete and then use to evaluate how well our model generalizes to new, previously unseen data. 2.2.1) Choosing Split Sizes But what percentage of our datapoints should go into the training and test sets respectively? There are no hard and fast rules for this; the right split often depends on the application and how much data we have. The next few questions explore the key tradeoffs: 2.2A) Consider a scenario with 5 data points in the training set and 95 data points in the test set. How accurate of a model do you think we're likely to train? 2.2B) Does your answer to 2.2A change if we have 500 training and 9500 test points instead? 2.2C) Consider a scenario with 95 data points in the training set and 5 data points in the test set. Is the test accuracy still a good measure of generalizability? 2.2D) Does your answer to 2.2C change if we have 9500 training and 500 test points instead? 2.2.2) Try for yourself! 2.2E) Play around with a couple of values of test_size in the code box below. Find a split ratio that seems to work well, and report what that ratio is.
'''
Try a variety of different splits by changing the test_size variable,
which represents the ratio of points to use in the test set.
For example, for a 75% Training, 25% Test split, use test_size=0.25
'''
test_size = 0.25  # Change me! Enter a value between 0 and 1
print(f'{np.round((1-test_size)*100)}% Training, {(test_size)*100}% Test Split')

# Randomly split data into Train and Test Sets
x_train, x_test, y_train, y_test = train_test_split(X_large, Y_large, test_size=test_size)

# Fit a model on the training set
large_model.fit(x_train, y_train)
print('The TRAINING accuracy is: %.2f' % large_model.score(x_train, y_train))

# Evaluate on the test set
print('The TEST accuracy is: %.2f' % large_model.score(x_test, y_test))
notebooks/intro_data_science/Classification_and_Model_Evaluation.ipynb
datacommonsorg/api-python
apache-2.0
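Under the hood, a random train/test split is just a shuffle of indices. The bare-bones sketch below is illustrative only; sklearn's train_test_split additionally splits features and labels consistently and supports stratification.

```python
import random

# Bare-bones version of a random train/test split.

def split(data, test_size=0.25, seed=0):
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)             # deterministic shuffle
    n_test = int(round(len(data) * test_size))   # size of the held-out set
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return [data[i] for i in train_idx], [data[i] for i in test_idx]

train, test = split(list(range(100)), test_size=0.25)
print(len(train), len(test))  # -> 75 25
```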
2.2.3) Training vs. Test Accuracy As you may have noticed from 2.2.2, we can calculate two different accuracies after performing a train/test split: a training accuracy based on how well the model performs on the data it was trained on, and a test accuracy based on how well the model performs on held-out data. Typically, we select models based on test accuracy. After all, a model's performance on new data after being deployed is usually more important than how well that model performed on the training data. So why measure training accuracy at all? It turns out training accuracy is often useful for diagnosing some common issues with models. For example, consider the following scenario: After performing a train/test split, a model is found to have 100% training accuracy, but only 33% test accuracy. 2.2F) What's going on with the model in this scenario? Come up with a hypothetical setup that could result in these train and test accuracies. Hint: This situation is called overfitting. If you're stuck, feel free to look it up! 2.3) Cross-Validation If you haven't already, run the code box in 2.2.2 multiple times without changing the test_size variable. Notice how the accuracies can differ between runs? The problem is that each time we randomly select a train/test split, we may get lucky or unlucky with a particular distribution of training or test data. To borrow a term from statistics, a sample size of $n=1$ is too small! We can do better. To get a better estimate of test accuracy, a common strategy is to use k-fold cross-validation. The general procedure is: Split the data into $k$ groups. Then for each group (called a fold): hold that group out as the test set, and use the remaining groups as a training set. Fit a new model on the training set and record the resulting accuracy on the test set. Take the average of all test accuracies. A Note on Choosing k The number of folds to use depends on your data.
Setting the number of folds implicitly also sets your train/test split ratio. For example, using 10 folds implies 10 (90% train, 10% test) splits. Common choices are $k=10$ or $k=5$.
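The fold-construction step of the procedure above can be sketched by hand (sklearn's KFold and cross_val_score automate this, including the model re-fitting per fold):

```python
# Manual sketch of k-fold cross-validation indices: every point lands in
# exactly one test fold, and trains the model for all other folds.

def kfold_indices(n, k):
    """Split range(n) into k contiguous folds; yield (train, test) index lists."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

for train, test in kfold_indices(10, 5):
    print(test)  # prints each contiguous test fold in turn
```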
'''
Set the number of folds by changing k.
'''
k = 5  # Enter an integer >= 2. Number of folds.
print(f'Test accuracies for {k} splits:')
scores = cross_val_score(large_model, X_large, Y_large, cv=k)
for i in range(k):
    print('\tFold %d: %.2f' % (i + 1, scores[i]))
print('Average score across all folds: %.2f' % np.mean(scores))
notebooks/intro_data_science/Classification_and_Model_Evaluation.ipynb
datacommonsorg/api-python
apache-2.0
2.3A) Play around with the code box above to find a good value of $k$. What happens if $k$ is very large or very small? 2.3B) How does the average score across all folds change with $k$? 2.4) Other Metrics Worth Knowing 2.4.1) What about Regression? -- Mean Squared Error Different models and different problems often use different accuracy metrics. You may have noticed classification accuracy doesn't make much sense for regression problems, where instead of predicting a label, the model predicts a numeric value. In regression, a common accuracy metric is the Mean Squared Error, or MSE. $ MSE = \frac{1}{\text{# total data points}}\sum_{\text{all data points}}(\text{predicted value} - \text{actual value})^2$ It is a measure of the average difference between the predicted value and the actual value. The square ($^2$) can seem counterintuitive at first, but offers some nice mathematical properties. 2.4.2) More Classification Metrics Accuracy alone never tells the full story. There are a number of other metrics borrowed from statistics that are commonly used on classification models. It's possible for a model to have a high accuracy, but score very low on some of the following metrics: True Positives: The cases where we predicted positively, and the actual label was positive. True Negatives: The cases where we predicted negatively, and the actual label was negative. False Positives: The cases where we predicted positively, but the actual label was negative. False Negatives: The cases where we predicted negatively, but the actual label was positive. False Positive Rate: Corresponds to the proportion of negative datapoints incorrectly considered positive relative to all negative points. $FPR = \frac{FP}{TN + FP}$ Sensitivity: (Also known as True Positive Rate) corresponds to the proportion of positive datapoints correctly considered positive relative to all positive points.
$TPR = \frac{TP}{TP + FN}$ Specificity: (Also known as True Negative Rate) corresponds to the proportion of negative datapoints correctly considered negative relative to all negative points. $TNR = \frac{TN}{TN + FP}$ Precision: Proportion of correctly labeled positive points relative to the number of positive predictions. $Precision = \frac{TP}{TP+FP}$ Recall: Proportion of correctly labeled positive points relative to all points that were actually positive. $Recall = \frac{TP}{TP+FN}$ F1 score: Measure of the balance between precision and recall. $F1 = 2 \cdot \frac{1}{\frac{1}{precision} + \frac{1}{recall}}$ 2.4.3) Tradeoffs Between Metrics Oftentimes, our definition of a "good" model varies by situation or application. In some cases, we might prefer a different tradeoff between accuracy, false positive rate, and false negative rate. 2.4A) Read through the following scenarios. For each case, state which metrics you would prioritize, and why. Scenario 1: Doctors have identified a new extremely rare, but also very deadly disease. Fortunately, they also discover a simple medication that, if taken early enough, can prevent the disease. The doctors plan to use a machine learning model to predict which of their patients are at high risk for getting the disease. A positively labeled patient is high-risk, while a negatively labeled patient is low-risk. Scenario 2: Data Is Cool Inc. is a company that attracts many highly (and equally) qualified applicants to its job posting. They are overwhelmed with the number of applications received, so the company implements a machine learning model to sort all the incoming resumes. A positively labeled resume gets passed to a recruiter for a very thorough, but time-costly review. Negatively labeled resumes are held for future job openings. 3) Tying It All Together -- Choosing A Model to Deploy Now that we've seen many different evaluation metrics, let's put what we've learned into practice!
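As a quick sanity check of the formulas from section 2.4.2 before comparing models, they can be computed directly from confusion-matrix counts. The counts below are made up for illustration.

```python
# The classification metrics above, computed from invented confusion counts.

TP, FP, TN, FN = 40, 10, 45, 5

fpr = FP / (TN + FP)              # false positive rate
sensitivity = TP / (TP + FN)      # recall / true positive rate
specificity = TN / (TN + FP)      # true negative rate
precision = TP / (TP + FP)
f1 = 2 / (1 / precision + 1 / sensitivity)

print(round(fpr, 3), round(sensitivity, 3), round(specificity, 3),
      precision, round(f1, 3))
```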
One of the most common problems you'll encounter as a data scientist is deciding between a set of candidate models. Each of the code boxes below generates a candidate classifier for predicting high vs low obesity rates in cities. The models differ in several ways: number of features, learning algorithm used, number of datapoints, etc.
# Classifier A
x_A = df[["Count_Person", "Median_Income_Person"]]
y_A = df["Label"]
classifierA = linear_model.Perceptron()
classifierA.fit(x_A, y_A)
scores = cross_val_score(classifierA, x_A, y_A, cv=5)

print('Classifier A')
print('-------------')
print('Number of Data Points:', x_A.shape[0])
print('Number of Features:', x_A.shape[1])
print('Classification Accuracy: %.2f' % classifierA.score(x_A, y_A))
print('5-Fold Cross Validation Accuracy: %.2f' % np.mean(scores))

# Classifier A
x_A = df[["Percent_Person_PhysicalInactivity", "Median_Income_Person"]]
y_A = df["Label"]
classifierA = svm.SVC()
classifierA.fit(x_A, y_A)
scores = cross_val_score(classifierA, x_A, y_A, cv=5)

print('Classifier A')
print('-------------')
print('Number of Data Points:', x_A.shape[0])
print('Number of Features:', x_A.shape[1])
print('Training Classification Accuracy: %.2f' % classifierA.score(x_A, y_A))
print('5-Fold Cross Validation Accuracy: %.2f' % np.mean(scores))

# Classifier B
x_B = df.dropna()[[
    "Percent_NoHealthInsurance",
    "Percent_Person_PhysicalInactivity",
    "Percent_Person_SleepLessThan7Hours",
    "Percent_Person_WithHighBloodPressure",
    "Percent_Person_WithMentalHealthNotGood",
    "Percent_Person_WithHighCholesterol"
]]
y_B = df.dropna()["Label"]
classifierB = tree.DecisionTreeClassifier()
classifierB.fit(x_B, y_B)
scores = cross_val_score(classifierB, x_B, y_B, cv=5)

print('Classifier B')
print('-------------')
print('Number of Data Points:', x_B.shape[0])
print('Number of Features:', x_B.shape[1])
print('Training Classification Accuracy: %.2f' % classifierB.score(x_B, y_B))
print('5-Fold Cross Validation Accuracy: %.2f' % np.mean(scores))

# Classifier C
x_C = df.dropna()[[
    "Percent_NoHealthInsurance",
    "Percent_Person_PhysicalInactivity",
    "Percent_Person_SleepLessThan7Hours",
    "Percent_Person_WithHighBloodPressure",
    "Percent_Person_WithMentalHealthNotGood",
    "Percent_Person_WithHighCholesterol"
]]
y_C = df.dropna()["Label"]
classifierC = linear_model.Perceptron()
classifierC.fit(x_C, y_C)
scores = cross_val_score(classifierC, x_C, y_C, cv=5)

print('Classifier C')
print('-------------')
print('Number of Data Points:', x_C.shape[0])
print('Number of Features:', x_C.shape[1])
print('Training Classification Accuracy: %.2f' % classifierC.score(x_C, y_C))
print('5-Fold Cross Validation Accuracy: %.2f' % np.mean(scores))
notebooks/intro_data_science/Classification_and_Model_Evaluation.ipynb
datacommonsorg/api-python
apache-2.0
3A) Run the code boxes above and select which model you would choose to deploy. Justify your answer.

3B) Consider a new Classifier D. Its results look like this:

Number of Data Points: 5,000 \
Number of Features: 10,000 \
Training Classification Accuracy: 98% \
5-Fold Cross Validation Accuracy: 95%

Would you deploy classifier D? Name one advantage and one disadvantage of such a model.

4) Extension: What about YOUR city?

Now that we've got a model trained up, let's play with it! Use the Data Commons Place Explorer to find the DCID of a town or city local to you. Use the code box below to add your local town or city's data, and run the model on that data.

Note: Data may not be available for all locations. If you encounter errors, please try a different location!
your_local_dcid = "geoId/0649670"  # Replace with your own!

# Get your local data from data commons
local_data = datacommons_pandas.build_multivariate_dataframe(your_local_dcid, stat_vars_to_query)

# Cleaning and Preprocessing
local_data['DCID'] = local_data.index
# Look up the display name for this DCID.
city_name_dict = datacommons.get_property_values([your_local_dcid], 'name')
city_name_dict = {key: value[0] for key, value in city_name_dict.items()}
local_data['City'] = pd.Series(city_name_dict)
local_data.set_index('City', inplace=True)
local_data.rename(columns={"dc/e9gftzl2hm8h9": "Commute_Time"}, inplace=True)
avg_commute_time = local_data["Commute_Time"] / local_data["Count_Person"]
local_data["Commute_Time"] = avg_commute_time
percent_noHealthInsurance = local_data["Count_Person_NoHealthInsurance"] / local_data["Count_Person"]
local_data["Percent_NoHealthInsurance"] = percent_noHealthInsurance
local_data["Label"] = local_data['Percent_Person_Obesity'] >= 30.0
local_data["Label"] = local_data["Label"].astype(int)

# Build data to feed into model
x_local = local_data[[
    "Median_Income_Person",
    "Percent_NoHealthInsurance",
    "Percent_Person_PhysicalInactivity",
    "Percent_Person_SleepLessThan7Hours",
    "Percent_Person_WithHighBloodPressure",
    "Percent_Person_WithMentalHealthNotGood",
    "Percent_Person_WithHighCholesterol",
    "Commute_Time"
]]
x_local = scaler.transform(x_local)
y_local = local_data["Label"]

# Make Prediction
prediction = large_model.predict(x_local)

# Report Results
print(f'Prediction for {local_data.index[0]}:')
print(f'\tThe predicted label was {prediction[0]}')
print(f'\tThe actual label was {y_local[0]}')
notebooks/intro_data_science/Classification_and_Model_Evaluation.ipynb
datacommonsorg/api-python
apache-2.0
Reading and Writing Compressed Files

The gzip and bz2 modules make it easy to work with compressed files. Both modules provide an alternative implementation of the open() function for this purpose.

```python
# Reading
# gzip compression
import gzip
with gzip.open('somefile.gz', 'rt') as f:
    text = f.read()

# bz2 compression
import bz2
with bz2.open('somefile.bz2', 'rt') as f:
    text = f.read()
```

```python
# Writing
# gzip compression
import gzip
with gzip.open('somefile.gz', 'wt') as f:
    f.write(text)

# bz2 compression
import bz2
with bz2.open('somefile.bz2', 'wt') as f:
    f.write(text)
```

CSV Files

For most CSV read/write tasks, you can use the csv library.

```python
import csv
with open('stocks.csv') as f:
    f_csv = csv.reader(f)
    headers = next(f_csv)
    for row in f_csv:
        # Process row
        ...
```

```python
from collections import namedtuple
with open('stock.csv') as f:
    f_csv = csv.reader(f)
    headings = next(f_csv)
    Row = namedtuple('Row', headings)
    for r in f_csv:
        row = Row(*r)
        # Process row
        ...
```

JSON Data

The json module provides a very simple way to encode and decode JSON data. Its two main functions are json.dumps() and json.loads(); the interface is much smaller than that of other serialization libraries such as pickle (it operates on strings). If you are working with files instead of strings, use json.dump() and json.load() to encode and decode JSON data.
import json

data = {
    'name' : 'ACME',
    'shares' : 100,
    'price' : 542.23
}

json_str = json.dumps(data)
json_str

data = json.loads(json_str)
data
nbs/IO_file.ipynb
AutuanLiu/Python
mit
```python
# Writing JSON data
with open('data.json', 'w') as f:
    json.dump(data, f)

# Reading data back
with open('data.json', 'r') as f:
    data = json.load(f)
```
d = {'a': True, 'b': 'Hello', 'c': None}
json.dumps(d)
nbs/IO_file.ipynb
AutuanLiu/Python
mit
Data Preprocessing

Hint: How do you divide the training and test data sets? Apply other techniques we have learned if needed. You could take a look at the Iris data set case in the textbook.
# Your code comes here
import numpy as np
from sklearn.metrics import accuracy_score

if Version(sklearn_version) < '0.18':
    from sklearn.cross_validation import train_test_split
else:
    from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 0)

num_training = y_train.shape[0]
num_test = y_test.shape[0]
print('training: ' + str(num_training) + ', test: ' + str(num_test))

from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
sc.fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
assignments/ex03_xdnian.ipynb
xdnian/pyml
mit
Classifier #1 Perceptron
# Your code, including training and testing, to observe the accuracies.
from sklearn.linear_model import Perceptron

ppn = Perceptron(n_iter = 40, eta0 = 0.1, random_state = 0)
ppn.fit(X_train_std, y_train)
y_pred = ppn.predict(X_test_std)
print('[Perceptron] Accuracy: %.2f' % accuracy_score(y_test, y_pred))
assignments/ex03_xdnian.ipynb
xdnian/pyml
mit
Classifier #2 Logistic Regression
# Your code, including training and testing, to observe the accuracies.
from sklearn.linear_model import LogisticRegression

lr = LogisticRegression(C = 1000.0, random_state = 0)
lr.fit(X_train_std, y_train)
y_pred = lr.predict(X_test_std)
print('[Logistic Regression] Accuracy: %.2f' % accuracy_score(y_test, y_pred))
assignments/ex03_xdnian.ipynb
xdnian/pyml
mit
Classifier #3 SVM
# Your code, including training and testing, to observe the accuracies.
from sklearn.svm import SVC

# Linear SVM
svm0 = SVC(kernel='linear', C = 1.0, random_state = 0)
svm0.fit(X_train_std, y_train)
y_pred0 = svm0.predict(X_test_std)

# RBF SVM
svm1 = SVC(kernel='rbf', random_state = 0, gamma = 0.1, C = 1.0)
svm1.fit(X_train_std, y_train)
y_pred1 = svm1.predict(X_test_std)

print('[Linear SVM] Accuracy: %.2f' % accuracy_score(y_test, y_pred0))
print('[RBF SVM] Accuracy: %.2f' % accuracy_score(y_test, y_pred1))
assignments/ex03_xdnian.ipynb
xdnian/pyml
mit
Classifier #4 Decision Tree
# Your code, including training and testing, to observe the accuracies.
from sklearn.tree import DecisionTreeClassifier

dt = DecisionTreeClassifier(criterion='entropy', max_depth=10, random_state=0)
dt.fit(X_train_std, y_train)
y_pred = dt.predict(X_test_std)
print('[Decision Tree] Accuracy: %.2f' % accuracy_score(y_test, y_pred))
assignments/ex03_xdnian.ipynb
xdnian/pyml
mit
Classifier #5 Random Forest
# Your code, including training and testing, to observe the accuracies.
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(criterion='entropy', n_estimators=10, random_state=1, n_jobs=2)
rf.fit(X_train_std, y_train)
y_pred = rf.predict(X_test_std)
print('[Random Forest] Accuracy: %.2f' % accuracy_score(y_test, y_pred))
assignments/ex03_xdnian.ipynb
xdnian/pyml
mit
Classifier #6 KNN
# Your code, including training and testing, to observe the accuracies.
from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors = 5, p = 2, metric = 'minkowski')
knn.fit(X_train_std, y_train)
y_pred = knn.predict(X_test_std)
print('[KNN] Accuracy: %.2f' % accuracy_score(y_test, y_pred))
assignments/ex03_xdnian.ipynb
xdnian/pyml
mit
Classifier #7 Naive Bayes
# Your code, including training and testing, to observe the accuracies.
from sklearn.naive_bayes import GaussianNB

gnb = GaussianNB()
gnb.fit(X_train_std, y_train)
y_pred = gnb.predict(X_test_std)
print('[Naive Bayes] Accuracy: %.2f' % accuracy_score(y_test, y_pred))
assignments/ex03_xdnian.ipynb
xdnian/pyml
mit
Main Contributors

The blame log lists, for every single line of code, the author who last changed that line.
top10 = blame_log.author.value_counts().head(10)
top10

%matplotlib inline
top10.plot.pie();
notebooks/No-Go-Areas.ipynb
feststelltaste/software-analytics
gpl-3.0
No-Go Areas

We want to find the components where knowledge is probably outdated.
blame_log.timestamp = pd.to_datetime(blame_log.timestamp)
blame_log.head()

blame_log['age'] = pd.Timestamp('today') - blame_log.timestamp
blame_log.head()

blame_log['component'] = blame_log.path.str.split("/").str[:2].str.join(":")
blame_log.head()

age_per_component = blame_log.groupby('component') \
    .age.min().sort_values()
age_per_component.head()
notebooks/No-Go-Areas.ipynb
feststelltaste/software-analytics
gpl-3.0
These are the 10 oldest components:
age_per_component.tail(10)
notebooks/No-Go-Areas.ipynb
feststelltaste/software-analytics
gpl-3.0
For all components, we create an overview with a bar chart.
age_per_component.plot.bar(figsize=[15,5])
notebooks/No-Go-Areas.ipynb
feststelltaste/software-analytics
gpl-3.0
Explore the Data

Play around with view_sentence_range to view different parts of the data.
view_sentence_range = (500, 600)

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))

scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))

sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))

print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
Implement Preprocessing Functions

The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation

Lookup Table

To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab

Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
import numpy as np
import problem_unittests as tests

def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    # TODO: Implement Function
    words = set(text)
    return {w: k for k, w in enumerate(words)}, {k: w for k, w in enumerate(words)}

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
Tokenize Punctuation

We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".

Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )

This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
    return {
        '.': '||Period||',
        ',': '||Comma||',
        '"': '||Quotation_Mark||',
        ';': '||Semicolon||',
        '!': '||Exclamation_Mark||',
        '?': '||Question_Mark||',
        '(': '||Left_Parentheses||',
        ')': '||Right_Parentheses||',
        '--': '||Dash||',
        '\n': '||Return||'
    }

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
Input

Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder

Return the placeholders in the following tuple (Input, Targets, LearningRate)
def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
    # TODO: Implement Function
    Input = tf.placeholder(tf.int32, (None, None), "input")
    Targets = tf.placeholder(tf.int32, (None, None), "targets")
    LearningRate = tf.placeholder(tf.float32)
    return Input, Targets, LearningRate

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
Build RNN Cell and Initialize

Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize the cell state using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()

Return the cell and initial state in the following tuple (Cell, InitialState)
def get_init_cell(batch_size, rnn_size, lstm_layers=2):
    """
    Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :return: Tuple (cell, initialize state)
    """
    lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
    cell = tf.contrib.rnn.MultiRNNCell([lstm] * lstm_layers)
    initial_state = cell.zero_state(batch_size, tf.float32)
    return cell, tf.identity(initial_state, 'initial_state')

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
Word Embedding

Apply embedding to input_data using TensorFlow. Return the embedded sequence.
def get_embed(input_data, vocab_size, embed_dim):
    """
    Create embedding for <input_data>.
    :param input_data: TF placeholder for text input.
    :param vocab_size: Number of words in vocabulary.
    :param embed_dim: Number of embedding dimensions
    :return: Embedded input.
    """
    # TODO: Implement Function
    embed_values = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1), name="word_embedding")
    embed_lookup = tf.nn.embedding_lookup(embed_values, input_data)
    return embed_lookup

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
Build RNN

You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()

Return the outputs and final state in the following tuple (Outputs, FinalState)
def build_rnn(cell, inputs, initial_state=None):
    """
    Create a RNN using a RNN Cell
    :param cell: RNN Cell
    :param inputs: Input text data
    :return: Tuple (Outputs, Final State)
    """
    # TODO: Implement Function
    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32, initial_state=initial_state)
    return outputs, tf.identity(final_state, name="final_state")

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
Build the Neural Network

Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.

Return the logits and final state in the following tuple (Logits, FinalState)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
    """
    Build part of the neural network
    :param cell: RNN cell
    :param rnn_size: Size of rnns
    :param input_data: Input data
    :param vocab_size: Vocabulary size
    :param embed_dim: Number of embedding dimensions
    :return: Tuple (Logits, FinalState)
    """
    # TODO: Implement Function
    embed = get_embed(input_data, vocab_size, embed_dim)
    rnn, final_state = build_rnn(cell, embed)
    # Linear activation, as the instructions above require.
    logits = tf.contrib.layers.fully_connected(rnn, vocab_size, activation_fn=None)
    return logits, final_state

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
Batches

Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]

If you can't fill the last batch with enough data, drop the last batch.

For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:

```
[
  # First Batch
  [
    # Batch of Input
    [[ 1  2], [ 7  8], [13 14]]
    # Batch of targets
    [[ 2  3], [ 8  9], [14 15]]
  ]

  # Second Batch
  [
    # Batch of Input
    [[ 3  4], [ 9 10], [15 16]]
    # Batch of targets
    [[ 4  5], [10 11], [16 17]]
  ]

  # Third Batch
  [
    # Batch of Input
    [[ 5  6], [11 12], [17 18]]
    # Batch of targets
    [[ 6  7], [12 13], [18  1]]
  ]
]
```

Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
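As a cross-check of the worked example above, the same batching can be written with numpy reshapes. This is a hedged sketch (make_batches is a name made up here, not the notebook's required get_batches implementation):

```python
import numpy as np

def make_batches(int_text, batch_size, seq_length):
    # Keep only the words that fit into full batches; drop the rest.
    n_batches = len(int_text) // (batch_size * seq_length)
    keep = n_batches * batch_size * seq_length
    xs = np.array(int_text[:keep])
    ys = np.roll(xs, -1)       # target = next word
    ys[-1] = int_text[0]       # last target wraps to the first input value
    shape = (batch_size, n_batches, seq_length)
    x = xs.reshape(shape).transpose(1, 0, 2)
    y = ys.reshape(shape).transpose(1, 0, 2)
    # Final shape: (number of batches, 2, batch size, sequence length)
    return np.stack([x, y], axis=1)

batches = make_batches(list(range(1, 21)), 3, 2)
print(batches.shape)   # (3, 2, 3, 2)
print(batches[0][0])   # first batch of inputs: rows [1 2], [7 8], [13 14]
```

Running it on the [1..20] example reproduces the array shown above, including the wrapped final target.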
def get_batches(int_text, batch_size, seq_length):
    """
    Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array
    """
    # TODO: Implement Function
    n_batches = len(int_text) // (batch_size * seq_length)
    print(len(int_text), n_batches, batch_size, seq_length)
    rv = np.zeros((n_batches, 2, batch_size, seq_length))
    for i in range(batch_size):
        seq_start = i * n_batches * seq_length
        for j in range(n_batches):
            batch_offset = j * seq_length
            a = seq_start + batch_offset
            b = a + seq_length
            rv[j, 0, i, :] = int_text[a:b]
            rv[j, 1, i, :] = int_text[a+1:b+1]
    rv[-1, -1, -1, -1] = int_text[0]
    return rv

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
Neural Network Training

Hyperparameters

Tune the following parameters:
- Set num_epochs to the number of epochs.
- Set batch_size to the batch size.
- Set rnn_size to the size of the RNNs.
- Set embed_dim to the size of the embedding.
- Set seq_length to the length of sequence.
- Set learning_rate to the learning rate.
- Set show_every_n_batches to the number of batches the neural network should print progress.
# Number of Epochs
num_epochs = 55
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 512
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 12
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 40

run_name = "e={},b={},rnn={},embed={},seq={},lr={}".format(num_epochs, batch_size,
                                                           rnn_size, embed_dim, seq_length,
                                                           learning_rate)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
run_name
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
Build the Graph

Build the graph using the neural network you implemented.
""" DON'T MODIFY ANYTHING IN THIS CELL """ from tensorflow.contrib import seq2seq train_graph = tf.Graph() with train_graph.as_default(): vocab_size = len(int_to_vocab) input_text, targets, lr = get_inputs() input_data_shape = tf.shape(input_text) cell, initial_state = get_init_cell(input_data_shape[0], rnn_size) logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim) # Probabilities for generating words probs = tf.nn.softmax(logits, name='probs') # Loss function cost = seq2seq.sequence_loss( logits, targets, tf.ones([input_data_shape[0], input_data_shape[1]])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) tf.summary.scalar('cost',cost) merged = tf.summary.merge_all()
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
Train

Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
import sys

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)

with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())
    train_logger = tf.summary.FileWriter('./logs/{}'.format(run_name), sess.graph)

    for epoch_i in range(num_epochs):
        state = sess.run(initial_state, {input_text: batches[0][0]})

        for batch_i, (x, y) in enumerate(batches):
            feed = {
                input_text: x,
                targets: y,
                initial_state: state,
                lr: learning_rate}
            s, train_loss, state, _ = sess.run([merged, cost, final_state, train_op], feed)
            sys.stdout.write('.')
            sys.stdout.flush()
            train_logger.add_summary(s, (epoch_i * len(batches) + batch_i))

            # Show every <show_every_n_batches> batches
            if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
                print('\nEpoch {:>3} Batch {:>4}/{}   train_loss = {:.3f}'.format(
                    epoch_i,
                    batch_i,
                    len(batches),
                    train_loss))

    # Save Model
    saver = tf.train.Saver()
    saver.save(sess, save_dir)
    print('Model Trained and Saved')
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
Choose Word

Implement the pick_word() function to select the next word using probabilities.
def pick_word(probabilities, int_to_vocab):
    """
    Pick the next word in the generated text
    :param probabilities: Probabilities of the next word
    :param int_to_vocab: Dictionary of word ids as the keys and words as the values
    :return: String of the predicted word
    """
    # Sharpen the distribution, then renormalize before sampling.
    probabilities = probabilities ** 2
    probabilities = probabilities / np.sum(probabilities)
    return int_to_vocab[np.random.choice(range(len(probabilities)), p=probabilities)]

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
Congratulations! Your first MPL plot. Let's make it a little bit larger, use a style to make it look better, and add some annotations.
mpl.style.use('bmh')

fig, ax = plt.subplots(1)
ax.plot(data[32, 32, 15, :])
ax.set_xlabel('Time (TR)')
ax.set_ylabel('MRI signal (a.u.)')
ax.set_title('Time-series from voxel [32, 32, 15]')
fig.set_size_inches([12, 6])
beginner-python/002-plots.ipynb
ohbm/brain-hacking-101
apache-2.0
Impressions about the data?

If we want to compare several voxels side by side, we can plot them on the same axis:
fig, ax = plt.subplots(1)
ax.plot(data[32, 32, 15, :])
ax.plot(data[32, 32, 14, :])
ax.plot(data[32, 32, 13, :])
ax.plot(data[32, 32, 12, :])
ax.set_xlabel('Time (TR)')
ax.set_ylabel('MRI signal (a.u.)')
ax.set_title('Time-series from a few voxels')
fig.set_size_inches([12, 6])
beginner-python/002-plots.ipynb
ohbm/brain-hacking-101
apache-2.0