Next, we need to set the hyperparameter tuning values used to train our model. Check HyperparameterSpec for more info. Several key things are set in this config file: * maxTrials - how many training trials should be attempted to optimize the specified hyperparameters. * maxParallelTrials: 5 - the number of training trials to run ...
%%writefile ./hptuning_config.yaml
#!/usr/bin/env python
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# U...
notebooks/xgboost/HyperparameterTuningWithXGBoostInCMLE.ipynb
GoogleCloudPlatform/cloudml-samples
apache-2.0
Lastly, we need to install the dependencies used in our model. Check adding_standard_pypi_dependencies for more info. AI Platform uses a setup.py file to install your dependencies.
%%writefile ./setup.py
#!/usr/bin/env python
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless requir...
notebooks/xgboost/HyperparameterTuningWithXGBoostInCMLE.ipynb
GoogleCloudPlatform/cloudml-samples
apache-2.0
Submit the training job.
! gcloud ml-engine jobs submit training auto_mpg_hp_tuning_$(date +"%Y%m%d_%H%M%S") \
    --job-dir $JOB_DIR \
    --package-path $TRAINER_PACKAGE_PATH \
    --module-name $MAIN_TRAINER_MODULE \
    --region $REGION \
    --runtime-version=$RUNTIME_VERSION \
    --python-version=$PYTHON_VERSION \
    --scale-tier basic \
    --config ...
notebooks/xgboost/HyperparameterTuningWithXGBoostInCMLE.ipynb
GoogleCloudPlatform/cloudml-samples
apache-2.0
[Optional] Stackdriver Logging You can view the logs for your training job: 1. Go to https://console.cloud.google.com/ 1. Select "Logging" in the left-hand pane 1. In the left-hand pane, go to "AI Platform" and select Jobs 1. In "filter by prefix", use the value of $JOB_NAME to view the logs On the logging page of your model, yo...
! gsutil ls $JOB_DIR/*
notebooks/xgboost/HyperparameterTuningWithXGBoostInCMLE.ipynb
GoogleCloudPlatform/cloudml-samples
apache-2.0
Going from a vector back to the metadata reference: by keeping an 'id_list', we can look up the identifier for any vector in the list in the database we've made for this clustering attempt. This tells us which reference the vector came from, and where we can find it:
from clustering import ClusterDB

db = ClusterDB(DBFILE)
print(dict(db.vecidtoitem(id_list[-1])))
print(data.toarray()[-1])

from burney_data import BurneyDB

bdb = BurneyDB("burney.db")
bdb.get_title_row(titleAbbreviation="B0574REMEMBRA")
Clustering running notepad.ipynb
BL-Labs/poetryhunt
mit
Initial data woes There was a considerable discrepancy between the x1 average indent and the column "box" left edge. Looking at the data, the presence of a few outliers can really affect this value. Omitting the 2 smallest and largest x values might be enough to avoid this biasing the sample too badly. Also, the initia...
from scipy import cluster
from matplotlib import pyplot as plt
import numpy as np

# Where is the K-means 'elbow'?
# Try between 1 and 10
# use only the x1 and x2 variances
vset = [cluster.vq.kmeans(data.toarray()[:, [3,6]], i) for i in range(1,10)]
plt.plot([v for (c,v) in vset])
plt.show()
Clustering running notepad.ipynb
BL-Labs/poetryhunt
mit
Seems the elbow is quite wide and not sharply defined, based on just the line variances. Let's see what it looks like in general.
# Mask off leaving just the front and end variance columns
npdata = data.toarray()
mask = np.ones((8), dtype=bool)
mask[[0,1,2,4,5,7]] = False
marray = npdata[:,mask]
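The boolean-mask idiom used here can be sketched in isolation; the toy array below is hypothetical, but the mask is built exactly as above (all True, then switch off the unwanted columns):

```python
import numpy as np

# Boolean column-mask sketch: keep only columns 3 and 6 of an (n, 8) array.
npdata = np.arange(16).reshape(2, 8)
mask = np.ones(8, dtype=bool)
mask[[0, 1, 2, 4, 5, 7]] = False
print(npdata[:, mask])  # [[ 3  6]
                        #  [11 14]]
```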
Clustering running notepad.ipynb
BL-Labs/poetryhunt
mit
Attempting K-Means What sort of clustering algorithm to employ is actually a good question. K-means can give fairly meaningless results on certain kinds of data (e.g. non-spherical or unevenly sized clusters). Generally it can be useful, but it cannot be applied blindly. Given the data above, it might be a good start, however.
# trying a different KMeans
from sklearn.cluster import KMeans

estimators = {'k_means_3': KMeans(n_clusters=3),
              'k_means_5': KMeans(n_clusters=5),
              'k_means_8': KMeans(n_clusters=8),}

fignum = 1
for name, est in estimators.items():
    fig = plt.figure(fignum, figsize=(8, 8))
    plt.clf()
    ...
Clustering running notepad.ipynb
BL-Labs/poetryhunt
mit
Interesting! The lack of really well-defined clusters bolsters the "elbow" test above. K-means is likely not put to good use here with just these two variables. The left edge of the scatterplot is a region that contains those blocks of text with lines aligned to the left edge of the paper's column, but have some cons...
mpld3.disable_notebook()  # switch off the interactive graph functionality which doesn't work well with the 3D library
from mpl_toolkits.mplot3d import Axes3D

X = npdata[:, [3,5,6]]
fignum = 1
for name, est in estimators.items():
    fig = plt.figure(fignum, figsize=(8, 8))
    plt.clf()
    ax = Axes3D(fig, rect=[0...
Clustering running notepad.ipynb
BL-Labs/poetryhunt
mit
How about the area density? In other words, what does it look like if the total area of the block is compared to the area taken up by just the words themselves?
X = npdata[:, [3,0,6]]
fignum = 1
for name, est in estimators.items():
    fig = plt.figure(fignum, figsize=(8, 8))
    plt.clf()
    ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=25, azim=40)
    plt.cla()
    est.fit(X)
    labels = est.labels_
    ax.scatter(X[:,0], X[:,2], X[:,1], c=labels.astype(np.float))
    ...
Clustering running notepad.ipynb
BL-Labs/poetryhunt
mit
More outliers skewing the results, this time for blocks with nearly zero variance at either end but a huge amount of letter area attributed to them by the OCR, while sweeping out a very small overall area. Perhaps mask out the columns which aren't actually columns but dividers mistaken for text? i.e. skip all blocks that ar...
mask = npdata[:,1] > 40 * 5  # mask based on the ltcount value
print(mask)
print("Amount of vectors: {0}, Vectors with ltcount < 50: {1}".format(len(npdata), sum([1 for item in mask if item == False])))
m_npdata = npdata[mask, :]
X = m_npdata[:, [3,0,6]]

# Let's just plot one graph to see:
est = estimators['k_means_...
Clustering running notepad.ipynb
BL-Labs/poetryhunt
mit
What country are most billionaires from? For the top ones, how many billionaires per billion people?
df['citizenship'].value_counts().head()
df.groupby('citizenship')['networthusbillion'].sum().sort_values(ascending=False)

us_pop = 0.3189  # billion people (318.9 million, 2014)
us_bill = df[df['citizenship'] == 'United States']
print("There are", len(us_bill)/us_pop, "billionaires per billion people in the United States.")

germ_pop = 0.0...
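As a sanity check on the units, the per-billion arithmetic looks like this (the billionaire count below is hypothetical, for illustration only):

```python
# Billionaires per billion people: count divided by population in billions.
us_pop_billion = 0.3189   # 318.9 million people (2014)
n_billionaires = 500      # hypothetical count for illustration
print(n_billionaires / us_pop_billion)
```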
07/billionaires.ipynb
M0nica/python-foundations-hw
mit
Who are the top 10 richest billionaires?
recent = df[df['year'] == 2014]  # if it is not recent then there are duplicates for diff years
recent.sort_values('rank').head(10)
recent['networthusbillion'].describe()
07/billionaires.ipynb
M0nica/python-foundations-hw
mit
Maybe plot their net worth vs. age (scatterplot). Make a bar graph of the top 10 or 20 richest.
recent.plot(kind='scatter', x='networthusbillion', y='age')
recent.plot(kind='scatter', x='age', y='networthusbillion', alpha=0.2)
07/billionaires.ipynb
M0nica/python-foundations-hw
mit
Designing a music-generating class The rhythm-makers we studied yesterday help us think about rhythm in a formal way. Today we'll extend the rhythm-makers' pattern with pitches, articulations and dynamics. In this notebook we'll develop the code we need; in the next notebook we'll encapsulate our work in a class. Makin...
pairs = [(4, 4), (3, 4), (7, 16), (6, 8)]
time_signatures = [abjad.TimeSignature(_) for _ in pairs]
durations = [_.duration for _ in time_signatures]
time_signature_total = sum(durations)

counts = [1, 2, -3, 4]
denominator = 16
talea = rmakers.Talea(counts, denominator)
talea_index = 0
day-3/1-making-music.ipynb
Abjad/intensive
mit
We can ask our talea for as many durations as we want. (Taleas output nonreduced fractions instead of durations. This is to allow talea output to model either durations or time signatures, depending on the application.) We include some negative values, which we will later interpret as rests. We can ask our talea for te...
talea[:10]
day-3/1-making-music.ipynb
Abjad/intensive
mit
Let's use our talea to make notes and rests, stopping when the duration of the accumulated notes and rests sums to that of the four time signatures defined above:
events = []
accumulated_duration = abjad.Duration(0)
while accumulated_duration < time_signature_total:
    duration = talea[talea_index]
    if 0 < duration:
        pitch = abjad.NamedPitch("c'")
    else:
        pitch = None
    duration = abs(duration)
    if time_signature_total < (duration + accumulated_dur...
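Stripped of the Abjad objects, the accumulation loop can be sketched with plain fractions. This is a standalone sketch, not the notebook's code: the target total is the sum of the four time signatures above (4/4 + 3/4 + 7/16 + 6/8 = 47/16), negative counts model rests, and the final event is truncated so the total lands exactly on target:

```python
from fractions import Fraction
from itertools import cycle

counts = [1, 2, -3, 4]
denominator = 16
total = Fraction(4, 4) + Fraction(3, 4) + Fraction(7, 16) + Fraction(6, 8)  # 47/16

events = []
accumulated = Fraction(0)
for count in cycle(counts):
    duration = Fraction(abs(count), denominator)
    if total < accumulated + duration:
        duration = total - accumulated  # truncate the last event
    events.append(("rest" if count < 0 else "note", duration))
    accumulated += duration
    if accumulated == total:
        break

print(accumulated)  # 47/16
```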
day-3/1-making-music.ipynb
Abjad/intensive
mit
To attach the four time signatures defined above, we must split our notes and rests at measure boundaries. Then we can attach a time signature to the first note or rest in each of the four selections that result:
selections = abjad.mutate.split(staff[:], time_signatures, cyclic=True)
for time_signature, selection in zip(time_signatures, selections):
    first_leaf = abjad.get.leaf(selection, 0)
    abjad.attach(time_signature, first_leaf)
abjad.show(staff)
day-3/1-making-music.ipynb
Abjad/intensive
mit
Then we group our notes and rests by measure, and metrically respell each group:
measure_selections = abjad.select(staff).leaves().group_by_measure()
for time_signature, measure_selection in zip(time_signatures, measure_selections):
    abjad.Meter.rewrite_meter(measure_selection, time_signature)
abjad.show(staff)
day-3/1-making-music.ipynb
Abjad/intensive
mit
Pitching notes We can pitch our notes however we like. First we define a cycle of pitches:
string = "d' fs' a' d'' g' ef'"
strings = string.split()
pitches = abjad.CyclicTuple(strings)
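A CyclicTuple's indexing wraps around; a minimal standalone sketch of the same behavior uses modulo indexing (the `cyclic` helper is hypothetical, not Abjad's implementation):

```python
# Cyclic indexing sketch: index i wraps around the pitch list.
strings = "d' fs' a' d'' g' ef'".split()

def cyclic(seq, i):
    return seq[i % len(seq)]

print([cyclic(strings, i) for i in range(8)])
# the 7th and 8th lookups wrap back to d' and fs'
```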
day-3/1-making-music.ipynb
Abjad/intensive
mit
Then we loop through pitched logical ties, pitching notes as we go:
plts = abjad.select(staff).logical_ties(pitched=True)
for i, plt in enumerate(plts):
    pitch = pitches[i]
    for note in plt:
        note.written_pitch = pitch
abjad.show(staff)
day-3/1-making-music.ipynb
Abjad/intensive
mit
Attaching articulations and dynamics Abjad's run selector selects notes and chords, separated by rests:
for selection in abjad.select(staff).runs():
    print(selection)
day-3/1-making-music.ipynb
Abjad/intensive
mit
We can use Abjad's run selector to loop through the runs in our music, attaching articulations and dynamics along the way:
for selection in abjad.select(staff).runs():
    articulation = abjad.Articulation("tenuto")
    abjad.attach(articulation, selection[0])
    if 3 <= len(selection):
        abjad.hairpin("p < f", selection)
    else:
        dynamic = abjad.Dynamic("ppp")
        abjad.attach(dynamic, selection[0])
abjad.override(staf...
day-3/1-making-music.ipynb
Abjad/intensive
mit
Read in the total SFRs from https://wwwmpa.mpa-garching.mpg.de/SDSS/DR7/sfrs.html. These SFRs are derived from spectra but later aperture-corrected using the method of Salim et al. (2007).
# data with the galaxy information
data_gals = mrdfits(UT.dat_dir()+'gal_info_dr7_v5_2.fit.gz')
# data with the SFR information
data_sfrs = mrdfits(UT.dat_dir()+'gal_totsfr_dr7_v5_2.fits.gz')

if len(data_gals.ra) != len(data_sfrs.median):
    raise ValueError("the data should have the same number of galaxies")
centralms/notebooks/notes_SFRmpajhu_uncertainty.ipynb
changhoonhahn/centralMS
mit
spherematch using a 3'' radius on 10,000 galaxies; otherwise the laptop explodes.
#ngal = len(data_gals.ra)
ngal = 10000
matches = spherematch(data_gals.ra[:10000], data_gals.dec[:10000],
                      data_gals.ra[:10000], data_gals.dec[:10000],
                      0.000833333, maxmatch=0)
m0, m1, d_m = matches
n_matches = np.zeros(ngal)
sfr_list = [[] for i in range(ngal)]
for i in...
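The match radius 0.000833333 passed to spherematch is just 3 arcseconds converted to degrees:

```python
# 3 arcseconds in degrees: 1 degree = 3600 arcseconds.
radius_arcsec = 3.0
radius_deg = radius_arcsec / 3600.0
print(radius_deg)  # 0.0008333...
```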
centralms/notebooks/notes_SFRmpajhu_uncertainty.ipynb
changhoonhahn/centralMS
mit
The task: Identify patients with pulmonary embolism from radiology reports. Step 1: How is the concept of pulmonary embolism represented in the reports? Fill in the list below with the literals you want to use.
mytargets = itemData.itemData()
mytargets.extend([["pulmonary embolism", "CRITICAL_FINDING", "", ""],
                  ["pneumonia", "CRITICAL_FINDING", "", ""]])
print(mytargets)

!pip install -U radnlp==0.2.0.8
IntroductionToPyConTextNLP.ipynb
chapmanbe/nlm_clinical_nlp
mit
Sentence Splitting pyConTextNLP operates on a sentence level and so the first step we need to take is to split our document into individual sentences. pyConTextNLP comes with a simple sentence splitter class.
import pyConTextNLP.helpers as helpers

splitter = helpers.sentenceSplitter()
splitter.splitSentences("This is Dr. Chapman's first sentence. This is the 2.0 sentence.")
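To see why a dedicated splitter matters, here is a naive regex splitter (a deliberately simplistic sketch, not pyConTextNLP's implementation) failing on the same example:

```python
import re

# Naive sentence splitting: break on ., !, or ? followed by whitespace.
# The abbreviation "Dr." wrongly terminates the first sentence.
def naive_split(text):
    return re.split(r'(?<=[.!?])\s+', text)

print(naive_split("This is Dr. Chapman's first sentence. This is the 2.0 sentence."))
```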
IntroductionToPyConTextNLP.ipynb
chapmanbe/nlm_clinical_nlp
mit
However, sentence splitting is a common NLP task and so most full-fledged NLP applications provide sentence splitters. We usually rely on the sentence splitter that is part of the TextBlob package, which in turn relies on the Natural Language Toolkit (NLTK). So before proceeding we need to download some NLTK resources ...
!python -m textblob.download_corpora
IntroductionToPyConTextNLP.ipynb
chapmanbe/nlm_clinical_nlp
mit
Combining cross_fields and best_fields Based on previous tuning, we have the following optimal parameters for each multi_match query type.
cross_fields_params = {
    'operator': 'OR',
    'minimum_should_match': 50,
    'tie_breaker': 0.25,
    'url|boost': 1.0129720302556104,
    'title|boost': 5.818478716515356,
    'body|boost': 3.736613263685484,
}
best_fields_params = {
    'tie_breaker': 0.3936135232328522,
    'url|boost': 0.0,
    'title|boost':...
Machine Learning/Query Optimization/notebooks/Appendix B - Combining queries.ipynb
elastic/examples
apache-2.0
We've seen the process of optimizing field boosts on two different multi_match query types, but it would be interesting to see if combining them in some way might result in an even better MRR@100. Let's give it a shot and find out. Side note: combining queries where each sub-query is always executed may improve relevanc...
def prefix_keys(d, prefix):
    return {f'{prefix}{k}': v for k, v in d.items()}

# prefix key of each sub-query
# add default boosts
all_params = {
    **prefix_keys(cross_fields_params, 'cross_fields|'),
    'cross_fields|boost': 1.0,
    **prefix_keys(best_fields_params, 'best_fields|'),
    'best_fields|boost': 1.0...
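The prefix_keys helper simply namespaces each sub-query's parameters so the merged dict has no key collisions; in isolation (toy values):

```python
def prefix_keys(d, prefix):
    # prepend the sub-query name to each parameter key
    return {f'{prefix}{k}': v for k, v in d.items()}

merged = {
    **prefix_keys({'tie_breaker': 0.25}, 'cross_fields|'),
    **prefix_keys({'tie_breaker': 0.39}, 'best_fields|'),
}
print(merged)  # {'cross_fields|tie_breaker': 0.25, 'best_fields|tie_breaker': 0.39}
```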
Machine Learning/Query Optimization/notebooks/Appendix B - Combining queries.ipynb
elastic/examples
apache-2.0
Baseline evaluation
%%time

_ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id, params=all_params)
Machine Learning/Query Optimization/notebooks/Appendix B - Combining queries.ipynb
elastic/examples
apache-2.0
Query tuning Here we'll just tune the boosts for each sub-query. Note that this takes twice as long as tuning individual queries because we have two queries combined.
%%time

_, _, final_params_boosts, metadata_boosts = optimize_query_mrr100(es, max_concurrent_searches, index, template_id,
    config_space=Config.parse({
        'num_iterations': 30,
        'num_initial_points': 15,
        'space': {
            'cross_fields|boost': { 'low': 0.0, 'high': 5.0 },
            'best_...
Machine Learning/Query Optimization/notebooks/Appendix B - Combining queries.ipynb
elastic/examples
apache-2.0
Seems that there's not much to tune here, but let's keep going.
%%time

_ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id, params=final_params_boosts)
Machine Learning/Query Optimization/notebooks/Appendix B - Combining queries.ipynb
elastic/examples
apache-2.0
So that's the same as without tuning. What's going on? Debugging Plot scores from each sub-query to determine why we don't really see an improvement over individual queries.
import matplotlib.pyplot as plt
from itertools import chain

from qopt.notebooks import ROOT_DIR
from qopt.search import temporary_search_template, search_template
from qopt.trec import load_queries_as_tuple_list, load_qrels

def collect_scores():
    def _search(template_id, query_string, params, doc_id):
        ...
Machine Learning/Query Optimization/notebooks/Appendix B - Combining queries.ipynb
elastic/examples
apache-2.0
Check the dataframe to see which columns contain 0's. Based on the data type of each column, do these 0's all make sense? Which 0's are suspicious?
for name in names:
    print(name, ':', any(df.loc[:, name] == 0))
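An equivalent vectorized check (sketched here on a hypothetical toy frame) flags zero-containing columns in a single pass:

```python
import pandas as pd

# (df == 0).any() gives a per-column "contains a zero" flag.
toy = pd.DataFrame({'glucose': [148, 0, 183], 'bmi': [33.6, 26.6, 23.3]})
print((toy == 0).any())
```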
python-machine-learning/code/ch04exercises.ipynb
jeancochrane/learning
mit
Answer: Columns 2-6 (glucose, blood pressure, skin fold thickness, insulin, and BMI) all contain zeros, but none of these measurements should ever be 0 in a human. Assume that 0s indicate missing values, and fix them in the dataset by eliminating samples with missing features. Then run a logistic regression, and measu...
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

for i in range(1,6):
    df.loc[df.loc[:, names[i]] == 0, names[i]] = np.nan
df_no_nan = df.dropna(axis=0, how='any')
X = df_no_nan.iloc[:, :8]....
python-machine-learning/code/ch04exercises.ipynb
jeancochrane/learning
mit
Next, replace missing features through mean imputation. Run a regression and measure the performance of the model.
from sklearn.preprocessing import Imputer

# axis=0 imputes each column (feature) with that column's mean
imputer = Imputer(missing_values='NaN', strategy='mean', axis=0)
X = imputer.fit_transform(df.iloc[:, :8].values)
y = df.iloc[:, 8].values
fit_and_score_rlr(X, y)
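Mean imputation itself is simple enough to sketch by hand with NumPy (hypothetical toy data; each NaN is replaced by its column's mean):

```python
import numpy as np

# Replace NaNs column-by-column with that column's mean.
X = np.array([[1.0, np.nan],
              [3.0, 4.0],
              [5.0, 6.0]])
col_means = np.nanmean(X, axis=0)   # means ignoring NaNs: [3.0, 5.0]
rows, cols = np.where(np.isnan(X))
X[rows, cols] = col_means[cols]
print(X)  # the NaN becomes 5.0
```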
python-machine-learning/code/ch04exercises.ipynb
jeancochrane/learning
mit
Comment on your results. Answer: Interestingly, there's not a huge performance improvement between the two approaches! In my run, using mean imputation corresponded to about a 3 point increase in model performance. Some ideas for why this might be: This is a small dataset to start out with, so removing ~half its sampl...
data_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/tae/tae.data'
names = ['native_speaker', 'instructor', 'course', 'season', 'class_size', 'rating']
df = pandas.read_csv(data_url, header=None, index_col=False, names=names)
print(df)
python-machine-learning/code/ch04exercises.ipynb
jeancochrane/learning
mit
Which of the features are categorical? Are they ordinal, or nominal? Which features are numeric? Answer: According to the documentation: Native speaker: categorical (nominal) Instructor: categorical (nominal) Course: categorical (nominal) Season: categorical (nominal) Class size: numeric Rating: categorical (ordinal) ...
X = df.iloc[:, :-1].values
y = df.iloc[:, -1].values
fit_and_score_rlr(X, y, normalize=True)
python-machine-learning/code/ch04exercises.ipynb
jeancochrane/learning
mit
Now, encode the categorical variables with a one-hot encoder. Again, run a classification and measure performance.
from sklearn.preprocessing import OneHotEncoder

enc = OneHotEncoder(categorical_features=range(5))
X_encoded = enc.fit_transform(X)
fit_and_score_rlr(X_encoded, y, normalize=False)
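One-hot encoding can also be sketched with pandas (a hypothetical toy column; pd.get_dummies expands each category into its own indicator column):

```python
import pandas as pd

# Each distinct category becomes a binary indicator column.
seasons = pd.Series(['summer', 'winter', 'summer'], name='season')
dummies = pd.get_dummies(seasons)
print(dummies)
```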
python-machine-learning/code/ch04exercises.ipynb
jeancochrane/learning
mit
Comment on your results. Feature scaling Raschka mentions that decision trees and random forests do not require standardized features prior to classification, while the rest of the classifiers we've seen so far do. Why might that be? Explain the intuition behind this idea based on the differences between tree-based cla...
class SBS(object):
    """
    Class to select the k-best features in a dataset via sequential backwards selection.
    """
    def __init__(self):
        """
        Initialize the SBS model.
        """
        pass

    def fit(self):
        """
        Fit SBS to a dataset.
        """
        pass

    d...
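The scaling question above has a quick demonstration: a tree split `x <= t` depends only on the ordering of feature values, and standardization is monotonic, so it preserves that ordering (a toy sketch with hypothetical values):

```python
# A decision-tree split compares feature values to a threshold, so only
# their order matters; standardization is monotonic and preserves order.
x = [1.0, 10.0, 100.0]
mean = sum(x) / len(x)
std = (sum((v - mean) ** 2 for v in x) / len(x)) ** 0.5
scaled = [(v - mean) / std for v in x]

order = sorted(range(len(x)), key=lambda i: x[i])
scaled_order = sorted(range(len(x)), key=lambda i: scaled[i])
print(order == scaled_order)  # True: the same splits are available either way
```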
python-machine-learning/code/ch04exercises.ipynb
jeancochrane/learning
mit
Train SVM on features Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM

learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
#learning_rates = list(map(lambda x: x*1e-9, np.arange(0.9, 2, 0.1)))
#regularization_strengths = lis...
assignment1/features.ipynb
miguelfrde/stanford-cs231n
mit
Inline question 1: Describe the misclassification results that you see. Do they make sense? They make sense given that we are using color histogram features, so in some cases the background seems to drive the prediction. For example, a blue or flat background for a plane, trucks classified as cars (street + background) and the other wa...
print(X_train_feats.shape)

from cs231n.classifiers.neural_net import TwoLayerNet

input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
best_net = None

################################################################################
# TODO: Train a two-layer neural network on image features. You may w...
assignment1/features.ipynb
miguelfrde/stanford-cs231n
mit
First, I made a mistake naming the data set! It's 2015 data, not 2014 data. But yes, still use 311-2014.csv. You can rename it. Importing and preparing your data Import your data, but only the first 200,000 rows. You'll also want to change the index to be a datetime based on the Created Date column - you'll want to ch...
df = pd.read_csv("311-2014.csv", nrows=200000)
dateutil.parser.parse(df['Created Date'][0])

def parse_date(str_date):
    return dateutil.parser.parse(str_date)

df['created_datetime'] = df['Created Date'].apply(parse_date)
df.index = df['created_datetime']
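pandas can also parse the whole column at once, which is usually faster than applying dateutil row by row; a sketch (the date strings below are hypothetical examples of the 311 format):

```python
import pandas as pd

# pd.to_datetime parses a whole Series in one call.
s = pd.Series(['01/02/2015 12:30:00 AM', '03/15/2015 01:45:00 PM'])
parsed = pd.to_datetime(s)
print(parsed.dt.year.tolist())  # [2015, 2015]
```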
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
What was the most popular type of complaint, and how many times was it filed?
df['Complaint Type'].describe()
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
Make a horizontal bar graph of the top 5 most frequent complaint types.
df.groupby(by='Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5).plot(kind='barh').invert_yaxis()
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
Which borough has the most complaints per capita? Since it's only 5 boroughs, you can do the math manually.
df.groupby(by='Borough')['Borough'].count()

boro_pop = {
    'BRONX': 1438159,
    'BROOKLYN': 2621793,
    'MANHATTAN': 1636268,
    'QUEENS': 2321580,
    'STATEN ISLAND': 473279}
boro_df = pd.Series.to_frame(df.groupby(by='Borough')['Borough'].count())
boro_df['Population'] = pd.DataFrame.from_dict(boro_pop, orient='in...
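The per-capita step itself is just element-wise Series division; a sketch with hypothetical complaint counts and two of the borough populations from the dict above:

```python
import pandas as pd

# complaints per person = complaint count / borough population
counts = pd.Series({'BRONX': 50000, 'BROOKLYN': 60000})   # hypothetical counts
pops = pd.Series({'BRONX': 1438159, 'BROOKLYN': 2621793})
per_capita = (counts / pops).sort_values(ascending=False)
print(per_capita)
```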
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
According to your selection of data, how many cases were filed in March? How about May?
df['2015-03']['Created Date'].count()
df['2015-05']['Created Date'].count()
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
What was the most popular type of complaint on April 1st?
df['2015-04-01'].groupby(by='Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(1)
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
What were the three most popular types of complaint on April 1st?
df['2015-04-01'].groupby(by='Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(3)
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
What month has the most reports filed? How many? Graph it.
df.resample('M')['Unique Key'].count().sort_values(ascending=False)
df.resample('M').count().plot(y='Unique Key')
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
What week of the year has the most reports filed? How many? Graph the weekly complaints.
df.resample('W')['Unique Key'].count().sort_values(ascending=False).head(5)
df.resample('W').count().plot(y='Unique Key')
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
Noise complaints are a big deal. Use .str.contains to select noise complaints, and make a chart of when they show up annually. Then make a chart of when they show up every day (cyclic).
noise_df = df[df['Complaint Type'].str.contains('Noise')]
noise_df.resample('M').count().plot(y='Unique Key')
noise_df.groupby(by=noise_df.index.hour).count().plot(y='Unique Key')
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
What hour of the day has the most complaints? Graph a day of complaints.
df['Unique Key'].groupby(by=df.index.hour).count().sort_values(ascending=False)
df['Unique Key'].groupby(df.index.hour).count().plot()
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
One of the hours has an odd number of complaints. What are the most common complaints at that hour, and what are the most common complaints the hour before and after?
df[df.index.hour==0].groupby(by='Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)
df[df.index.hour==1].groupby(by='Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)
df[df.index.hour==23].groupby(by='Complaint Type')['Complaint Type'].count().sort_values(asc...
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
So odd. What's the per-minute breakdown of complaints between 12am and 1am? You don't need to include 1am.
midnight_df = df[df.index.hour==0]
midnight_df.groupby(midnight_df.index.minute)['Unique Key'].count().sort_values(ascending=False)
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
Looks like midnight is a little bit of an outlier. Why might that be? Take the 5 most common agencies and graph the times they file reports at (all day, not just midnight).
df.groupby('Agency')['Unique Key'].count().sort_values(ascending=False).head(5)
ax = df[df['Agency']=='NYPD'].groupby(df[df['Agency']=='NYPD'].index.hour)['Unique Key'].count().plot(legend=True, label='NYPD')
df[df['Agency']=='HPD'].groupby(df[df['Agency']=='HPD'].index.hour)['Unique Key'].count().plot(ax=ax, legend=Tru...
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
Graph those same agencies on an annual basis - make it weekly. When do people like to complain? When does the NYPD have an odd number of complaints?
ax = df[df['Agency']=='NYPD'].groupby(df[df['Agency']=='NYPD'].index.week)['Unique Key'].count().plot(legend=True, label='NYPD')
df[df['Agency']=='HPD'].groupby(df[df['Agency']=='HPD'].index.week)['Unique Key'].count().plot(ax=ax, legend=True, label='HPD')
df[df['Agency']=='DOT'].groupby(df[df['Agency']=='DOT'].index.wee...
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
Maybe the NYPD deals with different issues at different times? Check the most popular complaints in July and August vs. the month of May. Also check the most common complaints for the Department of Housing Preservation and Development (HPD) in winter vs. summer.
nypd = df[df['Agency']=='NYPD']
nypd[(nypd.index.month==7) | (nypd.index.month==8)].groupby('Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)
nypd[nypd.index.month==5].groupby('Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)
# seems like mostly noise compl...
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
You should see the output "Hello World!". Once you've verified this, interrupt the above running cell by hitting the stop button. Create and build a Docker image Now we will create a Docker image called hello_node.docker that will do the following: Start from the node image found on Docker Hub by inheriting from nod...
import os

PROJECT_ID = "your-gcp-project-here"  # REPLACE WITH YOUR PROJECT NAME
os.environ["PROJECT_ID"] = PROJECT_ID

%%bash
docker build -f dockerfiles/hello_node.docker -t gcr.io/${PROJECT_ID}/hello-node:v1 .
notebooks/docker_and_kubernetes/solutions/3_k8s_hello_node.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
A declarative approach is being used here: rather than starting or stopping new instances, you declare how many instances should be running at all times. Kubernetes' reconciliation loops make sure that reality matches what you requested, and take action if needed. Here's a diagram summarizing the state of your Kubernet...
%%bash
docker build -f dockerfiles/hello_node.docker -t gcr.io/${PROJECT_ID}/hello-node:v2 .
docker push gcr.io/${PROJECT_ID}/hello-node:v2
notebooks/docker_and_kubernetes/solutions/3_k8s_hello_node.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
OpenSpiel This Colab gets you started with the basics of OpenSpiel. OpenSpiel is a framework for reinforcement learning in games. The code is hosted on GitHub. There is an accompanying video tutorial that works through this Colab; it will be linked here once it is live. There is also an OpenSpiel paper with more detail. I...
!pip install --upgrade open_spiel
open_spiel/colabs/OpenSpielTutorial.ipynb
deepmind/open_spiel
apache-2.0
Part 1. OpenSpiel API Basics.
# Importing pyspiel and showing the list of supported games.
import pyspiel
print(pyspiel.registered_names())

# Loading a game (with no/default parameters).
game = pyspiel.load_game("tic_tac_toe")
print(game)

# Some properties of the games.
print(game.num_players())
print(game.max_utility())
print(game.min_utility())...
open_spiel/colabs/OpenSpielTutorial.ipynb
deepmind/open_spiel
apache-2.0
Part 2. Normal-form Games and Evolutionary Dynamics in OpenSpiel.
import pyspiel

game = pyspiel.create_matrix_game([[1, -1], [-1, 1]], [[-1, 1], [1, -1]])
print(game)  # name not provided: uses a default

state = game.new_initial_state()
print(state)  # action names also not provided; defaults used

# Normal-form games are 1-step simultaneous-move games.
print(state.current_player())...
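The two payoff matrices passed to create_matrix_game are Matching Pennies, a zero-sum game; a quick NumPy check of the zero-sum property (independent of OpenSpiel):

```python
import numpy as np

# Row player's and column player's payoffs sum to zero in every cell.
row_payoffs = np.array([[1, -1], [-1, 1]])
col_payoffs = np.array([[-1, 1], [1, -1]])
print(bool((row_payoffs + col_payoffs == 0).all()))  # True
```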
open_spiel/colabs/OpenSpielTutorial.ipynb
deepmind/open_spiel
apache-2.0
Part 3. Chance Nodes and Partially-Observable Games.
# Kuhn poker: simplified poker with a 3-card deck (https://en.wikipedia.org/wiki/Kuhn_poker)
import pyspiel

game = pyspiel.load_game("kuhn_poker")
print(game.num_distinct_actions())  # bet and fold

# Chance nodes.
state = game.new_initial_state()
print(state.current_player())  # special chance player id
print(st...
open_spiel/colabs/OpenSpielTutorial.ipynb
deepmind/open_spiel
apache-2.0
Part 4. Basic RL: Self-play Q-Learning in Tic-Tac-Toe.
# Let's do independent Q-learning in Tic-Tac-Toe, and play it against random.
# RL is based on python/examples/independent_tabular_qlearning.py
from open_spiel.python import rl_environment
from open_spiel.python import rl_tools
from open_spiel.python.algorithms import tabular_qlearner

# Create the environment
env = rl...
open_spiel/colabs/OpenSpielTutorial.ipynb
deepmind/open_spiel
apache-2.0
<a id = "section1">Data Generation and Visualization</a> Transformation of features to Shogun format using <a href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1DenseFeatures.html">RealFeatures</a> and <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1BinaryLabels.html">BinaryLabels</a>...
shogun_feats_linear = sg.create_features(sg.read_csv(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_linear_features_train.dat')))
shogun_labels_linear = sg.create_labels(sg.read_csv(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_linear_labels_train.dat')))
shogun_feats_non_linear = sg.create_features...
doc/ipython-notebooks/classification/Classification.ipynb
geektoni/shogun
bsd-3-clause
Data visualization methods.
def plot_binary_data(plot, X_train, y_train):
    """
    This function plots 2D binary data with different colors for different labels.
    """
    plot.xlabel(r"$x$")
    plot.ylabel(r"$y$")
    plot.plot(X_train[0, np.argwhere(y_train == 1)], X_train[1, np.argwhere(y_train == 1)], 'ro')
    plot.plot(X_train[0, np.ar...
doc/ipython-notebooks/classification/Classification.ipynb
geektoni/shogun
bsd-3-clause
<a id="section2" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1SVM.html">Support Vector Machine</a> <a id="section2a" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1LibLinear.html">Linear SVM</a> Shogun provides <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1LibL...
plt.figure(figsize=(15,5)) c = 0.5 epsilon = 1e-3 svm_linear = sg.create_machine("LibLinear", C1=c, C2=c, labels=shogun_labels_linear, epsilon=epsilon, liblinear_solver_type="L2R_L2LOSS_SVC") svm_linear.train(shogun_feats_linear) classifiers_lin...
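LibLinear's L2R_L2LOSS_SVC solver minimizes an L2-regularized squared-hinge objective. A sketch of that objective for intuition (the exact scaling conventions inside LibLinear are an assumption here):

```python
def l2r_l2loss_svc_objective(w, X, y, C):
    """0.5*||w||^2 + C * sum_i max(0, 1 - y_i * w.x_i)^2 — the assumed form
    of the objective behind the L2R_L2LOSS_SVC solver type."""
    reg = 0.5 * sum(wi * wi for wi in w)
    loss = 0.0
    for xi, yi in zip(X, y):
        margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
        loss += max(0.0, 1.0 - margin) ** 2
    return reg + C * loss

# A point classified with margin >= 1 contributes no loss, only the regularizer.
obj = l2r_l2loss_svc_objective([1.0, 0.0], [[2.0, 0.0]], [1], C=0.5)
```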
doc/ipython-notebooks/classification/Classification.ipynb
geektoni/shogun
bsd-3-clause
SVM - Kernels Shogun provides many options for using kernel functions. Kernels in Shogun are based on two classes which are <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1Kernel.html">Kernel</a> and <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1KernelMachine.html">KernelMachin...
gaussian_c = 0.7 gaussian_kernel_linear = sg.create_kernel("GaussianKernel", width=20) gaussian_svm_linear = sg.create_machine('LibSVM', C1=gaussian_c, C2=gaussian_c, kernel=gaussian_kernel_linear, labels=shogun_labels_linear) gaussian_svm_linear.train(shogun_feats_linear) classifiers...
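Shogun's GaussianKernel is parameterized by a width τ; assuming the common parameterization k(x, x') = exp(−||x − x'||² / τ), a small sketch:

```python
import math

def gaussian_kernel(x, y, width=20.0):
    """Gaussian (RBF) kernel with a Shogun-style 'width' parameter
    (assumed form): k(x, y) = exp(-||x - y||^2 / width)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / width)

gaussian_kernel([0.0, 0.0], [0.0, 0.0])  # identical points -> 1.0
```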
doc/ipython-notebooks/classification/Classification.ipynb
geektoni/shogun
bsd-3-clause
<a id ="section2c" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CSigmoidKernel.html">Sigmoid Kernel</a>
sigmoid_c = 0.9 sigmoid_kernel_linear = sg.create_kernel("SigmoidKernel", cache_size=200, gamma=1, coef0=0.5) sigmoid_kernel_linear.init(shogun_feats_linear, shogun_feats_linear) sigmoid_svm_linear = sg.create_machine('LibSVM', C1=sigmoid_c, C2=sigmoid_c, kernel=sigmoid_kernel_linear,...
doc/ipython-notebooks/classification/Classification.ipynb
geektoni/shogun
bsd-3-clause
<a id ="section2d" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CPolyKernel.html">Polynomial Kernel</a>
poly_c = 0.5 degree = 4 poly_kernel_linear = sg.create_kernel('PolyKernel', degree=degree, c=1.0) poly_kernel_linear.init(shogun_feats_linear, shogun_feats_linear) poly_svm_linear = sg.create_machine('LibSVM', C1=poly_c, C2=poly_c, kernel=poly_kernel_linear, labels=shogun_labels_linear) p...
doc/ipython-notebooks/classification/Classification.ipynb
geektoni/shogun
bsd-3-clause
<a id ="section3" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1GaussianNaiveBayes.html">Naive Bayes</a>
multiclass_labels_linear = shogun_labels_linear.get('labels') for i in range(0,len(multiclass_labels_linear)): if multiclass_labels_linear[i] == -1: multiclass_labels_linear[i] = 0 multiclass_labels_non_linear = shogun_labels_non_linear.get('labels') for i in range(0,len(multiclass_labels_non_linear)): ...
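The loop above converts Shogun's ±1 binary labels to the {0, 1} encoding the multiclass machines expect. The same remapping as a small helper (illustrative, not part of Shogun):

```python
def binary_to_multiclass(labels):
    """Map Shogun-style binary labels {-1, +1} to multiclass labels {0, 1}."""
    return [0 if label == -1 else label for label in labels]

binary_to_multiclass([-1, 1, 1, -1])  # -> [0, 1, 1, 0]
```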
doc/ipython-notebooks/classification/Classification.ipynb
geektoni/shogun
bsd-3-clause
<a id ="section4" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1KNN.html">Nearest Neighbors</a>
number_of_neighbors = 10 distances_linear = sg.create_distance('EuclideanDistance') distances_linear.init(shogun_feats_linear, shogun_feats_linear) knn_linear = sg.create_machine("KNN", k=number_of_neighbors, distance=distances_linear, labels=shogun_labels_linear) knn_linear.train() cla...
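The KNN machine above classifies a point by majority vote among its `number_of_neighbors` nearest training points under Euclidean distance. A minimal pure-Python sketch of that idea (not Shogun's implementation):

```python
import math
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority label among its k Euclidean-nearest neighbours."""
    dists = sorted((math.dist(x, xi), yi) for xi, yi in zip(X_train, y_train))
    top_labels = [yi for _, yi in dists[:k]]
    return Counter(top_labels).most_common(1)[0][0]

knn_predict([[0, 0], [0, 1], [5, 5]], [0, 0, 1], [0.2, 0.2], k=3)  # -> 0
```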
doc/ipython-notebooks/classification/Classification.ipynb
geektoni/shogun
bsd-3-clause
<a id ="section5" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1CLDA.html">Linear Discriminant Analysis</a>
gamma = 0.1 lda_linear = sg.create_machine('LDA', gamma=gamma, labels=shogun_labels_linear) lda_linear.train(shogun_feats_linear) classifiers_linear.append(lda_linear) classifiers_names.append("LDA") fadings.append(True) plt.figure(figsize=(15,5)) plt.subplot(121) plt.title("LDA - Linear Features") plot_model(plt,lda...
doc/ipython-notebooks/classification/Classification.ipynb
geektoni/shogun
bsd-3-clause
<a id ="section6" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1QDA.html">Quadratic Discriminant Analysis</a>
qda_linear = sg.create_machine("QDA", labels=shogun_multiclass_labels_linear) qda_linear.train(shogun_feats_linear) classifiers_linear.append(qda_linear) classifiers_names.append("QDA") fadings.append(False) plt.figure(figsize=(15,5)) plt.subplot(121) plt.title("QDA - Linear Features") plot_model(plt,qda_linear,feats_...
doc/ipython-notebooks/classification/Classification.ipynb
geektoni/shogun
bsd-3-clause
<a id ="section7" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1GaussianProcessBinaryClassification.html">Gaussian Process</a> <a id ="section7a">Logit Likelihood model</a> Shogun's <a href= "http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1LogitLikelihood.html">LogitLikelihood</a> and <a...
# create Gaussian kernel with width = 5.0 kernel = sg.create_kernel("GaussianKernel", width=5.0) # create zero mean function zero_mean = sg.create_gp_mean("ZeroMean") # create logit likelihood model likelihood = sg.create_gp_likelihood("LogitLikelihood") # specify EP approximation inference method inference_model_linea...
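The logit likelihood models the binary label as p(y | f) = 1 / (1 + exp(−y·f)) for y ∈ {−1, +1}, where f is the latent function value. A small sketch of that link function:

```python
import math

def logit_likelihood(y, f):
    """Bernoulli-logit likelihood p(y | f) = 1 / (1 + exp(-y * f))
    for a label y in {-1, +1} and latent function value f."""
    return 1.0 / (1.0 + math.exp(-y * f))

logit_likelihood(+1, 0.0)  # f = 0 gives probability 0.5 for either label
```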
doc/ipython-notebooks/classification/Classification.ipynb
geektoni/shogun
bsd-3-clause
<a id ="section7b">Probit Likelihood model</a> Shogun's <a href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1ProbitLikelihood.html">ProbitLikelihood</a> class is used.
likelihood = sg.create_gp_likelihood("ProbitLikelihood") inference_model_linear = sg.create_gp_inference("EPInferenceMethod", kernel=kernel, features=shogun_feats_linear, mean_function=zero_mean, ...
doc/ipython-notebooks/classification/Classification.ipynb
geektoni/shogun
bsd-3-clause
Run a modified version of check_test_score.py so that it works with the superclass representation.
import numpy as np
import pylearn2.utils
import pylearn2.config
import theano
import neukrill_net.dense_dataset
import neukrill_net.utils
import sklearn.metrics
import argparse
import os
import pylearn2.config.yaml_parse
notebooks/model_run_and_result_analyses/Analyse hierarchy-based model with no augmentation or 8 augmentation.ipynb
Neuroglycerin/neukrill-net-work
mit
Check which GPU is free.
%env THEANO_FLAGS = 'device=gpu3,floatX=float32,base_compiledir=~/.theano/stonesoup3'
verbose = False
augment = 1
settings = neukrill_net.utils.Settings("settings.json")
notebooks/model_run_and_result_analyses/Analyse hierarchy-based model with no augmentation or 8 augmentation.ipynb
Neuroglycerin/neukrill-net-work
mit
Give the path to the run settings .json file.
run_settings = neukrill_net.utils.load_run_settings('run_settings/alexnet_based_extra_convlayer_with_superclasses.json', settings, force=True) model = pylearn2.utils.serial.load(run_settings['pickle abspath']) # format the YAML yaml_string = neukrill_net.utils.format_yaml(run_settings, settings) # load p...
notebooks/model_run_and_result_analyses/Analyse hierarchy-based model with no augmentation or 8 augmentation.ipynb
Neuroglycerin/neukrill-net-work
mit
The best .pkl scored as:
logloss = sklearn.metrics.log_loss(dataset.y[:, :len(settings.classes)],
                                   y[:, :len(settings.classes)])
print("Log loss: {0}".format(logloss))
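`sklearn.metrics.log_loss` reports the mean categorical cross-entropy, −(1/N) Σ<sub>i</sub> Σ<sub>k</sub> y<sub>ik</sub> log p<sub>ik</sub>. A pure-Python sketch of the same quantity (sklearn's exact clipping and normalization details may differ):

```python
import math

def multiclass_log_loss(y_true, y_pred, eps=1e-15):
    """Mean categorical cross-entropy over one-hot target rows y_true
    and predicted probability rows y_pred."""
    total = 0.0
    for truth, probs in zip(y_true, y_pred):
        for t, p in zip(truth, probs):
            p = min(max(p, eps), 1.0 - eps)  # keep log() away from 0 and 1
            total += t * math.log(p)
    return -total / len(y_true)

multiclass_log_loss([[1, 0]], [[0.5, 0.5]])  # -> log(2) ≈ 0.693
```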
notebooks/model_run_and_result_analyses/Analyse hierarchy-based model with no augmentation or 8 augmentation.ipynb
Neuroglycerin/neukrill-net-work
mit
The most recent .pkl scored as: (rerun the relevant cells with a different path)
logloss = sklearn.metrics.log_loss(dataset.y[:, :len(settings.classes)],
                                   y[:, :len(settings.classes)])
print("Log loss: {0}".format(logloss))
%env THEANO_FLAGS = device=gpu2,floatX=float32,base_compiledir=~/.theano/stonesoup2
%env
notebooks/model_run_and_result_analyses/Analyse hierarchy-based model with no augmentation or 8 augmentation.ipynb
Neuroglycerin/neukrill-net-work
mit
Check the same model with 8 augmentation.
import numpy as np import pylearn2.utils import pylearn2.config import theano import neukrill_net.dense_dataset import neukrill_net.utils import sklearn.metrics import argparse import os import pylearn2.config.yaml_parse verbose = False augment = 1 settings = neukrill_net.utils.Settings("settings.json") run_settings ...
notebooks/model_run_and_result_analyses/Analyse hierarchy-based model with no augmentation or 8 augmentation.ipynb
Neuroglycerin/neukrill-net-work
mit
Best .pkl scored as:
logloss = sklearn.metrics.log_loss(dataset.y[:, :len(settings.classes)],
                                   y[:, :len(settings.classes)])
print("Log loss: {0}".format(logloss))
notebooks/model_run_and_result_analyses/Analyse hierarchy-based model with no augmentation or 8 augmentation.ipynb
Neuroglycerin/neukrill-net-work
mit
Strange. Not as good as we hoped. Is there a problem with augmentation? Let's plot the nll.
import pylearn2.utils import pylearn2.config import theano import neukrill_net.dense_dataset import neukrill_net.utils import numpy as np %matplotlib inline import matplotlib.pyplot as plt #import holoviews as hl #load_ext holoviews.ipython import sklearn.metrics m = pylearn2.utils.serial.load( "/disk/scratch/neur...
notebooks/model_run_and_result_analyses/Analyse hierarchy-based model with no augmentation or 8 augmentation.ipynb
Neuroglycerin/neukrill-net-work
mit
Looks like it's pretty stable at around 4, with one strange random glitch that gave the best result. Look at the best .pkl of the no-augmentation model again (just to confirm that it was indeed good):
import numpy as np import pylearn2.utils import pylearn2.config import theano import neukrill_net.dense_dataset import neukrill_net.utils import sklearn.metrics import argparse import os import pylearn2.config.yaml_parse verbose = False augment = 1 settings = neukrill_net.utils.Settings("settings.json") run_settings ...
notebooks/model_run_and_result_analyses/Analyse hierarchy-based model with no augmentation or 8 augmentation.ipynb
Neuroglycerin/neukrill-net-work
mit
It was. Annoying. Let's plot the nll too:
m = pylearn2.utils.serial.load(
    "/disk/scratch/neuroglycerin/models/alexnet_based_extra_convlayer_with_superclasses.pkl")
channel = m.monitor.channels["valid_y_y_1_nll"]
plt.plot(channel.example_record, channel.val_record)
notebooks/model_run_and_result_analyses/Analyse hierarchy-based model with no augmentation or 8 augmentation.ipynb
Neuroglycerin/neukrill-net-work
mit
The SeqRecord Object The SeqRecord (Sequence Record) class is defined in the Bio.SeqRecord module. This class allows higher level features such as identifiers and features to be associated with a sequence, and is the basic data type for the Bio.SeqIO sequence input/output interface. The SeqRecord class itself is quite ...
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord

simple_seq = Seq("GATC")
simple_seq_r = SeqRecord(simple_seq)
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit
Additionally, you can pass the id, name and description to the initialization function; if not supplied, they will be set to strings indicating they are unknown, and can be modified subsequently:
simple_seq_r.id
simple_seq_r.id = "AC12345"
simple_seq_r.description = "Made up sequence I wish I could write a paper about"
print(simple_seq_r.description)
simple_seq_r.seq
print(simple_seq_r.seq)
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit
Including an identifier is very important if you want to output your SeqRecord to a file. You would normally include this when creating the object:
simple_seq = Seq("GATC")
simple_seq_r = SeqRecord(simple_seq, id="AC12345")
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit
As mentioned above, the SeqRecord has a dictionary attribute, annotations. This is used for any miscellaneous annotations that don't fit under one of the other more specific attributes. Adding annotations is easy, and just involves dealing directly with the annotations dictionary:
simple_seq_r.annotations["evidence"] = "None. I just made it up."
print(simple_seq_r.annotations)
print(simple_seq_r.annotations["evidence"])
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit
Working with per-letter annotations is similar: letter_annotations is a dictionary-like attribute which will let you assign any Python sequence (i.e. a string, list or tuple) that has the same length as the sequence:
simple_seq_r.letter_annotations["phred_quality"] = [40, 40, 38, 30]
print(simple_seq_r.letter_annotations)
print(simple_seq_r.letter_annotations["phred_quality"])
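As an aside on the phred_quality values used above: a PHRED score Q encodes a per-base error probability of 10^(−Q/10), so Q40 means a 1-in-10,000 chance the base call is wrong:

```python
def phred_to_error_prob(q):
    """Convert a PHRED quality score Q to its error probability 10**(-Q/10)."""
    return 10 ** (-q / 10)

[phred_to_error_prob(q) for q in [40, 40, 38, 30]]  # Q40 -> 0.0001, Q30 -> 0.001
```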
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit
The dbxrefs and features attributes are just Python lists, and should be used to store strings and SeqFeature objects (discussed later) respectively. SeqRecord objects from FASTA files This example uses a fairly large FASTA file containing the whole sequence for *Yersinia pestis biovar Microtus* str. 91001 plasm...
from Bio import SeqIO

record = SeqIO.read("data/NC_005816.fna", "fasta")
record
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit
Now, let's have a look at the key attributes of this SeqRecord individually - starting with the seq attribute which gives you a Seq object:
record.seq
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit
Next, the identifiers and description:
record.id
record.name
record.description
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit
As you can see above, the first word of the FASTA record's title line (after removing the greater than symbol) is used for both the id and name attributes. The whole title line (after removing the greater than symbol) is used for the record description. This is deliberate, partly for backwards compatibility reasons, bu...
record.dbxrefs
record.annotations
record.letter_annotations
record.features
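The FASTA title-line convention described above (first word → id and name, whole line → description) can be sketched as follows; this is a simplified imitation for illustration, not Bio.SeqIO's actual parser:

```python
def parse_fasta_title(line):
    """Mimic how a FASTA SeqRecord's id/name/description are filled:
    strip '>', use the first word for both id and name, and the whole
    title line for the description."""
    title = line.lstrip(">").rstrip("\n")
    first_word = title.split(None, 1)[0] if title else ""
    return {"id": first_word, "name": first_word, "description": title}

parse_fasta_title(">NC_005816.1 Yersinia pestis plasmid pPCP1")
```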
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit
In this case our example FASTA file was from the NCBI, and they have a fairly well defined set of conventions for formatting their FASTA lines. This means it would be possible to parse this information and extract the GI number and accession for example. However, FASTA files from other sources vary, so this isn't possi...
record = SeqIO.read("data/NC_005816.gb", "genbank")
record
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit
You should be able to spot some differences already! But taking the attributes individually, the sequence string is the same as before, but this time Bio.SeqIO has been able to automatically assign a more specific alphabet:
record.seq
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit
The name comes from the LOCUS line, while the `id` includes the version suffix. The description comes from the DEFINITION line:
record.id
record.name
record.description
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit