Therefore, the function to extract the comments is:
def parse_comments(comments): ''' comment = { "bloggerId": "author", "sentences": [], # all sentences in a comment, "parents": [] # the order depends on how beautifulsoup gives me the parents } ''' parsed_comments = {} for c in comments: comment = {} ...
testdataextractor/TestDataExtractor.ipynb
betoesquivel/onforums-application
mit
Clustering just the sentences Vectorizing the sentences (TFIDF)
from sklearn.feature_extraction.text import TfidfVectorizer import nltk.stem english_stemmer = nltk.stem.SnowballStemmer('english') class StemmedTfidfVectorizer(TfidfVectorizer): def build_analyzer(self): analyzer=super(StemmedTfidfVectorizer,self).build_analyzer() return lambda doc:(english_stemm...
testdataextractor/TestDataExtractor.ipynb
betoesquivel/onforums-application
mit
Dimensionality reduction and Normalization
import gensim #Dimensionality reduction using LSI. Go from 6D to 2D. X = sentences_vectors.todense() dct = gensim.corpora.Dictionary(X) lsi_docs = {} num_topics = 500 lsi_model = gensim.models.LsiModel(dct, num_topics=500) print lsi_model.shape print lsi_model[:50]
testdataextractor/TestDataExtractor.ipynb
betoesquivel/onforums-application
mit
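The gensim call above fails because LsiModel expects a gensim corpus (and a dictionary of terms), not a dense matrix, and it also has no `.shape` attribute. A working alternative for reducing a sparse TF-IDF matrix to 2 dimensions is scikit-learn's TruncatedSVD, the standard LSA route. A minimal sketch, assuming `sentences_vectors` is the sparse TF-IDF matrix from the earlier cell (a toy stand-in is built here so the snippet is self-contained):

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import Normalizer

# toy stand-in for sentences_vectors from the earlier cell
docs = ["the cat sat", "the cat ran", "dogs bark loudly", "dogs bark"]
sentences_vectors = TfidfVectorizer().fit_transform(docs)

# LSA: truncated SVD directly on the sparse matrix, then L2-normalize rows
svd = TruncatedSVD(n_components=2, random_state=0)
X_2d = Normalizer(copy=False).fit_transform(svd.fit_transform(sentences_vectors))
print(X_2d.shape)  # (4, 2)
```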
Clustering with MeanShift. Why are all the vectors valued at 0?
import numpy as np from sklearn.cluster import MeanShift, estimate_bandwidth bandwidth = estimate_bandwidth(X, quantile=0.3) ms = MeanShift(bandwidth=bandwidth, bin_seeding=True) ms.fit(X) labels = ms.labels_ cluster_centers = ms.cluster_centers_ labels_unique = np.unique(labels) n_clusters_ = len(labels_unique) pr...
testdataextractor/TestDataExtractor.ipynb
betoesquivel/onforums-application
mit
Using the same approach as a movie clusterer http://brandonrose.org/clustering Imports
import numpy as np import pandas as pd import nltk import re import os import codecs from sklearn import feature_extraction import mpld3
testdataextractor/TestDataExtractor.ipynb
betoesquivel/onforums-application
mit
Stopwords, stemming, and tokenizing
stopwords = nltk.corpus.stopwords.words('english') from nltk.stem.snowball import SnowballStemmer stemmer = SnowballStemmer("english") print 'Done' def tokenize_and_stem(sentences): tokens = [word for sent in sentences for word in nltk.word_tokenize(sent)] filtered_tokens = [] for token in t...
testdataextractor/TestDataExtractor.ipynb
betoesquivel/onforums-application
mit
Make the vocabulary, stemmed and non-stemmed
totalvocab_stemmed = [] totalvocab_tokenized = [] allwords_stemmed = tokenize_and_stem(article['sentences'].values()) totalvocab_stemmed.extend(allwords_stemmed) allwords_tokenized = tokenize_only(article['sentences'].values()) totalvocab_tokenized.extend(allwords_tokenized)
testdataextractor/TestDataExtractor.ipynb
betoesquivel/onforums-application
mit
Pandas data frame to visualize the vocabulary
vocab_frame = pd.DataFrame({'words': totalvocab_tokenized}, index = totalvocab_stemmed) print 'there are ' + str(vocab_frame.shape[0]) + ' items in vocab_frame' print 'here are the first words in the vocabulary' vocab_frame.head()
testdataextractor/TestDataExtractor.ipynb
betoesquivel/onforums-application
mit
TF-IDF and document similarity
from sklearn.feature_extraction.text import TfidfVectorizer tfidf_vectorizer = TfidfVectorizer(max_df=0.8, max_features=20000, min_df=0.2, stop_words='english', use_idf=True, tokenizer=tokenize_and_stem, ngram_range=(...
testdataextractor/TestDataExtractor.ipynb
betoesquivel/onforums-application
mit
Cosine Similarity
from sklearn.metrics.pairwise import cosine_similarity dist = 1 - cosine_similarity(tfidf_matrix) dist_frame = pd.DataFrame(dist) print dist
testdataextractor/TestDataExtractor.ipynb
betoesquivel/onforums-application
mit
K-means clustering
from sklearn.cluster import KMeans num_clusters = 5 km = KMeans(n_clusters=num_clusters) %time km.fit(tfidf_matrix) clusters = km.labels_.tolist() clusters
testdataextractor/TestDataExtractor.ipynb
betoesquivel/onforums-application
mit
Multidimensional scaling to plot?
import os import matplotlib.pyplot as plt import matplotlib as mpl from sklearn.manifold import MDS MDS() mds = MDS(n_components=2, dissimilarity="precomputed", random_state=1) pos = mds.fit_transform(dist) xs, ys = pos[:,0], pos[:, 1]
testdataextractor/TestDataExtractor.ipynb
betoesquivel/onforums-application
mit
Plot
cluster_colors = {0: '#1b9e77', 1: '#d95f02', 2: '#7570b3', 3: '#e7298a', 4: '#66a61e'} cluster_names = {0: 'C0', 1: 'C1', 2: 'C2', 3: 'C3', 4: 'C4'} # iPython now will show matplotlib plots inline %matplotlib inline df = pd.DataFrame(dict(x=xs, y=ys...
testdataextractor/TestDataExtractor.ipynb
betoesquivel/onforums-application
mit
Hierarchical document clustering: the Ward clustering algorithm
from scipy.cluster.hierarchy import ward, dendrogram linkage_matrix = ward(dist) #define the linkage_matrix # using ward clustering pre-computed distances fig, ax = plt.subplots(figsize=(15,20)) # set size ax = dendrogram(linkage_matrix, orientation="right", labels=["s{0}".format(x) for x in range(190)]) plt.tick_par...
testdataextractor/TestDataExtractor.ipynb
betoesquivel/onforums-application
mit
Extracting the links
soup = BeautifulSoup(article_text, "lxml") def is_valid_link(tag): if tag.name != 'link': return False link = tag l_conf = link['link_confidence'] l_val = link['validation'] arg = link.find_next_sibling('argument') sent = link.find_next_sibling('sentiment') a_val = arg['validation'] ...
testdataextractor/TestDataExtractor.ipynb
betoesquivel/onforums-application
mit
Exercise 6.1 Impute the missing values of the Age and Embarked columns
titanic.Age.fillna(titanic.Age.median(), inplace=True) titanic.isnull().sum() titanic.Embarked.mode() titanic.Embarked.fillna('S', inplace=True) titanic.isnull().sum()
exercises/06-Titanic_cross_validation.ipynb
MonicaGutierrez/PracticalMachineLearningClass
mit
Exercise 6.3 Convert the Sex and Embarked columns to categorical features
titanic['Sex_Female'] = titanic.Sex.map({'male':0, 'female':1}) titanic.head() embarkedummy = pd.get_dummies(titanic.Embarked, prefix='Embarked') embarkedummy.drop(embarkedummy.columns[0], axis=1, inplace=True) titanic = pd.concat([titanic, embarkedummy], axis=1) titanic.head()
exercises/06-Titanic_cross_validation.ipynb
MonicaGutierrez/PracticalMachineLearningClass
mit
Exercise 6.3 (2 points) From the set of features ['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked'] *Note: use the created categorical features for Sex and Embarked. Select the features that maximize the accuracy of the model using K-Fold cross-validation
y = titanic['Survived'] features = ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare','Sex_Female', 'Embarked_Q', 'Embarked_S'] import numpy as np def comb(n,k) : return np.math.factorial(n) / (np.math.factorial(n-k) * np.math.factorial(k)) np.sum([comb(8,i) for i in range(0,8)]) import itertools possible_models = [] ...
exercises/06-Titanic_cross_validation.ipynb
MonicaGutierrez/PracticalMachineLearningClass
mit
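The truncated cell above enumerates feature subsets with itertools. A self-contained sketch of that exhaustive subset search, using synthetic stand-in data in place of the titanic columns listed above (names `X_df`, `f0`..`f4` are illustrative, not from the notebook):

```python
import itertools
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# hypothetical stand-in for the titanic feature matrix and target
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_df = pd.DataFrame(X, columns=[f'f{i}' for i in range(5)])

best_score, best_feats = -np.inf, None
for k in range(1, len(X_df.columns) + 1):
    for feats in itertools.combinations(X_df.columns, k):
        # K-fold cross-validated accuracy for this feature subset
        score = cross_val_score(LogisticRegression(max_iter=1000),
                                X_df[list(feats)], y, cv=5).mean()
        if score > best_score:
            best_score, best_feats = score, feats
print(best_feats, round(best_score, 3))
```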
Looking at the data The training data contains a row per comment, with an id, the text of the comment, and 6 different labels that we'll try to predict.
train.head()
DEEP LEARNING/NLP/text analyses/NB-SVM strong linear baseline - classif.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Here are a couple of example comments: one toxic, and one with no labels.
train['comment_text'][0] train['comment_text'][2]
DEEP LEARNING/NLP/text analyses/NB-SVM strong linear baseline - classif.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
The length of the comments varies a lot.
lens = train.comment_text.str.len() lens.mean(), lens.std(), lens.max() lens.hist();
DEEP LEARNING/NLP/text analyses/NB-SVM strong linear baseline - classif.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
We'll create a list of all the labels to predict, and we'll also create a 'none' label so we can see how many comments have no labels. We can then summarize the dataset.
label_cols = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate'] train['none'] = 1-train[label_cols].max(axis=1) train.describe() len(train),len(test)
DEEP LEARNING/NLP/text analyses/NB-SVM strong linear baseline - classif.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
There are a few empty comments that we need to get rid of, otherwise sklearn will complain.
COMMENT = 'comment_text' train[COMMENT].fillna("unknown", inplace=True) test[COMMENT].fillna("unknown", inplace=True)
DEEP LEARNING/NLP/text analyses/NB-SVM strong linear baseline - classif.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Building the model We'll start by creating a bag of words representation, as a term document matrix. We'll use ngrams, as suggested in the NBSVM paper.
import re, string re_tok = re.compile(f'([{string.punctuation}“”¨«»®´·º½¾¿¡§£₤‘’])') def tokenize(s): return re_tok.sub(r' \1 ', s).split()
DEEP LEARNING/NLP/text analyses/NB-SVM strong linear baseline - classif.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
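The tokenizer above pads every punctuation character with spaces before splitting, so commas and exclamation marks become their own tokens. A simplified sketch (ASCII punctuation only, dropping the extra Unicode characters from the original cell):

```python
import re
import string

# simplified version of the cell above: ASCII punctuation only
re_tok = re.compile(f'([{string.punctuation}])')

def tokenize(s):
    # surround each punctuation character with spaces, then split on whitespace
    return re_tok.sub(r' \1 ', s).split()

print(tokenize("Hello, world!"))  # ['Hello', ',', 'world', '!']
```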
It turns out that using TF-IDF gives even better priors than the binarized features used in the paper. I don't think this has been mentioned in any paper before, but it improves leaderboard score from 0.59 to 0.55.
n = train.shape[0] vec = TfidfVectorizer(ngram_range=(1,2), tokenizer=tokenize, min_df=3, max_df=0.9, strip_accents='unicode', use_idf=1, smooth_idf=1, sublinear_tf=1 ) trn_term_doc = vec.fit_transform(train[COMMENT]) test_term_doc = vec.transform(test[COMMENT])
DEEP LEARNING/NLP/text analyses/NB-SVM strong linear baseline - classif.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
This creates a sparse matrix with only a small number of non-zero elements (stored elements in the representation below).
trn_term_doc, test_term_doc
DEEP LEARNING/NLP/text analyses/NB-SVM strong linear baseline - classif.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Here's the basic naive Bayes feature equation:
def pr(y_i, y): p = x[y==y_i].sum(0) return (p+1) / ((y==y_i).sum()+1) x = trn_term_doc test_x = test_term_doc
DEEP LEARNING/NLP/text analyses/NB-SVM strong linear baseline - classif.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Fit a model for one dependent at a time:
def get_mdl(y): y = y.values r = np.log(pr(1,y) / pr(0,y)) m = LogisticRegression(C=4, dual=True) x_nb = x.multiply(r) return m.fit(x_nb, y), r preds = np.zeros((len(test), len(label_cols))) for i, j in enumerate(label_cols): print('fit', j) m,r = get_mdl(train[j]) preds[:,i] = m.predi...
DEEP LEARNING/NLP/text analyses/NB-SVM strong linear baseline - classif.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
And finally, create the submission file.
submid = pd.DataFrame({'id': subm["id"]}) submission = pd.concat([submid, pd.DataFrame(preds, columns = label_cols)], axis=1) submission.to_csv('submission.csv', index=False)
DEEP LEARNING/NLP/text analyses/NB-SVM strong linear baseline - classif.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
[4-2] Define a function derivative that draws the tangent line at x=0.5 and computes its slope.
def derivative(f, filename): fig = plt.figure(figsize=(4,4)) images = [] x0, d = 0.5, 0.5 for _ in range(10): subplot = fig.add_subplot(1,1,1) subplot.set_xlim(0, 1) subplot.set_ylim(0, 1) slope = (f(x0+d)-f(x0)) / d linex = np.linspace(0, 1, 100) image0 ...
No5/Figure11 - derivative_animation.ipynb
enakai00/jupyter_NikkeiLinux
apache-2.0
[4-3] Prepare the quadratic function y=x*x and call the function derivative. The animated GIF file "derivative01.gif" is created.
def f(x): y = x*x return y derivative(f, 'derivative01.gif')
No5/Figure11 - derivative_animation.ipynb
enakai00/jupyter_NikkeiLinux
apache-2.0
Bayesian optimization, or sequential model-based optimization, uses a surrogate model to model the expensive-to-evaluate function func. There are several choices for the kind of surrogate model to use. This example compares the performance of gaussian processes, extra trees, and random forests as surrogate models. A...
from skopt.benchmarks import branin as _branin def branin(x, noise_level=0.): return _branin(x) + noise_level * np.random.randn() from matplotlib.colors import LogNorm def plot_branin(): fig, ax = plt.subplots() x1_values = np.linspace(-5, 10, 100) x2_values = np.linspace(0, 15, 100) x_ax, y_ax ...
examples/strategy-comparison.ipynb
glouppe/scikit-optimize
bsd-3-clause
This shows the value of the two-dimensional branin function and the three minima. Objective The objective of this example is to find one of these minima in as few iterations as possible. One iteration is defined as one call to the branin function. We will evaluate each model several times using a different seed for the...
from functools import partial from skopt import gp_minimize, forest_minimize, dummy_minimize func = partial(branin, noise_level=2.0) bounds = [(-5.0, 10.0), (0.0, 15.0)] x0 = [2.5, 7.5] n_calls = 80 def run(minimizer, n_iter=20): return [minimizer(func, bounds, x0=x0, n_calls=n_calls, random_state=n) ...
examples/strategy-comparison.ipynb
glouppe/scikit-optimize
bsd-3-clause
Note that this can take a few minutes.
from skopt.plots import plot_convergence plot_convergence(("dummy_minimize", dummy_res), ("gp_minimize", gp_res), ("forest_minimize('rf')", rf_res), ("forest_minimize('et')", et_res), true_minimum=0.397887, yscale="log")
examples/strategy-comparison.ipynb
glouppe/scikit-optimize
bsd-3-clause
Labor Force Status The notion of women making up to 23 fewer cents on the dollar than men has been challenged numerous times. Many, including Resident Fellow at the Harvard Institute of Politics Karen Agness, claim that this statistic is manipulated and misrepresented by popular media and the government. The extent of systemic...
fig, ax = plt.subplots() fig.set_size_inches(11.7, 8.27) ax.set_title('Figure 2. Income Per Week From Main Job', weight='bold', fontsize = 17) sns.set_style("whitegrid") sns.violinplot(x='Sex',y='Main Job Income/Wk', data = atus) plt.xlabel('Sex',weight='bold',fontsize=13) plt.ylabel('Main Job Income/Wk ($)',weight='bo...
UG_S16/Jerry_Allen_Gender_Pay_Gap.ipynb
NYUDataBootcamp/Projects
mit
Differences in Main Stream of Income Figure 2 clearly illustrates that men earn more income than women. There is a sizable share of women earning less than 500/week, while very few make more than 1500/week. The men's income, on the other hand, is more evenly distributed, as opposed to being as bottom-heavy a...
fig, ax = plt.subplots() fig.set_size_inches(11.7, 8.27) ax.set_title('Figure 3. Hours Worked Per Week', weight='bold',fontsize = 17) sns.set_style('whitegrid') sns.boxplot(x='Sex', y='Hours Worked/Wk', data= atus) plt.xlabel('Sex',weight='bold',fontsize=13) plt.ylabel('Hours Worked/Wk',weight='bold', fontsize=13)
UG_S16/Jerry_Allen_Gender_Pay_Gap.ipynb
NYUDataBootcamp/Projects
mit
Differences in Hours Worked One obvious factor to investigate is the number of hours worked for both men and women. This will surely have an impact on the earnings for each sex. Figure 3 shows that males work considerably more hours than females. A clear indicator of this is the upper quartile for women being 40 hours/...
fig, ax = plt.subplots() fig.set_size_inches(11.7, 8.27) ax.set(xlim=(0, 1400)) ax.set_title('Figure 4. Mins/Day Providing Secondary Child Care (<13y/o)', weight='bold', fontsize = 17) sns.violinplot(data= atus, x='Secondary Child Care (mins)', y='Sex') plt.xlabel('Secondary Child Care (Mins/Day)',weight='bold',fontsiz...
UG_S16/Jerry_Allen_Gender_Pay_Gap.ipynb
NYUDataBootcamp/Projects
mit
The Differences in the Time Spent Providing Child Care Secondary child care refers to time spent looking after children while doing something else as a primary activity. In short, it is keeping a watchful eye over children without providing one's full and undivided attention. Harvard Economics Professor Clau...
fig, ax = plt.subplots() fig.set_size_inches(11.27, 5.5) ax.set(ylim=(0, 1400)) ax.set_title("Figure 5. Mins/Day Providing Elderly Care", weight='bold',fontsize = 17) sns.set_style("whitegrid") sns.swarmplot(x='Sex', y='Elderly Care (mins)', data= atus) plt.xlabel('Sex',weight='bold',fontsize=13) plt.ylabel('Elderly Ca...
UG_S16/Jerry_Allen_Gender_Pay_Gap.ipynb
NYUDataBootcamp/Projects
mit
Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import zipfile dataset_folder_path = 'data' dataset_filename = 'text8.zip' dataset_name = 'Text8 Dataset' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): s...
embeddings/.ipynb_checkpoints/Skip-Grams-Solution-checkpoint.ipynb
zhuanxuhit/deep-learning
mit
Preprocessing Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all wo...
words = utils.preprocess(text) print(words[:30]) print("Total words: {}".format(len(words))) print("Unique words: {}".format(len(set(words))))
embeddings/.ipynb_checkpoints/Skip-Grams-Solution-checkpoint.ipynb
zhuanxuhit/deep-learning
mit
And here I'm creating dictionaries to convert words to integers and back again, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0, the next most frequent is 1, and so on. The words are converted to integers and stored in the list int_wor...
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words) int_words = [vocab_to_int[word] for word in words]
embeddings/.ipynb_checkpoints/Skip-Grams-Solution-checkpoint.ipynb
zhuanxuhit/deep-learning
mit
Subsampling Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ i...
from collections import Counter import random threshold = 1e-5 word_counts = Counter(int_words) total_count = len(int_words) freqs = {word: count/total_count for word, count in word_counts.items()} p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts} train_words = [word for word in int_words if ...
embeddings/.ipynb_checkpoints/Skip-Grams-Solution-checkpoint.ipynb
zhuanxuhit/deep-learning
mit
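The truncated subsampling cell above can be completed as follows. A self-contained sketch with a toy corpus of word ids standing in for int_words; each word $w_i$ is kept with probability $\sqrt{t/f(w_i)}$:

```python
import random
from collections import Counter
import numpy as np

int_words = [0, 0, 0, 0, 1, 1, 2] * 1000  # toy corpus of word ids
threshold = 1e-5
word_counts = Counter(int_words)
total_count = len(int_words)
freqs = {word: count / total_count for word, count in word_counts.items()}
# Mikolov's subsampling: drop word w with probability 1 - sqrt(threshold / freq(w))
p_drop = {word: 1 - np.sqrt(threshold / freqs[word]) for word in word_counts}
random.seed(0)
train_words = [w for w in int_words if random.random() < (1 - p_drop[w])]
print(len(train_words), '<=', len(int_words))
```

Frequent words end up with a high drop probability, so the kept corpus is much smaller and less dominated by filler words.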
Making batches Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. From Mikolov et al.: "Since the more distant words are usually l...
def get_target(words, idx, window_size=5): ''' Get a list of words in a window around an index. ''' R = np.random.randint(1, window_size+1) start = idx - R if (idx - R) > 0 else 0 stop = idx + R target_words = set(words[start:idx] + words[idx+1:stop+1]) return list(target_words)
embeddings/.ipynb_checkpoints/Skip-Grams-Solution-checkpoint.ipynb
zhuanxuhit/deep-learning
mit
Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per inp...
def get_batches(words, batch_size, window_size=5): ''' Create a generator of word batches as a tuple (inputs, targets) ''' n_batches = len(words)//batch_size # only full batches words = words[:n_batches*batch_size] for idx in range(0, len(words), batch_size): x, y = [], [] ...
embeddings/.ipynb_checkpoints/Skip-Grams-Solution-checkpoint.ipynb
zhuanxuhit/deep-learning
mit
Building the graph From Chris McCormick's blog, we can see the general structure of our network. The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal. The idea here is to train...
train_graph = tf.Graph() with train_graph.as_default(): inputs = tf.placeholder(tf.int32, [None], name='inputs') labels = tf.placeholder(tf.int32, [None, None], name='labels')
embeddings/.ipynb_checkpoints/Skip-Grams-Solution-checkpoint.ipynb
zhuanxuhit/deep-learning
mit
Embedding The embedding matrix has a size of the number of words by the number of neurons in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using one-hot encoded vectors for our inputs. When you do the matrix multiplication of the ...
n_vocab = len(int_to_vocab) n_embedding = 200 # Number of embedding features with train_graph.as_default(): embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1)) embed = tf.nn.embedding_lookup(embedding, inputs)
embeddings/.ipynb_checkpoints/Skip-Grams-Solution-checkpoint.ipynb
zhuanxuhit/deep-learning
mit
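The claim above, that multiplying a one-hot vector by the embedding matrix simply selects one row, can be checked with a small NumPy sketch (toy sizes; the notebook uses 10,000 words and 200 hidden units):

```python
import numpy as np

rng = np.random.default_rng(0)
embedding = rng.uniform(-1, 1, size=(10, 4))  # 10 words, 4 hidden units

word_id = 3
one_hot = np.zeros(10)
one_hot[word_id] = 1

# the matrix product picks out exactly row word_id of the embedding matrix
assert np.allclose(one_hot @ embedding, embedding[word_id])
```

This is why `tf.nn.embedding_lookup` can replace the full matrix multiplication: it fetches the row directly.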
Negative sampling For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from th...
# Number of negative labels to sample n_sampled = 100 with train_graph.as_default(): softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1)) softmax_b = tf.Variable(tf.zeros(n_vocab)) # Calculate the loss using negative sampling loss = tf.nn.sampled_softmax_loss(softmax_w, ...
embeddings/.ipynb_checkpoints/Skip-Grams-Solution-checkpoint.ipynb
zhuanxuhit/deep-learning
mit
Validation This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and a few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
with train_graph.as_default(): ## From Thushan Ganegedara's implementation valid_size = 16 # Random set of words to evaluate similarity on. valid_window = 100 # pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent valid_examples = np.array(random.sample(range(vali...
embeddings/.ipynb_checkpoints/Skip-Grams-Solution-checkpoint.ipynb
zhuanxuhit/deep-learning
mit
Restore the trained network if you need to:
with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) embed_mat = sess.run(embedding)
embeddings/.ipynb_checkpoints/Skip-Grams-Solution-checkpoint.ipynb
zhuanxuhit/deep-learning
mit
Visualizing the word vectors Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensi...
%matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt from sklearn.manifold import TSNE viz_words = 500 tsne = TSNE() embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :]) fig, ax = plt.subplots(figsize=(14, 14)) for idx in range(viz_words): plt.scatter(*embed_tsne...
embeddings/.ipynb_checkpoints/Skip-Grams-Solution-checkpoint.ipynb
zhuanxuhit/deep-learning
mit
2. Indexing and slicing At the interactive prompt, define a list named L that contains four strings or numbers (e.g., L=[0,1,2,3] ). Then, experiment with the following boundary cases. You may never see these cases in real programs (especially not in the bizarre ways they appear here!), but they are intended to make yo...
X = 'spam' Y = 'eggs' X, Y = Y, X
home/python/learningPython5thED/Learning python 5th ed..ipynb
frainfreeze/studying
mit
5. Dictionary keys. You’ve learned that dictionaries aren’t accessed by offsets, so what’s going on here?
D = {} D[1] = 'a' D[2] = 'b' D
home/python/learningPython5thED/Learning python 5th ed..ipynb
frainfreeze/studying
mit
Does the following shed any light on the subject? (Hint: strings, integers, and tuples share which type category?)
D[(1, 2, 3)] = 'c' D
home/python/learningPython5thED/Learning python 5th ed..ipynb
frainfreeze/studying
mit
6. Dictionary indexing. Create a dictionary named D with three entries, for keys 'a' , 'b' , and 'c' . What happens if you try to index a nonexistent key ( D['d'] )? What does Python do if you try to assign to a nonexistent key 'd' (e.g., D['d']='spam' )? How does this compare to out-of-bounds assignments and reference...
!ls
home/python/learningPython5thED/Learning python 5th ed..ipynb
frainfreeze/studying
mit
Test your knowledge: Part III exercises 1. Coding basic loops Write a for loop that prints the ASCII code of each character in a string named S. Use the built-in function ord(character) to convert each character to an ASCII integer. This function technically returns a Unicode code point in Python 3.X, but if you restr...
for i in range(5): print('hello %d\n\a' % i, end="")
home/python/learningPython5thED/Learning python 5th ed..ipynb
frainfreeze/studying
mit
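A minimal answer to exercise 1 above, printing the code point of each character in a string named S:

```python
S = 'spam'
for c in S:
    # ord returns the Unicode code point (ASCII for these characters)
    print(ord(c))
```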
3. Sorting dictionaries. In Chapter 8, we saw that dictionaries are unordered collections. Write a for loop that prints a dictionary’s items in sorted (ascending) order. (Hint: use the dictionary keys and list sort methods, or the newer sorted built-in function.) 4. Program logic alternatives. Consider the following co...
def f1(a, b): print(a, b) # Normal args def f2(a, *b): print(a, b) # Positional varargs def f3(a, **b): print(a, b) # Keyword varargs def f4(a, *b, **c): print(a, b, c) # Mixed modes def f5(a, b=2, c=3): print(a, b, c) # Defaults def f6(a, b=2, *c): print(a, b, c) # Defaults and positional varargs
home/python/learningPython5thED/Learning python 5th ed..ipynb
frainfreeze/studying
mit
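A minimal answer to exercise 3 above, printing a dictionary's items in sorted (ascending) key order with the sorted built-in:

```python
D = {'b': 2, 'a': 1, 'c': 3}
for key in sorted(D):          # sorted(D) returns the keys in ascending order
    print(key, '=>', D[key])
```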
Test the following calls interactively, and try to explain each result; in some cases, you’ll probably need to fall back on the matching algorithm shown in Chapter 18. Do you think mixing matching modes is a good idea in general? Can you think of cases where it would be useful?
f1(1, 2) f1(b=2, a=1) f2(1, 2, 3) f3(1, x=2, y=3) f4(1, 2, 3, x=2, y=3) f5(1) f5(1, 4) f6(1) f6(1, 3, 4)
home/python/learningPython5thED/Learning python 5th ed..ipynb
frainfreeze/studying
mit
After each question, we check on a small example that it works as expected. Exercise 1 This first exercise addresses the problem of a non-recursive graph traversal. Q1
def adjacence(N): # create an empty matrix mat = [ [ 0 for j in range(N) ] for i in range(N) ] for i in range(0,N-1): mat[i][i+1] = 1 return mat mat = adjacence(7) mat
_doc/notebooks/exams/td_note_2015.ipynb
sdpython/ensae_teaching_cs
mit
Q2 We must add 5 arcs at random, while avoiding adding the same one twice.
import random def ajoute_points(mat,nb=5): ajout = { } while len(ajout) < nb : i,j = random.randint(0,len(mat)-1),random.randint(0,len(mat)-1) if i < j and (i,j) not in ajout: mat[i][j] = 1 ajout[i,j] = 1 ajoute_points(mat) mat
_doc/notebooks/exams/td_note_2015.ipynb
sdpython/ensae_teaching_cs
mit
Q3
def successeurs(adj,i): ligne = adj[i] # in the following expression, # s is the matrix value (0 or 1) # and i the index return [ i for i,s in enumerate(ligne) if s == 1 ] successeurs(mat, 1)
_doc/notebooks/exams/td_note_2015.ipynb
sdpython/ensae_teaching_cs
mit
Q4
def successeurs_dico(adj): return { i:successeurs(adj, i) for i in range(len(adj)) } dico = successeurs_dico(mat) dico
_doc/notebooks/exams/td_note_2015.ipynb
sdpython/ensae_teaching_cs
mit
Q5
def suites_chemin(chemin, dico): dernier = chemin[-1] res = [ ] for s in dico[dernier]: res.append ( chemin + [ s ] ) return res suites_chemin( [ 0, 1 ], dico)
_doc/notebooks/exams/td_note_2015.ipynb
sdpython/ensae_teaching_cs
mit
Q6
def parcours(adj): dico = successeurs_dico(adj) chemins = [ [ 0 ]] resultat = [ ] while len(chemins) > 0 : chemins2 = [] for chemin in chemins : res = suites_chemin(chemin, dico) if len(res) == 0: # chemin is a path that cannot be extended ...
_doc/notebooks/exams/td_note_2015.ipynb
sdpython/ensae_teaching_cs
mit
Q7 The difference between a depth-first and a breadth-first traversal is that the former prefers to explore the direct successor first, then that successor's own successor, rather than the neighbors of the direct successor. In the first case, we very quickly reach a completed path. In the second case, we obtain the paths...
def adjacence8(N): # create an empty matrix mat = [ [ 0 for j in range(N) ] for i in range(N) ] for i in range(0,N-1): for j in range(i+1,N): mat[i][j] = 1 return mat adj = adjacence8(7) adj che = parcours(adj) print("nombre",len(che)) che
_doc/notebooks/exams/td_note_2015.ipynb
sdpython/ensae_teaching_cs
mit
We run a small loop to get an intuition for the result:
for i in range(5,11): adj = adjacence8(i) che = parcours(adj) print(i, "-->",len(che))
_doc/notebooks/exams/td_note_2015.ipynb
sdpython/ensae_teaching_cs
mit
This looks a lot like powers of two, which suggests a proof by induction. Each node $i$ is connected to all the following nodes $i+1$, $i+2$... Note that every path ends at the last node $n$. When node $n+1$ is added to the graph, it becomes the successor of all the others. To...
l = [ -1, 4, 6, 4, 1, 9, 5 ] l.sort() l[:3]
_doc/notebooks/exams/td_note_2015.ipynb
sdpython/ensae_teaching_cs
mit
The problem we want to solve is simpler, since we only need to keep the three smallest elements. We do not need to sort the end of the list. The idea is to walk through the array and keep only the three smallest elements seen so far. If an element is larger than the third element, we don't ...
def garde_3_element(tab): meilleur = [ ] for t in tab: if len(meilleur) < 3 : meilleur.append(t) meilleur.sort() elif t < meilleur[2] : meilleur[2] = t meilleur.sort() return meilleur garde_3_element(l)
_doc/notebooks/exams/td_note_2015.ipynb
sdpython/ensae_teaching_cs
mit
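For reference, the standard library gives the same result via heapq.nsmallest, which also avoids sorting the whole list (cost O(n log k) for k kept elements):

```python
import heapq

l = [-1, 4, 6, 4, 1, 9, 5]
# keep the 3 smallest elements without sorting the whole list
print(heapq.nsmallest(3, l))  # [-1, 1, 4]
```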
Even though we use a sort, the cost is $O(n)$ because the sort operates on at most three elements. Exercise 3 Q1
def word2dict(mot): return { i: mot[:i] for i in range(len(mot)+1) } word2dict("mot"), word2dict("python")
_doc/notebooks/exams/td_note_2015.ipynb
sdpython/ensae_teaching_cs
mit
Q2
def two_words2dict(d1,d2): return { (i,j): (d1[i],d2[j]) for i in d1 for j in d2 } mot1 = "python" mot2 = "piton" d1 = word2dict(mot1) d2 = word2dict(mot2) vertices = two_words2dict(d1,d2) vertices
_doc/notebooks/exams/td_note_2015.ipynb
sdpython/ensae_teaching_cs
mit
Q3 There are as many elements as $(len(mot1)+1)*(len(mot2)+1)$, since we do a double loop over all positions, plus 1 for position 0. Hence $(p+1)(q+1)$ if $p$ and $q$ are the lengths of the two words.
len(vertices),(len(mot1)+1)*(len(mot2)+1)
_doc/notebooks/exams/td_note_2015.ipynb
sdpython/ensae_teaching_cs
mit
Q4
def add_edge_hv(vertices): edges = { } for edge1 in vertices: i1,j1 = edge1 for edge2 in vertices: i2,j2 = edge2 if (i2-i1==1 and j1==j2) or (j2-j1==1 and i1==i2) : edges[ edge1,edge2 ] = 1 return edges edges = add_edge_hv(vertices) edges
_doc/notebooks/exams/td_note_2015.ipynb
sdpython/ensae_teaching_cs
mit
Q5 For each node we add two arcs, except for the nodes corresponding to the ends of the words. Hence $2(p+1)(q+1)-(p+1)-(q+1)=2pq+p+q$.
len(edges), 2*len(mot1)*len(mot2)+len(mot1)+len(mot2)
_doc/notebooks/exams/td_note_2015.ipynb
sdpython/ensae_teaching_cs
mit
Q6 We take inspiration from the previous function. It would be more efficient to merge them.
def cout(m1, m2):
    c1 = m1[-1]
    c2 = m2[-1]
    if c1 == c2:
        return 0
    else:
        return 1

def ajoute_diagonale(edges, vertices):
    # edges = { }  # do not add this line: it would erase everything edges already contains
    for edge1 in vertices:
        i1, j1 = edge1
        for e...
_doc/notebooks/exams/td_note_2015.ipynb
sdpython/ensae_teaching_cs
mit
Q7 The shortest-path algorithm.
def loop_on_edges(distance, edges):
    for edge, cout in edges.items():
        v1, v2 = edge
        if v1 in distance and (v2 not in distance or distance[v2] > distance[v1] + cout):
            distance[v2] = distance[v1] + cout
_doc/notebooks/exams/td_note_2015.ipynb
sdpython/ensae_teaching_cs
mit
Q8 The question was probably a bit ill-posed, because it is much easier for the function loop_on_edges itself to know whether the dictionary distance was modified or not. We change it so that it returns the number of updates.
def loop_on_edges(distance, edges):
    misejour = 0
    for edge, cout in edges.items():
        v1, v2 = edge
        if v1 in distance and (v2 not in distance or distance[v2] > distance[v1] + cout):
            distance[v2] = distance[v1] + cout
            misejour += 1
    return misejour
_doc/notebooks/exams/td_note_2015.ipynb
sdpython/ensae_teaching_cs
mit
Then the final algorithm:
def plus_court_chemin(edges):
    distance = {(0, 0): 0}
    m = 1
    while m > 0:
        m = loop_on_edges(distance, edges)
    return distance

resultat = plus_court_chemin(edges)
resultat
_doc/notebooks/exams/td_note_2015.ipynb
sdpython/ensae_teaching_cs
mit
Q9 Since everything was computed with these two words, we just pick the right value in the distance table:
print(mot1, mot2)
resultat[len(mot1), len(mot2)]
_doc/notebooks/exams/td_note_2015.ipynb
sdpython/ensae_teaching_cs
mit
Exercise 4 We have an array of integers l = [1, 8, 5, 7, 3, 6, 9]. We want to place the even integers first and the odd integers last: 8, 6, 1, 5, 7, 3, 9. Write a function that does this. The cost of a sort is $O(n \ln n)$. We first build the pair (parity, element) for each element, then we sor...
l = [1, 8, 5, 7, 3, 6, 9]
l2 = [(i % 2, i) for i in l]
l2.sort()
res = [b for a, b in l2]
res
_doc/notebooks/exams/td_note_2015.ipynb
sdpython/ensae_teaching_cs
mit
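The sort-on-parity trick above can also be written with sorted and a key function — a standard-library variant that is not in the original notebook. Since Python's sort is stable, each parity group keeps its original relative order:

```python
l = [1, 8, 5, 7, 3, 6, 9]
# key maps evens to 0 and odds to 1; the stable sort keeps
# the original order within each group
res = sorted(l, key=lambda x: x % 2)
print(res)  # [8, 6, 1, 5, 7, 3, 9]
```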
In this specific case, we do not want to sort on the numbers themselves but on their parity. In a way, we do not care in what order two even numbers end up. This reduces the number of operations to perform. One idea is to walk through the array from both ends and swap two numbers as soon as...
def trie_parite(l):
    i = 0
    j = len(l) - 1
    while i < j:
        while i < j and l[i] % 2 == 0:
            i += 1
        while i < j and l[j] % 2 == 1:
            j -= 1
        if i < j:
            ech = l[i]
            l[i] = l[j]
            l[j] = ech
            i += 1
            j -= 1

l = l.copy()
trie_parite(l)
l
_doc/notebooks/exams/td_note_2015.ipynb
sdpython/ensae_teaching_cs
mit
Github https://github.com/jbwhit/OSCON-2015/commit/6750b962606db27f69162b802b5de4f84ac916d5 A few Python Basics
# Create a [list]
days = ['Monday',   # multiple lines
        'Tuesday',  # acceptable
        'Wednesday',
        'Thursday',
        'Friday',
        'Saturday',
        'Sunday',
        ]  # trailing comma is fine!

days

# Simple for-loop
for day in days:
    print(day)

# Double for-loop
for day in days:
    fo...
notebooks/07-Some_basics.ipynb
jbwhit/WSP-312-Tips-and-Tricks
mit
List Comprehensions
length_of_days = [len(day) for day in days]
length_of_days

letters = [letter for day in days for letter in day]
print(letters)

[num for num in range(10) if num % 2]

# A filter clause cannot carry an else; this line is a SyntaxError:
# [num for num in range(10) if num % 2 else "doesn't work"]

[...
notebooks/07-Some_basics.ipynb
jbwhit/WSP-312-Tips-and-Tricks
mit
Dictionaries Python dictionaries are awesome. They are hash tables and have a lot of neat CS properties. Learn and use them well.
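As a quick, generic illustration (not from the original notebook): because a dict hashes its keys, lookups and membership tests stay fast regardless of size:

```python
# Build a small dict with a comprehension
squares = {n: n * n for n in range(10)}

squares[3]                  # hashes the key 3; average O(1) lookup
7 in squares                # membership test also goes through the hash
squares.get(99, "missing")  # .get avoids a KeyError for absent keys
```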
from IPython.display import IFrame, HTML
HTML('<iframe src=https://en.wikipedia.org/wiki/Hash_table width=100% height=550></iframe>')

fellows = ["Jonathan", "Alice", "Bob"]
universities = ["UCSD", "UCSD", "Vanderbilt"]
for x, y in zip(fellows, universities):
    print(x, y)

# Don't do this
{x: y for x, y in zip(fell...
notebooks/07-Some_basics.ipynb
jbwhit/WSP-312-Tips-and-Tricks
mit
Beampy Positioning system Beampy has a positioning system that allows automatic, fixed, or relative positioning. The default behaviour is set by the theme used in the presentation. The default theme sets the coordinates to: x='center', which means that the element is centered in the horizontal direction; x element anc...
from beampy import *
from beampy.utils import bounding_box, draw_axes

doc = document(quiet=True)

with slide():
    draw_axes(show_ticks=True)
    t1 = text('This is the default theme behaviour')
    t2 = text('x are centered and y equally spaced')
    for t in [t1, t2]:
        t.add_border()

display_matplotlib(gcs...
doc-src/auto_tutorials/positioning_system.ipynb
hchauvet/beampy
gpl-3.0
Automatic positioning Beampy has some simple automatic positioning modes: 'centering' a Beampy module with center, and equally spaced distribution of Beampy modules that have auto as coordinates. Centering +++++++++
with slide():
    draw_axes()
    rectangle(x='center', y='center', width=400, height=200,
              color='lightgreen', edgecolor=None)
    text('x and y are centered for the text and the rectangle modules',
         x='center', y='center', width=350)

display_matplotlib(gcs())
doc-src/auto_tutorials/positioning_system.ipynb
hchauvet/beampy
gpl-3.0
Auto ++++ Equally spaced vertically ~~~~~~~~~~~~~~~~~~~~~~~~~
with slide():
    draw_axes()
    for c in ['gold', 'crimson', 'orangered']:
        rectangle(x='center', y='auto', width=100, height=100,
                  color=c, edgecolor=None)

display_matplotlib(gcs())
doc-src/auto_tutorials/positioning_system.ipynb
hchauvet/beampy
gpl-3.0
Equally spaced horizontally ~~~~~~~~~~~~~~~~~~~~~~~~~~~
with slide():
    draw_axes()
    for c in ['gold', 'crimson', 'orangered']:
        rectangle(x='auto', y='center', width=100, height=100,
                  color=c, edgecolor=None)

display_matplotlib(gcs())
doc-src/auto_tutorials/positioning_system.ipynb
hchauvet/beampy
gpl-3.0
Equally spaced in xy directions ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
with slide():
    draw_axes()
    for c in ['gold', 'crimson', 'orangered']:
        rectangle(x='auto', y='auto', width=100, height=100,
                  color=c, edgecolor=None)

display_matplotlib(gcs())
doc-src/auto_tutorials/positioning_system.ipynb
hchauvet/beampy
gpl-3.0
Absolute positioning units +++++ Absolute coordinates can be given as follows: (int or float) <= 1.0: the position is a fraction of the slide or group width for x and y (by default, but this can be changed). (int or float) > 1.0: the position is in pixels. Given as a string, the position is in pixels or in the unit giv...
with slide():
    draw_axes()
    text('x and y relative to width', x=0.5, y=0.5)
    text('x and y relative to width, with aspect ratio for y',
         x=0.5, y=0.5*(3/4.), width=300)
    text('x and y given in pixels', x=100, y=100)
    text('x and y given in centimetres', x='2cm', y='5cm')

display_matplotlib(gcs()...
doc-src/auto_tutorials/positioning_system.ipynb
hchauvet/beampy
gpl-3.0
Anchors +++++++ We can also change the anchor of a Beampy module using the center, right, and bottom functions in the coordinate.
with slide():
    draw_axes()
    t1 = text('Top-left absolute positioning $$x=x^2$$', x=400, y=100)
    t2 = text('Top-right absolute positioning $$x=x^2$$', x=right(400), y=200)
    t3 = text('Middle-middle absolute positioning $$x=x^2$$', x=center(400), y=center(300))
    t4 = text('Bottom-right absolute positio...
doc-src/auto_tutorials/positioning_system.ipynb
hchauvet/beampy
gpl-3.0
Relative positioning When a Beampy module has been placed on a slide, we can position another element relative to it. To do so, Beampy modules have methods that refer to their anchors (module.left, module.right, module.top, module.bottom, module.center).
with slide():
    draw_axes()
    texts_width = 200
    r = rectangle(x='center', y='center', width=100, height=100,
                  color='crimson', edgecolor=None)
    t1 = text('Centered 10 px below the rectangle',
              x=r.center+center(0), y=r.bottom+10,
              width=texts_width, align='center')
    t2 = te...
doc-src/auto_tutorials/positioning_system.ipynb
hchauvet/beampy
gpl-3.0
Another way to do relative positioning is to use a string coordinate with '+' or '-' before the shift and the unit. This places the new Beampy module relative to the previous one.
with slide():
    draw_axes()
    text('text x=20, y=0.5cm', x='20', y='0.5cm')
    for i in range(2):
        text('text x=-0, y=+0.5cm', x='-0', y='+0.5cm')

    text('text x=25, y=0.3', x='25', y=0.3)
    for i in range(2):
        text('text x=+0, y=+0.5cm', x='+0', y='+0.5cm')

    text('text x=25, y=0.5', x='25'...
doc-src/auto_tutorials/positioning_system.ipynb
hchauvet/beampy
gpl-3.0
Coordinate as dictionary Coordinates can also be given as a dictionary. The dictionary keys are the following: unit: ('px', 'pt', 'cm', 'width', 'height'), the unit of the shift value. shift: float value, the amount of shifting. reference: ('slide' or 'relative'); 'relative' is used to make relative positioning. anch...
with slide():
    draw_axes()
    t = text('centered text',
             x={'anchor': 'middle', 'shift': 0.5},
             y={'anchor': 'middle', 'shift': 0.5, 'unit': 'height'})
    bounding_box(t)

    t = text('bottom right shift',
             x={'anchor': 'right', 'shift': 30, 'align': 'right'},
             y={'anchor'...
doc-src/auto_tutorials/positioning_system.ipynb
hchauvet/beampy
gpl-3.0
Step 2: Define a function that prints the information in JSON format
# Assumed imports for this cell (clear_output comes from IPython,
# getQuotes from the googlefinance package):
import os
import json
from IPython.display import clear_output
from googlefinance import getQuotes

def buscar_accion(nombre_accion):
    clear_output()
    os.system('cls' if os.name == 'nt' else 'clear')
    print(json.dumps(getQuotes(nombre_accion), indent=2))
Leer Precio Acciones Python 3.ipynb
Ric01/Uso-Google-Finance-Python3
gpl-3.0
Step 3: Look up information about the Google stock (GOOG)
buscar_accion("AAPL")
Leer Precio Acciones Python 3.ipynb
Ric01/Uso-Google-Finance-Python3
gpl-3.0
We can make this a little bit more explicit. In the line k = make_statement('name: '), make_statement() has returned the inner function key and the inner function has been given the name k. Now, when we call k() the inner function returns the desired tuple. The reason this works is that in addition to the environment...
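The make_statement example discussed here is not shown in this excerpt; a minimal reconstruction consistent with the text (the exact body in the lecture may differ) could be:

```python
def make_statement(s):
    # s lives in the enclosing environment that key's closure captures
    def key(k):
        return (s, k)
    return key

k = make_statement('name: ')  # k is the inner function key
print(k('Albert'))  # ('name: ', 'Albert')
```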
# First we write our timer function
import time

def timer(f):
    def inner(*args):
        t0 = time.time()
        output = f(*args)
        elapsed = time.time() - t0
        print("Time Elapsed", elapsed)
        return output
    return inner

# Now we prepare to use our timer function
import numpy as np  # Import...
lectures/L6/L6.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
That seemed pretty useful. We might want to do such things a lot (and not just for timing purposes). Let's recap the pattern that was so useful. Basically, we wrote a nice function to "decorate" our function of interest. In this case, we wrote a timer function whose closure wrapped up any function we gave to it in a...
@timer
def allocate1(x, N):
    return [x]*N

x = 2.0
allocate1(x, 10000000)
lectures/L6/L6.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
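For reference, the @timer line is just syntactic sugar for rebinding the name by hand — a self-contained sketch (timer is redefined here so the snippet runs on its own):

```python
import time

def timer(f):
    # Same wrapping pattern as above: measure elapsed time, pass the result through
    def inner(*args):
        t0 = time.time()
        output = f(*args)
        print("Time Elapsed", time.time() - t0)
        return output
    return inner

def allocate2(x, N):
    return [x] * N

# Equivalent to writing @timer above the def of allocate2:
allocate2 = timer(allocate2)
result = allocate2(2.0, 1000)
```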
Example 2 We'll just create a demo decorator here.
def decorate(f):
    print("Let's decorate!")
    d = 1.0
    def wrapper(*args):
        print("Entering function.")
        output = f(*args)
        print("Exited function.")
        if output > d:
            print("My d is bigger than yours.")
        elif output < d:
            print("Your d is bigger than mine...
lectures/L6/L6.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
Vertex SDK: AutoML training image classification model for batch prediction <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_classification_batch.ipynb"> <img src="https://cloud.google.com...
import os

# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""

! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Install the latest GA version of google-cloud-storage library as well.
! pip3 install -U google-cloud-storage $USER_FLAG

# os.getenv avoids a KeyError when IS_TESTING is unset (matching the cell below)
if os.getenv("IS_TESTING"):
    ! pip3 install --upgrade tensorflow $USER_FLAG
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Restart the kernel Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
import os

if not os.getenv("IS_TESTING"):
    # Automatically restart kernel after installs
    import IPython

    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0