Now this is much closer to the performance of the `RandomForestRegressor` (but not quite there yet). Let's check the best hyperparameters found:
rnd_search.best_params_
Apache-2.0
02_end_to_end_machine_learning_project.ipynb
Ruqyai/handson-ml2
This time the search found a good set of hyperparameters for the RBF kernel. Randomized search tends to find better hyperparameters than grid search in the same amount of time. Let's look at the exponential distribution we used, with `scale=1.0`. Note that some samples are much larger or smaller than 1.0, but when you look at the log of the distribution, you can see that most values are concentrated within a fairly narrow range of scales:
expon_distrib = expon(scale=1.)
samples = expon_distrib.rvs(10000, random_state=42)
plt.figure(figsize=(10, 4))
plt.subplot(121)
plt.title("Exponential distribution (scale=1.0)")
plt.hist(samples, bins=50)
plt.subplot(122)
plt.title("Log of this distribution")
plt.hist(np.log(samples), bins=50)
plt.show()
The distribution we used for `C` looks quite different: the scale of the samples is picked from a uniform distribution within a given range, which is why the right graph, which represents the log of the samples, looks roughly constant. This distribution is useful when you don't have a clue of what the target scale is:
reciprocal_distrib = reciprocal(20, 200000)
samples = reciprocal_distrib.rvs(10000, random_state=42)
plt.figure(figsize=(10, 4))
plt.subplot(121)
plt.title("Reciprocal distribution")
plt.hist(samples, bins=50)
plt.subplot(122)
plt.title("Log of this distribution")
plt.hist(np.log(samples), bins=50)
plt.show()
The reciprocal distribution is useful when you have no idea what the scale of the hyperparameter should be (indeed, as you can see on the figure on the right, all scales are equally likely, within the given range), whereas the exponential distribution is best when you know (more or less) what the scale of the hyperparameter should be.
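The contrast between the two distributions can be sketched with plain standard-library sampling. This is a minimal sketch using inverse-CDF sampling; the function names are ours, not scipy's:

```python
import math
import random

def sample_reciprocal(a, b, rng):
    # Inverse-CDF sampling: log(X) is uniform on [log a, log b],
    # so every scale within [a, b] is equally likely.
    return a * (b / a) ** rng.random()

def sample_expon(scale, rng):
    # Inverse-CDF sampling for the exponential distribution.
    return -scale * math.log(1.0 - rng.random())

rng = random.Random(42)
recip = [sample_reciprocal(20, 200000, rng) for _ in range(10000)]
expo = [sample_expon(1.0, rng) for _ in range(10000)]

# Reciprocal samples always stay inside [a, b]; exponential samples
# concentrate around the chosen scale.
print(min(recip) >= 20 and max(recip) <= 200000)
print(sum(expo) / len(expo))  # close to scale=1.0
```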
from sklearn.base import BaseEstimator, TransformerMixin

def indices_of_top_k(arr, k):
    return np.sort(np.argpartition(np.array(arr), -k)[-k:])

class TopFeatureSelector(BaseEstimator, TransformerMixin):
    def __init__(self, feature_importances, k):
        self.feature_importances = feature_importances
        self.k = k
    def fit(self, X, y=None):
        self.feature_indices_ = indices_of_top_k(self.feature_importances, self.k)
        return self
    def transform(self, X):
        return X[:, self.feature_indices_]
Note: this feature selector assumes that you have already computed the feature importances somehow (for example using a `RandomForestRegressor`). You may be tempted to compute them directly in the `TopFeatureSelector`'s `fit()` method, however this would likely slow down grid/randomized search, since the feature importances would be recomputed for every hyperparameter combination tried. It is cheaper to compute them once beforehand and pass them in.
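To illustrate the point, here is a hypothetical, dependency-free `TopKSelector` (our own toy class, not the one above) whose `fit()` only ranks the cached importances, so calling it once per CV fold stays cheap:

```python
# Hypothetical minimal selector illustrating why importances are
# precomputed once and passed in, rather than refit inside fit().
class TopKSelector:
    def __init__(self, importances, k):
        self.importances = importances  # computed once, outside the search
        self.k = k
        self.fit_calls = 0

    def fit(self, X, y=None):
        # Cheap: just rank the cached importances; no model retraining.
        self.fit_calls += 1
        order = sorted(range(len(self.importances)),
                       key=self.importances.__getitem__)
        self.indices_ = sorted(order[-self.k:])
        return self

    def transform(self, X):
        return [[row[i] for i in self.indices_] for row in X]

sel = TopKSelector([0.1, 0.4, 0.05, 0.3, 0.15], k=2)
X = [[0, 1, 2, 3, 4]]
for _ in range(5):          # e.g. 5 CV folds during a search
    sel.fit(X)
print(sel.indices_)          # [1, 3]
print(sel.transform(X))      # [[1, 3]]
```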
k = 5
Now let's look for the indices of the top k features:
top_k_feature_indices = indices_of_top_k(feature_importances, k)
top_k_feature_indices
np.array(attributes)[top_k_feature_indices]
Let's double check that these are indeed the top k features:
sorted(zip(feature_importances, attributes), reverse=True)[:k]
Looking good... Now let's create a new pipeline that runs the previously defined preparation pipeline, and adds top k feature selection:
preparation_and_feature_selection_pipeline = Pipeline([
    ('preparation', full_pipeline),
    ('feature_selection', TopFeatureSelector(feature_importances, k))
])

housing_prepared_top_k_features = preparation_and_feature_selection_pipeline.fit_transform(housing)
Let's look at the features of the first 3 instances:
housing_prepared_top_k_features[0:3]
Now let's double check that these are indeed the top k features:
housing_prepared[0:3, top_k_feature_indices]
Works great! :)

4. Question: Try creating a single pipeline that does the full data preparation plus the final prediction.
prepare_select_and_predict_pipeline = Pipeline([
    ('preparation', full_pipeline),
    ('feature_selection', TopFeatureSelector(feature_importances, k)),
    ('svm_reg', SVR(**rnd_search.best_params_))
])

prepare_select_and_predict_pipeline.fit(housing, housing_labels)
Let's try the full pipeline on a few instances:
some_data = housing.iloc[:4]
some_labels = housing_labels.iloc[:4]

print("Predictions:\t", prepare_select_and_predict_pipeline.predict(some_data))
print("Labels:\t\t", list(some_labels))
Well, the full pipeline seems to work fine. Of course, the predictions are not fantastic: they would be better if we used the best `RandomForestRegressor` that we found earlier, rather than the best `SVR`.

5. Question: Automatically explore some preparation options using `GridSearchCV`.
param_grid = [{
    'preparation__num__imputer__strategy': ['mean', 'median', 'most_frequent'],
    'feature_selection__k': list(range(1, len(feature_importances) + 1))
}]

grid_search_prep = GridSearchCV(prepare_select_and_predict_pipeline, param_grid, cv=5,
                                scoring='neg_mean_squared_error')
grid_search_prep.fit(housing, housing_labels)
Extracting embeddings with ALBERT

With Hugging Face transformers, we can use the ALBERT model just like we used BERT. Let's explore this with a small example. Suppose we need to get the contextual word embedding of every word in the sentence "Paris is a beautiful city". Let's see how to do that with ALBERT. Import the necessary modules:
!pip install transformers==3.5.1

from transformers import AlbertTokenizer, AlbertModel
MIT
Chapter04/.ipynb_checkpoints/4.03. Extracting embeddings with ALBERT-checkpoint.ipynb
shizukanaskytree/Getting-Started-with-Google-BERT
Download and load the pre-trained ALBERT model and tokenizer. In this tutorial, we use the ALBERT-base model:
model = AlbertModel.from_pretrained('albert-base-v2')
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
Now, feed the sentence to the tokenizer and get the preprocessed input:
sentence = "Paris is a beautiful city"
inputs = tokenizer(sentence, return_tensors="pt")
Let's print the inputs:
print(inputs)
{'input_ids': tensor([[ 2, 1162, 25, 21, 1632, 136, 3]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1]])}
Now we just feed the inputs to the model and get the result. The model returns the hidden_rep which contains the hidden state representation of all the tokens from the final encoder layer and cls_head which contains the hidden state representation of the [CLS] token from the final encoder layer:
hidden_rep, cls_head = model(**inputs)
#hide
from nbdev.showdoc import *

#export
import numpy

class Matrix():
    """Class generates a zero matrix"""
    def __init__(self, n_matrix:int, m_matrix:int):
        self.n = n_matrix
        self.m = m_matrix

    def make_matrix(self):
        return numpy.zeros(self.n * self.m).reshape(self.n, self.m)
Apache-2.0
00_core.ipynb
VladislavYak/test_repo
Matrix
a = Matrix(2, 5)
a.make_matrix()
1. Hashing task!

* We've put all passwords from passwords1.txt and passwords2.txt into two lists: listPasswords1 and listPasswords2
listPasswords1 = []
listPasswords2 = []

filePassword1 = open("passwords1.txt", 'r')
lines_p1 = filePassword1.readlines()
for line in lines_p1:
    listPasswords1.append(line.strip())

filePassword2 = open("passwords2.txt", 'r')
lines_p2 = filePassword2.readlines()
for line in lines_p2:
    listPasswords2.append(line.strip())
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
We have set all the variables that we need to build our Bloom Filter:
* n = number of items in the filter
* p = probability of false positives, a fraction between 0 and 1
* m = number of bits in the filter (size of the Bloom Filter bit-array)
* k = number of hash functions

We made the following consideration: n is the number of passwords in passwords1.txt, since all of them are inserted into the filter.
import math

n = len(listPasswords1)
p = 0.01
m = math.ceil((n * math.log(p)) / math.log(1 / pow(2, math.log(2))))
k = round((m / n) * math.log(2))
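The expression `math.log(1 / pow(2, math.log(2)))` simplifies to -(ln 2)², so this is the standard sizing formula m = -n·ln(p)/(ln 2)². A small check with a hypothetical n (the real n is the length of listPasswords1):

```python
import math

def bloom_parameters(n, p):
    # Optimal Bloom-filter sizing: m = -n*ln(p)/(ln 2)^2 bits,
    # k = (m/n)*ln(2) hash functions.
    m = math.ceil(-n * math.log(p) / (math.log(2) ** 2))
    k = round((m / n) * math.log(2))
    return m, k

m_demo, k_demo = bloom_parameters(100_000_000, 0.01)  # hypothetical n
print(m_demo / 100_000_000)  # about 9.59 bits per item
print(k_demo)                # 7 hash functions for p = 0.01, regardless of n
```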
This following function is our hash function. We have written our fnv function from scratch, based on the FNV hash function. FNV hashes are designed to be fast while maintaining a low collision rate. The FNV speed allows one to quickly hash lots of data while maintaining a reasonable collision rate. To build the bloom filter, we call this hash function with k different seeds, so that it behaves like k independent hash functions.
def fnv1_64(password, seed=0):
    """
    Returns: The FNV-1 hash of a given string.
    """
    # Constants
    FNV_prime = 1099511628211
    offset_basis = 14695981039346656037

    # FNV-1 hash function: multiply by the prime, then XOR each character
    hash = offset_basis + seed
    for char in password:
        hash = hash * FNV_prime
        hash = hash ^ ord(char)
    return hash
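Python integers are unbounded, so the hash above grows without limit; a variant masked to 64 bits behaves like the reference FNV-1 and can be checked against a known property (the empty string hashes to the offset basis). The masked helper below is our own sketch, not the homework code:

```python
FNV_PRIME = 1099511628211
OFFSET_BASIS = 14695981039346656037
MASK64 = (1 << 64) - 1

def fnv1_64_masked(password, seed=0):
    # FNV-1: multiply by the prime first, then XOR in each character,
    # keeping only the low 64 bits at every step.
    h = (OFFSET_BASIS + seed) & MASK64
    for char in password:
        h = (h * FNV_PRIME) & MASK64
        h ^= ord(char)
    return h

print(fnv1_64_masked("") == OFFSET_BASIS)  # True by definition of FNV-1
# Different seeds give different, deterministic hash functions.
print(fnv1_64_masked("hunter2", 0) != fnv1_64_masked("hunter2", 1))
```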
This following class 'BloomFilter' represents our Bloom Filter. It has three attributes:
* sizeArray = dimension of the bit-array
* number_HashFucntion = number of hash functions
* array_BloomFilter = bit-array of the bloom filter

It also has two methods:
* init: it takes, as parameters, our bloom filter, k (number of hash functions) and m (size of the bit-array), and initializes the attributes
* add: it takes our bloom filter and a list of passwords, and sets the corresponding bits for every password
class BloomFilter:
    sizeArray = 0
    number_HashFucntion = 0
    array_BloomFilter = []

    @property
    def size(self):
        return self.sizeArray

    @property
    def numHash(self):
        return self.number_HashFucntion

    @property
    def arrayBloom(self):
        return self.array_BloomFilter

    def init(self, k, m):
        self.number_HashFucntion = k
        self.sizeArray = m
        self.array_BloomFilter = [0] * m

    def add(self, listPasswords):
        for psw in listPasswords:
            for seed in range(self.number_HashFucntion):
                index = fnv1_64(psw, seed) % self.sizeArray
                self.array_BloomFilter[index] = 1
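The same idea in a compact, self-contained sketch (Python's built-in `hash` with a seed stands in for the seeded FNV function, purely for illustration):

```python
class TinyBloom:
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = [0] * m            # the bit-array

    def _positions(self, item):
        # k seeded hash positions for one item.
        return [hash((seed, item)) % self.m for seed in range(self.k)]

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item):
        # Membership = all k bits set; false positives possible,
        # false negatives impossible.
        return all(self.bits[pos] for pos in self._positions(item))

bf = TinyBloom(m=1000, k=4)
bf.add("qwerty")
print(bf.might_contain("qwerty"))  # True: no false negatives
```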
Then we made a function 'checkPassw' that checks how many passwords in listPasswords2 (i.e. in passwords2.txt) are in our Bloom Filter. It takes our bloom filter and the list of passwords from passwords2.txt, and it returns how many passwords are in the bloom filter. A password is in our bloom filter if and only if all of its k hash positions are set to 1 in the bit-array.
def checkPassw(BloomFilter, listPasswords2):
    countCheck = 0
    for psw in listPasswords2:
        count = 0
        for seed in range(BloomFilter.number_HashFucntion):
            index = fnv1_64(psw, seed) % BloomFilter.sizeArray
            if BloomFilter.array_BloomFilter[index] == 1:
                count += 1
        if count == BloomFilter.number_HashFucntion:
            countCheck += 1
    return countCheck
Bonus section: we calculate the number of false positives.

This following function 'falsePositives' calculates the exact number of false positives. It takes the following parameters:
* BloomFilter = our bloom filter
* listPasswords1 = passwords from passwords1.txt
* listPasswords2 = passwords from passwords2.txt

A false positive is a password that the bloom filter reports as present even though it is not in passwords1.txt.
def falsePositives(BloomFilter, listPasswords1, listPasswords2):
    s = set(listPasswords1)
    countFalsePositives = 0
    for psw in listPasswords2:
        count = 0
        for seed in range(BloomFilter.number_HashFucntion):
            index = fnv1_64(psw, seed) % BloomFilter.sizeArray
            if BloomFilter.array_BloomFilter[index] == 1:
                count += 1
        if count == BloomFilter.number_HashFucntion and psw not in s:
            countFalsePositives += 1
    return countFalsePositives
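The measured count can be compared with the classic approximation of the false-positive rate, (1 - e^(-kn/m))^k. The numbers below are hypothetical (100M inserted items, 25M non-member lookups), chosen only to show that the estimate lands near p times the number of non-member lookups:

```python
import math

def expected_false_positives(n, m, k, queries):
    # Classic approximation: after inserting n items into m bits with
    # k hashes, a non-member passes all k bit checks with this chance.
    p_hit = (1 - math.exp(-k * n / m)) ** k
    return p_hit * queries

n = 100_000_000                                        # hypothetical
m = math.ceil(-n * math.log(0.01) / math.log(2) ** 2)  # sized for p = 0.01
est = expected_false_positives(n, m, 7, 25_000_000)
print(round(est))  # roughly 0.01 * 25M = 250k false positives
```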
Then we wrote the main function, as described in the homework track. This function performs the following steps:
* init the bit-array, its size and the number of hash functions of our bloom filter, with 'BloomFilter.init(BloomFilter, k, m)'
* add the passwords from listPasswords1 to our bloom filter, with 'BloomFilter.add(BloomFilter, listPasswords1)'
* check how many passwords from listPasswords2 are (probably) in the filter, and print the statistics together with the execution time
import time

def BloomFilterFunc(listPasswords1, listPasswords2):
    start = time.time()
    # init our bloom filter
    BloomFilter.init(BloomFilter, k, m)
    # add all passwords from listPasswords1 to our bloom filter
    BloomFilter.add(BloomFilter, listPasswords1)
    # check and save into 'countPassw' the number of occurrences
    countPassw = checkPassw(BloomFilter, listPasswords2)
    end = time.time()
    print('Number of hash function used:', k)
    print('Number of duplicates detected:', countPassw)
    print('Probability of false positives:', p)
    print('Execution time:', end - start)
* Execute main function
BloomFilterFunc(listPasswords1, listPasswords2)
Number of hash function used: 7
Number of duplicates detected: 14251447
Probability of false positives: 0.01
Execution time: 4041.6586632728577
* Execute bonus section
falsPositive = falsePositives(BloomFilter, listPasswords1, listPasswords2)
print('Number of false positive: ', falsPositive)
Number of false positive: 251447
2. Alphabetical Sort

Given a set of words, a common natural task is sorting them in alphabetical order. It is something you have surely done at least once in your life, using your own algorithm without maybe knowing it. In order for everyone to be on the same page, we will refer to the rules defined here.
import string
import numpy as np

lower = list(string.ascii_lowercase)
upper = string.ascii_uppercase
lower
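As a reference for checking the hand-written sorts below, Python's built-in `sorted()` with a casefolded key produces the dictionary order described above:

```python
# Casefolding makes the comparison case-insensitive, so 'Another'
# sorts between 'amount' and 'is' rather than before all lowercase words.
words = ['words', 'amount', 'Another', 'is', 'whether']
print(sorted(words, key=str.casefold))
# ['amount', 'Another', 'is', 'whether', 'words']
```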
Build your own implementation of Counting Sort... Here is the counting sort algorithm, with a bit of innovation and fewer assignments than the original one explained in the website attached to the homework.
def s_counting(A):
    m = max(A)
    sorted_A = []
    d = [0] * (m + 1)
    for i in range(len(A)):
        d[A[i]] += 1
    for x, y in enumerate(d):
        sorted_A += [x] * y
    return sorted_A

A = [0, 3, 2, 3, 3, 0, 5, 2, 3]
s_counting(A)
Build an algorithm, based on your implementation of Counting Sort, that receives in input a list with all the letters of the alphabet (not in alphabetical order), and returns the list ordered according to alphabetical order. We continue with the same approach as in the first part, keeping the counting sort defined above.
def sort_letters(B):
    B = list(''.join(B).lower())
    d = []
    sorted_letters = []
    for i in B:
        d.append(lower.index(i))
    c = s_counting(d)
    for j in c:
        sorted_letters += lower[j]
    return sorted_letters

B = ['p', 'w', 'x', 'k', 'p', 'a', 'c', 'a', 'b', 'a...
Build an algorithm, based on your implementation of Counting Sort, that receives in input a list of length m, containing words with maximum length equal to n, and returns the list ordered according to alphabetical order. Here is the first algorithm; it tries to follow the counting sort order and is our first approach.
Lower = lower.copy()
Lower.insert(0, '')

C = ['words', 'amount', 'efficiently', 'thumb', 'rule', 'solvable', 'Another', 'open', 'problem', 'is', 'whether']
C = [a.lower() for a in C]
m = max([len(e) for e in C])
d = [[] for _ in C]
for i in range(len(C)):
    w2l = list(C[i])
    l2num = [Lower.index(a) for ...
['amount', 'another', 'efficiently', 'is', 'open', 'problem', 'rule', 'solvable', 'thumb', 'words', 'whether']
Here this algorithm has some limitations due to big numbers. We tried to fix the problem by normalizing the numbers between 0 and 1000; however, this can only distinguish words which agree in their first 3 to 5 letters, and beyond that it becomes really time-consuming to calculate and normally ends up with errors.
x = ['words', 'amount', 'are', 'efficiently', 'thumb', 'rule', 'solvable', 'Another', 'open', 'problem', 'is', 'whether']
#x = ['asd', 'bedf', 'mog', 'zor', 'bze']
#x = input('Enter the words, separated by comma(,)').split(',')
x_2 = x.copy()

# Finding the longest word
max_length = 0
for i in x:
    if max_length...
['Another', 'amount', 'are', 'efficiently', 'is', 'open', 'problem', 'rule', 'solvable', 'thumb', 'whether', 'words']
Final version with best performance and without any limitation. Here we rewrite the counting sort algorithm again, and we also keep the order. 'order' holds the original list indices: if we use the order list as an index we can sort the original list, since it shows how the elements move from their original positions to the final array.
#sorting
def sort_counting(A):
    m = max(A)
    sorted_A = []
    d = [0] * (m + 1)
    for i in range(len(A)):
        d[A[i]] += 1
    cum_d = [0] * (m + 1)
    for i in d:
        cum_d.append(i)
    order = list()
    for x, y in enumerate(d):
        sorted_A += [x] * y
    for i in r...
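The 'order' idea amounts to a stable counting argsort: instead of emitting sorted keys, emit the source indices, which is exactly what a per-column (radix-style) pass over words needs. A minimal sketch of that argsort (our own helper, not the homework code):

```python
def counting_argsort(keys):
    # Bucket indices by key; reading buckets in key order yields a
    # stable argsort in O(n + max(keys)) time.
    m = max(keys)
    buckets = [[] for _ in range(m + 1)]
    for idx, key in enumerate(keys):
        buckets[key].append(idx)       # stable: preserves input order
    return [idx for bucket in buckets for idx in bucket]

keys = [3, 1, 2, 1]
order = counting_argsort(keys)
print(order)                     # [1, 3, 2, 0]
print([keys[i] for i in order])  # [1, 1, 2, 3]
```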
Now we use the counting sort algorithm and its order to make this work. We have used the pure counting sort algorithm, and the running time stays in $O(n)$. The total time is $T(n) = kn + c$, which has the same big-O and the same principle as counting sort.
Lower = lower.copy()
Lower.insert(0, '')

import numpy as np

C = ['Alessio', 'Alessandro', 'Angela', 'Alessand', 'Anita', 'Anna', 'Alessandrx', 'Arianna', 'Alessandra']
C = [a.lower() for a in C]
m = max([len(e) for e in C])
d = [[] for _ in C]
final_sort = C.copy()
for i in range(len(C)):
    w2l = list(C[i])
    l...
3. Find similar wines!

Imports
import random
import pandas as pd
import numpy as np
from collections import defaultdict
import matplotlib
import matplotlib.pyplot as plt
Functions created for implementing the kMeans.

Create the Euclidean distance:
def distance_2(vec1, vec2):
    if len(vec1) == len(vec2):
        if len(vec1) > 1:
            add = 0
            for i in range(len(vec1)):
                add = add + (vec1[i] - vec2[i])**2
            return add**(1/2)
        else...
We will now check the dissimilarity of our clusters. To do that we need to define the variability of every cluster, meaning the sum of the distances of every element in the cluster from the mean (centroid).
def dissimilarity(cluster):
    def kmeansreduce(centroid, dictionary):
        a = dictionary[centroid]
        if len(a) > 0:
            vector = a[0]
            for i in range(1, len(a)):
                vector = np.add(vector, a[i])
            ...
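On a concrete toy cluster, the variability described above (here taken as the sum of squared distances to the centroid) can be computed by hand; a pure-Python sketch with our own helper name, separate from the notebook's `dissimilarity`:

```python
def variability(cluster):
    # Centroid = coordinate-wise mean of the cluster's points.
    d = len(cluster[0])
    centroid = [sum(p[i] for p in cluster) / len(cluster) for i in range(d)]
    # Sum of squared distances of each point to the centroid.
    return sum(sum((p[i] - centroid[i]) ** 2 for i in range(d)) for p in cluster)

cluster = [(0.0, 0.0), (2.0, 0.0), (1.0, 3.0)]
# centroid = (1.0, 1.0); squared distances: 2 + 2 + 4
print(variability(cluster))  # 8.0
```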
Compute the sum of the squared distances between data points and all centroids (distance_2). Assign each data point to the closest cluster (clusters dictionary). Compute the centroids of the clusters by taking the average of all data points that belong to each cluster (initial centroids). We also define two helper functions to perform these steps.
def kmeans(data, k):
    def kmeansmap(information, num_centroids, centroids):
        clusters = defaultdict(list)
        for i in range(num_centroids):
            clusters[i] = []
        classes = defaultdict(list)
        for i in range(information.shape[0]):
            ...
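One assign-then-update iteration of the steps above, reduced to 1-D points and pure Python (a sketch with hypothetical data, not the notebook's `kmeans`):

```python
def kmeans_step(points, centroids):
    # Assignment step: each point goes to its nearest centroid.
    clusters = {i: [] for i in range(len(centroids))}
    for p in points:
        nearest = min(range(len(centroids)), key=lambda i: (p - centroids[i]) ** 2)
        clusters[nearest].append(p)
    # Update step: each centroid moves to the mean of its cluster
    # (empty clusters keep their old centroid).
    return [sum(c) / len(c) if c else centroids[i] for i, c in clusters.items()]

points = [1.0, 2.0, 9.0, 10.0]
centroids = kmeans_step(points, [0.0, 5.0])
print(centroids)  # [1.5, 9.5]
```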
Now we will apply the algorithms and functions to the data. To do that, we will first clean the data a bit.
url = r"C:\Users\HP\Documents\ADM\HW 4\wine.data"
header = ["Class", "Alcohol", "Malic acid", "Ash", "Alcalinity of ash", "Magnesium",
          "Total phenols", "Flavanoids", "Nonflavanoid phenols", "Proanthocyanins",
          "Color intensity", "Hue", "OD280/OD315 of diluted wines", "Proline"]
data = pd.read_table(url, deli...
We normalize the values of the DataFrame so we can measure distances. Some columns are not saved as floats, and normalizing them would raise an error, so we cast them to floats first and then normalize.
for col in data.columns[1:]:
    if data[col].dtype == 'int64':
        data[col] = data[col].astype("float64")

for col in data.columns[1:]:
    r = (max(data[col]) - min(data[col]))
    minimum = min(data[col])
    for i in range(len(data[col])):
        data[col][i] = (data[col][i] - minimum) / r
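The loop above applies min-max scaling, x' = (x - min) / (max - min); a tiny standalone version with hypothetical values:

```python
def min_max(values):
    # Scale values linearly so min maps to 0.0 and max maps to 1.0.
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

print(min_max([10.0, 15.0, 20.0]))  # [0.0, 0.5, 1.0]
```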
C:\Users\HP\Anaconda3\lib\site-packages\ipykernel_launcher.py:15: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy from ipykernel import ...
We will not use the variable Class, since this is the classification we target. So we save it in a variable called target and work with the other variables.
target = data["Class"]
data = data.drop(columns=["Class"])
data = data.to_numpy()
data
This way the elements of each row are to be taken as a vector
data[1,]
Now we will implement the kmeans algorithm with an unknown number of clusters. We will use the elbow method to figure out the best number of clusters for our data, running the method for up to k = 10 clusters.
elbow = {}
for k in range(1, 11):
    best = kmeans(data, k)
    for t in range(100):
        C = kmeans(data, k)
        if dissimilarity(C[0]) < dissimilarity(best[0]):
            best = C
    elbow[k] = dissimilarity(best[0])

plt.plot(list(elbow.keys()), list(elbow.values()))
From the previous plot we can figure out the best k. We will implement the kmeans algorithm for that specific k.
best = kmeans(data, 3)
for t in range(100):
    C = kmeans(data, 3)
    if dissimilarity(C[0]) < dissimilarity(best[0]):
        best = C

outcome = []
for i in range(data.shape[0]):
    outcome.append(best[1][i][0] + 1)
We did the following commands for all the columns, and we observed that two columns/features have a big effect on the clustering of the other features. Here we will show the distribution of the features, when plotted with Magnesium and Total Phenols
f, axes = plt.subplots(4, 3, figsize=(20, 20))
axes[0][0].scatter(data[:, 5], data[:, 1], c=outcome, cmap=matplotlib.colors.ListedColormap(["purple", "blue", "red"]))
axes[0][0].set_xlabel(header[1])
axes[0][1].scatter(data[:, 5], data[:, 2], c=outcome, cmap=matplotlib.colors.ListedColormap(["purple", "blue", "red"]))
a...
4. K-means can go wrong!

The clustering problem is non-linear over $S \in (\mathbb{R}^{d})^{K}$, where $d$ is the number of features in the dataset and $K$ is the number of clusters. The problem can be turned into a linear form with binary variables when $\hat{S} \subseteq S$ only consists of points in the dataset. This problem can be fo...
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import json
from sklearn.datasets import load_wine
from sklearn.cluster import KMeans
from pandas.io.json import json_normalize

wine = load_wine()
wine.target[[10, 80, 140]]
list(wine.target_names)
wine_df = pd.DataFrame(wine.data, columns=wine....
The real classes of the wines
plt.scatter(wine_df['alcohol'], wine_df['od280/od315_of_diluted_wines'], c=wine_df['target'])
plt.show()

kmean = KMeans(n_clusters=3, init='random').fit(wine_df[['alcohol', 'od280/od315_of_diluted_wines']])
plt.scatter(wine_df['alcohol'], wine_df['od280/od315_of_diluted_wines'], c=kmean.labels_)
plt.scatter...
We can see that, after choosing approximate initialization points, the clustering converges faster.
kmean = KMeans(n_clusters=3, init=np.array([[12, 3], [13, 3], [13, 1.7]])).fit(wine_df[['alcohol', 'od280/od315_of_diluted_wines']])
plt.scatter(wine_df['alcohol'], wine_df['od280/od315_of_diluted_wines'], c=kmean.labels_)
plt.scatter(kmean.cluster_centers_[:, 0], kmean.cluster_centers_[:, 1], marker='X', c=...
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\cluster\k_means_.py:971: RuntimeWarning: Explicit initial center position passed: performing only one init in k-means instead of n_init=10 return_n_iter=True)
We can easily show how random initialization affects the number of iterations and the cost.
for k in range(12):
    kmean = KMeans(n_clusters=3, init='random').fit(wine_df[['alcohol', 'od280/od315_of_diluted_wines']])
    print('# of iretations: {}'.format(kmean.n_iter_), 'And inertia: {}'.format(kmean.inertia_))
# of iretations: 7 And inertia: 58.32594553894382
# of iretations: 10 And inertia: 58.32594553894382
# of iretations: 9 And inertia: 58.32594553894382
# of iretations: 10 And inertia: 58.32594553894382
# of iretations: 8 And inertia: 58.32594553894382
# of iretations: 8 And inertia: 58.32594553894382
# of iretations: 8...
1. Install Dependencies

First install the libraries needed to execute recipes. This only needs to be done once, then click play.
!pip install git+https://github.com/google/starthinker
Apache-2.0
colabs/smartsheet_report_to_bigquery.ipynb
quan/starthinker
2. Get Cloud Project ID

Running this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md). This only needs to be done once, then click play.
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'

print("Cloud Project Set To: %s" % CLOUD_PROJECT)
3. Get Client Credentials

Reading and writing to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md). This only needs to be done once, then click play.
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'

print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
4. Enter SmartSheet Report To BigQuery Parameters

Move report data into a BigQuery table.
1. Specify SmartSheet Report token.
1. Locate the ID of a report by viewing its properties.
1. Provide a BigQuery dataset (must exist) and table to write the data into.
1. StarThinker will automatically map the correct schema.

Modify the values below for your use case, then click play.
FIELDS = {
  'auth_read': 'user',  # Credentials used for reading data.
  'auth_write': 'service',  # Credentials used for writing data.
  'token': '',  # Retrieve from SmartSheet account settings.
  'report': '',  # Retrieve from report properties.
  'dataset': '',  # Existing BigQuery dataset.
  'table': '',  # Table...
5. Execute SmartSheet Report To BigQuery

This does NOT need to be modified unless you are changing the recipe, click play.
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields

USER_CREDENTIALS = '/content/user.json'

TASKS = [
  { 'smartsheet': {
      'auth': 'user',
      'report': {'field': {'kind': 'string', 'name': 'report', 'order': 3, 'description': 'Retrieve from report properties.'}},
      ...
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import plotly.graph_objs as go
import os
import warnings

plt.style.use('ggplot')

# Mount Google Drive
from google.colab import drive
drive.mount('/content/drive')

weather_station_location = pd.read_csv("./drive/MyDrive/study/weathe...
MIT
Day1_practice2.ipynb
andreYoo/Time-series-analysis-anomaly-detection
Note: This notebook assumes that you are familiar with NumPy & Pandas. No worries if you are not! Like music & MRI? You can learn NumPy and SciPy while making music from MRI sounds: https://www.loom.com/share/4b08c4df903c40b397e87b2ec9de572d GitHub repo: https://github.com/agahkarakuzu/sunrise If you are using Pl...
import plotly.express as px
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
In older versions of Plotly, at this stage, you had to call `init_notebook_mode()` and tell plotly that you would be using it offline. Good news:
* Now plotly can automatically detect which renderer to use!
* Plus, you don't have to write extra code to tell Plotly you will be working offline.

Plotly figures now have the ability to render themselves in whatever context they are displayed.
# Read iris data into the variable named iris
iris = px.data.iris()

# Display the last 5 rows of the dataframe
iris.tail()
Create scatter plots

As you see, the `iris` dataset has 6 columns, each having its own label. Now let's take a look at how `sepal_width` is correlated with `sepal_length`.
fig = px.scatter(iris, x="sepal_width", y="sepal_length")
fig.show()
Yes, that easy! 🎉 You can change the column indexes to observe other correlations, such as `petal_length` and `petal_width`. What if you were also able to color markers with respect to the `species` category? Well, all it takes is to pass another argument :)
fig = px.scatter(iris, x="sepal_width", y="sepal_length", color='species')
fig.show()
💬**Scatter plots are not enough! I want my histograms displayed on their respective axes.** 👏Plotly express got you covered.
fig = px.scatter(iris, x="sepal_width", y="sepal_length", color="species",
                 marginal_y="rug", marginal_x="histogram")
fig.show()
🙄Of course scatter plots need their best-fit line. And why not show box plots or violin plots instead of histograms and rug lines? 🚀
fig = px.scatter(iris, x="sepal_width", y="sepal_length", color="species",
                 marginal_y="violin", marginal_x="box", trendline="ols")
fig.show()
- What is better than a scatter plot? > A scatter plot matrix! 🤯 You can explore the cross-filtering ability of SPLOM charts in plotly. Hover your cursor over a point cloud in one of the panels, and select a portion of them by left click + dragging. Selected data points will be highlighted in the remaining sub-panels! Double-click to reset the selection.
fig = px.scatter_matrix(iris, dimensions=["sepal_width", "sepal_length", "petal_width", "petal_length"],
                        color="species")
fig.show()
Remember parallel sets? Let's create one. In [the presentation](https://zenodo.org/record/3841775.XsqgFJ5Kg1I), we saw that a parallel set can be useful for visualizing proportions when more than two grouping variables are present. In this example, we will be working with the `tips` dataset, which has five grouping variables.
tips = px.data.tips()
tips.tail()

# Hint: You can change colorscale. Type px.colors.sequential. then hit tab :)
fig = px.parallel_categories(tips, color="total_bill", dimensions=['sex', 'smoker', 'day', 'time', 'size'],
                             color_continuous_scale='viridis', template='plotly_dark')
fig.show()
Sunburst chart & Treemap

**Data:** A `pandas.DataFrame` with 1704 rows and the following columns: `['country', 'continent', 'year', 'lifeExp', 'pop', 'gdpPercap', 'iso_alpha', 'iso_num']`.
df = px.data.gapminder().query("year == 2007")
fig = px.sunburst(df, path=['continent', 'country'], values='pop',
                  color='lifeExp', hover_data=['iso_alpha'],
                  color_continuous_scale='viridis', template='plotly_white')
fig.show()
Polar coordinates

**Data**: Level of wind intensity in a cardinal direction, and its frequency.
- Scatter polar
- Line polar
- Bar polar
df = px.data.wind()
fig = px.scatter_polar(df, r="frequency", theta="direction", color="strength", symbol="strength",
                       color_discrete_sequence=px.colors.sequential.Plasma_r, template='plotly_dark')
fig.show()
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
Ternary plot**Data:** Results for an electoral district in the 2013 Montreal mayoral election.
df = px.data.election() fig = px.scatter_ternary(df, a="Joly", b="Coderre", c="Bergeron", color="winner", size="total", hover_name="district", size_max=15, color_discrete_map = {"Joly": "blue", "Bergeron": "green", "Coderre":"red"}, template="plotly_dark" ) fig.show()
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
See all available `px` charts, attributes and more Plotly Express gives you the liberty to change the visual attributes of plots as you like! There are many other charts available out of the box, and all can be plotted with a single line of code. Here is the [complete reference documentation](https://www.plotly.expr...
gapminder = px.data.gapminder() gapminder.tail() fig = px.scatter(gapminder, x="gdpPercap", y="lifeExp", animation_frame="year", animation_group="country", size="pop", color="continent", hover_name="country", facet_col="continent", log_x=True, size_max=45, range_x=[100,100000], range_y=[25,90]) fi...
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
👽I know you like dark themes.
# See the last argument (template) I passed to the function. To see other alternatives # visit https://plot.ly/python/templates/ fig = px.scatter(gapminder, x="gdpPercap", y="lifeExp", animation_frame="year", animation_group="country", size="pop", color="continent", hover_name="country", facet_col="contin...
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
Let's work with our own data. We will load raw MRI data (k-space), which is saved in the `ISMRMRD` format.
from ismrmrd import Dataset as read_ismrmrd from ismrmrd.xsd import CreateFromDocument as parse_ismrmd_header import numpy as np # Here, we are just loading a 3D data into a numpy matrix, so that we can use plotly with it! dset = read_ismrmrd('Kspace/sub-ismrm_ses-sunrise_acq-chord1.h5', 'dataset') header = parse_ismr...
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
A 100×100 matrix, 16 receive channels
raw.shape fig = px.imshow(raw.real,color_continuous_scale='viridis',facet_col=0,facet_col_wrap=4,template='plotly_dark') fig.update_layout(title='Channel Raw')
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
Simple image reconstruction
from scipy.fft import fft2, fftshift from scipy import ndimage im = np.zeros(raw.shape) # Let's apply some ellipsoid filter. raw = ndimage.fourier_ellipsoid(fftshift(raw),size=2) #raw = ndimage.fourier_ellipsoid(raw,size=2) for ch in range(nCoils): # Comment in and see what it gives im[ch,:,:] = abs(fftshift(...
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
SAVE HTML OUTPUT* This is the file under the `.docs` directory, from which a `GitHub page` is served:![](gh_pages.png)
fig.write_html('multichannel.html')
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
Binary Tree Level Order Traversal (easy) Given a binary tree, populate an array to represent its level-by-level traversal. You should populate the values of all nodes of each level from left to right in separate sub-arrays.
def get_depth(root): def helper(root, i): if not root.left and not root.right: return i r,l = 0,0 if root.left: l = helper(root.left, i+1) if root.right: r = helper(root.right, i+1) return max(l,r) return helper(root, 0) def traverse(r...
_____no_output_____
MIT
ed_breadth_first_search.ipynb
devkosal/code_challenges
Reverse level order traversal
def reverse_traverse(root): q = deque([root]) res = deque() while q: num_items = len(q) level_res = deque() for _ in range(num_items): r = q.popleft() level_res.append(r.val) if r.left: q.append(r.left) if r.right: ...
_____no_output_____
MIT
ed_breadth_first_search.ipynb
devkosal/code_challenges
zigzag order traversal
def zig_traverse(root): q = deque([root]) res = deque() while q: num_items = len(q) level_res = deque() for _ in range(num_items): r = q.popleft() level_res.append(r.val) if r.left: q.append(r.left) if r.right: ...
_____no_output_____
MIT
ed_breadth_first_search.ipynb
devkosal/code_challenges
Connect All Level Order Siblings (medium) -- Problem 1
def connect_level_order(tree): pass
_____no_output_____
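One hedged sketch for the stub above: run a plain BFS and link every node to the node traversed immediately after it, so the `next` pointers chain across level boundaries. The `TreeNode` class with a `next` field is an assumption here (the notebook's node class is not shown):

```python
from collections import deque

class TreeNode:
    # Hypothetical node class: the notebook's own definition is not shown.
    def __init__(self, val):
        self.val = val
        self.left = self.right = self.next = None

def connect_all_siblings(root):
    # Standard BFS; link each dequeued node to the node dequeued after it,
    # which connects siblings across level boundaries as well.
    q = deque([root])
    prev = None
    while q:
        node = q.popleft()
        if prev:
            prev.next = node
        prev = node
        if node.left:
            q.append(node.left)
        if node.right:
            q.append(node.right)
    return root
```

On the tree `1 -> (2, 3)`, following `next` from the root visits 1, 2, 3 in order.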
MIT
ed_breadth_first_search.ipynb
devkosal/code_challenges
Plotting final spectra with MC error-bar (without MCMC error-bar)
from pathlib import Path import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline import plotly.offline as py import plotly.express as px %load_ext autoreload %autoreload 2 %time from tail.analysis.container import Spectra, FSky, NullSpectra basedir = Path('/scratc...
_____no_output_____
BSD-3-Clause
examples/plot_spectra.ipynb
ickc/TAIL
Plotting code
fig = spectra.plot_spectra(subtract_leakage=True) if save: Path('media').mkdir(exist_ok=True) py.plot(fig, filename='media/spectra.html', include_plotlyjs='cdn', include_mathjax='cdn') fig fig.update_layout(xaxis_type="log", yaxis_type="log") if save: py.plot(fig, filename='media/spectra-log.html', include_...
_____no_output_____
BSD-3-Clause
examples/plot_spectra.ipynb
ickc/TAIL
Analytical Error-bar
err_analytic = spectra.err_analytic(f_sky) df_err = spectra.to_frame_4d(err_analytic) df_err.T
_____no_output_____
BSD-3-Clause
examples/plot_spectra.ipynb
ickc/TAIL
TF-IDF: Drawbacks of Bag of Words
# All words are given the same importance
# No semantic information is preserved
# The TF-IDF model addresses both of these problems
_____no_output_____
Unlicense
vk_NLP - TF IDF.ipynb
vitthalkcontact/NLP
Steps in TF-IDF
# 1. Lowercase the corpus or paragraph.
# 2. Tokenization.
# 3. TF: Term Frequency, IDF: Inverse Document Frequency, TF-IDF = TF * IDF.
# 4. TF = No. of occurrences of a word in a document / No. of words in that document.
# 5. IDF = log(No. of documents / No. of documents containing the word)
# 6. TFIDF(word) = TF(Docu...
(0, 26) 0.2611488808945384 (0, 5) 0.21677716168619507 (0, 38) 0.3236873066380182 (0, 34) 0.3236873066380182 (0, 51) 0.3236873066380182 (0, 41) 0.43355432337239014 (0, 18) 0.3236873066380182 (0, 23) 0.3236873066380182 (0, 29) 0.2611488808945384 (0, 9) 0.3236873066380182 (1, 21) 0.20243884765910772 ...
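The formulas in the steps above can be computed directly, without a library. A minimal sketch with a made-up three-document corpus (the documents and variable names are illustrative, not from the notebook):

```python
import math
from collections import Counter

docs = ["the cat sat", "the dog sat", "the cat ran"]
tokenized = [d.lower().split() for d in docs]
N = len(tokenized)

# IDF = log(No. of documents / No. of documents containing the word)
df = Counter(w for doc in tokenized for w in set(doc))
idf = {w: math.log(N / df[w]) for w in df}

# TF = occurrences of a word in a document / No. of words in that document
def tfidf(doc):
    counts = Counter(doc)
    return {w: (counts[w] / len(doc)) * idf[w] for w in counts}

scores = [tfidf(doc) for doc in tokenized]
```

Note that "the" appears in all three documents, so its IDF is log(3/3) = 0 and its TF-IDF score is 0 — exactly the down-weighting of uninformative words the steps describe. Library implementations such as scikit-learn's `TfidfVectorizer` use a smoothed variant, so exact numbers will differ from this sketch.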
Unlicense
vk_NLP - TF IDF.ipynb
vitthalkcontact/NLP
OpenStreetMap Data Case Study Problems Encountered in the Map. The main problems with the data are discussed in the following order: - Over-abbreviated street names (“S Tryon St Ste 105”) - Second-level “k” tags with the value "type" (which overwrites the element’s previously processed node[“type”] field). - Street names in secon...
# -*- coding: utf-8 -*- import pprint import xml.etree.ElementTree as ET from collections import defaultdict import re import os DATASET = "san-jose_california.osm" # osm filename PATH = "./" # directory contain the osm file OSMFILE = PATH + DATASET print('Dataset folder:', OSMFILE)
Dataset folder: ./san-jose_california.osm
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Iterative Parsing the OSM file.
# mapparser.py # iterative parsing from mapparser import count_tags, count_tags_total tags = count_tags(OSMFILE) print('Numbers of tag: ', len(tags)) print('Numbers of tag elements: ', count_tags_total(tags)) pprint.pprint(tags)
Numbers of tag: 8 Numbers of tag elements: 4599618 {'bounds': 1, 'member': 18333, 'nd': 1965111, 'node': 1679378, 'osm': 1, 'relation': 1759, 'tag': 705634, 'way': 229401}
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Categorize the tag keys. The tag keys are categorized as follows: - "lower", for tags that contain only lowercase letters and are valid, - "lower_colon", for otherwise valid tags with a colon in their names, - "problemchars", for tags with problematic characters, and - "other", for other tags that do not fall into the ot...
# tags.py from tags import key_type def process_map_tags(filename): keys = {"lower": 0, "lower_colon": 0, "problemchars": 0, "other": 0} for _, element in ET.iterparse(filename): keys = key_type(element, keys) return keys keys = process_map_tags(OSMFILE) pprint.pprint(keys)
{'lower': 459030, 'lower_colon': 224633, 'other': 21969, 'problemchars': 2}
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Number of Unique Users. As you can see, each user has their own unique ID. However, the IDs are unstructured, with varying lengths like 1, 1005885, 1030, 100744. I structured all the unique user IDs as follows: - 25663 => 0025663 - 951370 => 0951370
# users.py from users import unique_user_id, max_length_user_id, structure_user_id def test(): users = unique_user_id(OSMFILE) # structured = structure_user_id(users) # pprint.pprint(structured) max_length = max_length_user_id(users) print('Number of users: ', len(users)) print('User ID maximum...
Number of users: 1359 User ID maximum length 7 25663 => 0025663 951370 => 0951370 199089 => 0199089 637707 => 0637707 28145 => 0028145 941449 => 0941449 281267 => 0281267 41907 => 0041907 166129 => 0166129 173623 => 0173623
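The zero-padding shown in the output can be done with `str.zfill`. A minimal sketch, assuming the maximum ID length of 7 reported above (the function name mirrors the author's `structure_user_id` but this body is an illustration, not the notebook's implementation):

```python
def structure_user_id(uid, width=7):
    # Left-pad a numeric user ID with zeros to a fixed width.
    return str(uid).zfill(width)

print(structure_user_id(25663))   # 0025663
print(structure_user_id(951370))  # 0951370
```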
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Over-abbreviated Street Names. Some street names in the data are over-abbreviated. I updated all the problematic address strings as follows: - Seaboard Ave => Seaboard Avenue - Cherry Ave => Cherry Avenue
#audit.py from audit import audit, update_name, street_type_re, mapping def test(): st_types = audit(OSMFILE) # pprint.pprint(dict(st_types)) #print out dictonary of potentially incorrect street types print_limit = 10 for st_type, ways in st_types.items(): # .iteritems() for python2 for name in...
Hillsdale Ave => Hillsdale Avenue Meridian Ave => Meridian Avenue Walsh Ave => Walsh Avenue Seaboard Ave => Seaboard Avenue N Blaney Ave => N Blaney Avenue Saratoga Ave => Saratoga Avenue 1425 E Dunne Ave => 1425 E Dunne Avenue Blake Ave => Blake Avenue The Alameda Ave => The Alameda Avenue Hollenbeck Ave => Hollenbeck...
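`update_name` comes from the author's `audit.py`, which is not shown. The substitutions in the output can be reproduced with a last-token regex plus a mapping of abbreviations to full street types — a minimal sketch, with the (abbreviated) mapping as an assumption:

```python
import re

# Match the last whitespace-free token of a street name, e.g. "Ave" in "Seaboard Ave".
street_type_re = re.compile(r'\b\S+\.?$', re.IGNORECASE)

# Illustrative subset of the notebook's abbreviation mapping.
mapping = {"Ave": "Avenue", "St": "Street", "Rd": "Road", "Blvd": "Boulevard"}

def update_name(name, mapping):
    # Replace an over-abbreviated final street type with its full form.
    m = street_type_re.search(name)
    if m and m.group() in mapping:
        name = street_type_re.sub(mapping[m.group()], name)
    return name
```

Names whose final token is already a full street type, such as "Stevens Creek Boulevard", pass through unchanged.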
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Insert data into Mongodb
# data.py from data import process_map data = process_map(OSMFILE, True) data[0]
_____no_output_____
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Data Overview
from pymongo import MongoClient client = MongoClient('localhost:27017') db = client.SanJose collection = db.SanJoseMAP #collection.insert(data) collection print('Size of the original xml file: ',os.path.getsize(OSMFILE)/(1024*1024.0), 'MB') print('Size of the processed json file: ',os.path.getsize(os.path.join(PATH, "s...
Size of the original xml file: 348.08773612976074 MB Size of the processed json file: 512.8097190856934 MB Number of documents: 19761132 Number of nodes: 17470242 Number of ways: 2290674 Number of relations: 0 Number of unique users: 1356 Number of pizza places: 636
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Contributor statistics and gamification suggestion. The contributions of users seem incredibly skewed, possibly due to automated versus manual map editing (the word “bot” appears in some usernames). Here are some user percentage statistics: - Top user contribution percentage (“nmixter”) - 15.08% - Combined top 2 users' co...
# Top 10 users with most contributions pipeline = [{"$group":{"_id": "$created.user", "count": {"$sum": 1}}}, {"$sort": {"count": -1}}, {"$limit": 10}] result = collection.aggregate(pipeline) for r in range(10): print (result.next())
{'_id': 'nmixter', 'count': 2980568} {'_id': 'andygol', 'count': 2961664} {'_id': 'mk408', 'count': 1615791} {'_id': 'Bike Mapper', 'count': 969105} {'_id': 'samely', 'count': 813227} {'_id': 'RichRico', 'count': 768741} {'_id': 'dannykath', 'count': 752101} {'_id': 'MustangBuyer', 'count': 646129} {'_id': 'karitotp', ...
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Number of users appearing only once (having 1 post). There is only one user who appears exactly once, which means almost all users have contributed more than once.
# Number of users appearing only once (having 1 post) pipeline = [{"$group":{"_id":"$created.user", "count":{"$sum":1}}}, {"$group":{"_id":"$count", "num_users":{"$sum":1}}}, {"$sort":{"_id":1}}, {"$limit":1}] result = collection.aggregate(pipeline) for r in range(1): pr...
{'_id': 1, 'num_users': 1}
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Top 10 Biggest Religions. The results show that in the San Jose area, Christianity is the largest religion. The second largest group is unknown (None), which suggests those records are missing the religion field. The third largest religion is Jewish.
# Top 10 Biggest religion pipeline = [{"$match":{"amenity":{"$exists":1}, "amenity":"place_of_worship"}}, {"$group":{"_id":"$religion", "count":{"$sum":1}}}, {"$sort":{"count":-1}}, {"$limit":10}] result = collection.aggregate(pipeline) for r in range(10): print (result.next())
{'_id': 'christian', 'count': 1996} {'_id': None, 'count': 139} {'_id': 'jewish', 'count': 33} {'_id': 'buddhist', 'count': 26} {'_id': 'muslim', 'count': 18} {'_id': 'hindu', 'count': 14} {'_id': 'unitarian_universalist', 'count': 13} {'_id': 'sikh', 'count': 7} {'_id': 'caodaism', 'count': 7} {'_id': 'zoroastrian', '...
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Top 10 appearing amenities
# Top 10 appearing amenities pipeline = [{"$match":{"amenity":{"$exists":1}}}, {"$group":{"_id":"$amenity","count":{"$sum":1}}}, {"$sort":{"count":-1}}, {"$limit":10}] result = collection.aggregate(pipeline) for r in range(10): print (result.next())
{'_id': '', 'count': 3798126} {'_id': 'parking', 'count': 12302} {'_id': 'restaurant', 'count': 6573} {'_id': 'fast_food', 'count': 3406} {'_id': 'school', 'count': 3321} {'_id': 'place_of_worship', 'count': 2298} {'_id': 'bench', 'count': 1807} {'_id': 'cafe', 'count': 1753} {'_id': 'fuel', 'count': 1580} {'_id': 'bic...
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Top 10 popular cuisines
# Top 10 popular cuisines pipeline = [{"$match":{"amenity":{"$exists":1}, "amenity":"restaurant"}}, {"$group":{"_id":"$cuisine", "count":{"$sum":1}}}, {"$sort":{"count":-1}}, {"$limit":10}] result = collection.aggregate(pipeline) for r in range(10): print (result.next())
{'_id': None, 'count': 1296} {'_id': '', 'count': 572} {'_id': 'mexican', 'count': 570} {'_id': 'chinese', 'count': 504} {'_id': 'vietnamese', 'count': 459} {'_id': 'pizza', 'count': 390} {'_id': 'japanese', 'count': 293} {'_id': 'american', 'count': 283} {'_id': 'italian', 'count': 222} {'_id': 'indian', 'count': 214}...
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Sort postcodes by count, descending
# Sort postcodes by count, descending pipeline = [{"$match":{"address.postcode":{"$exists":1}}}, {"$group":{"_id":"$address.postcode", "count":{"$sum":1}}}, {"$sort":{"count":-1}}] result = collection.aggregate(pipeline) for r in range(10): print (result.next())
{'_id': '', 'count': 1893917} {'_id': '95014', 'count': 3503} {'_id': '95070', 'count': 2438} {'_id': '94087', 'count': 2205} {'_id': '94086', 'count': 2052} {'_id': '95051', 'count': 1772} {'_id': '95129', 'count': 1397} {'_id': '95127', 'count': 1130} {'_id': '95054', 'count': 1023} {'_id': '95035', 'count': 1018}
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Sort street by count, descending
# Sort street by count, descending pipeline = [{"$match":{"address.street":{"$exists":1}}}, {"$group":{"_id":"$address.street", "count":{"$sum":1}}}, {"$sort":{"count":-1}}] result = collection.aggregate(pipeline) for r in range(10): print (result.next())
{'_id': '', 'count': 1885122} {'_id': 'Stevens Creek Boulevard', 'count': 2898} {'_id': 'Hollenbeck Avenue', 'count': 1745} {'_id': 'South Stelling Road', 'count': 1300} {'_id': 'East Estates Drive', 'count': 1230} {'_id': 'Johnson Avenue', 'count': 1200} {'_id': 'Miller Avenue', 'count': 1170} {'_id': 'Bollinger Road'...
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Building a Variational Autoencoder in MXNet Xiaoyu Lu, July 5th, 2017. This tutorial guides you through the process of building a variational autoencoder in MXNet. In this notebook we'll focus on an example using the MNIST handwritten digit recognition dataset. Refer to [Auto-Encoding Variational Bayes](https://arxiv.org/a...
mnist = mx.test_utils.get_mnist() image = np.reshape(mnist['train_data'],(60000,28*28)) label = image image_test = np.reshape(mnist['test_data'],(10000,28*28)) label_test = image_test [N,features] = np.shape(image) #number of examples and features f, (ax1, ax2, ax3, ax4) = plt.subplots(1,4, sharex='col', shar...
_____no_output_____
Apache-2.0
example/vae/VAE_example.ipynb
dkuspawono/incubator-mxnet
We can optionally save the parameters in the directory variable 'model_prefix'. We first create data iterators for MXNet, with each batch of data containing 100 images.
model_prefix = None batch_size = 100 nd_iter = mx.io.NDArrayIter(data={'data':image},label={'loss_label':label}, batch_size = batch_size) nd_iter_test = mx.io.NDArrayIter(data={'data':image_test},label={'loss_label':label_test}, batch_size = batch_size)
_____no_output_____
Apache-2.0
example/vae/VAE_example.ipynb
dkuspawono/incubator-mxnet
2. Building the Network Architecture 2.1 Gaussian MLP as encoder. Next we construct the neural network. As in the [paper](https://arxiv.org/abs/1312.6114/), we use a *Multilayer Perceptron (MLP)* for both the encoder and the decoder. For the encoder, a Gaussian MLP is used as follows: \begin{align}\log q_{\phi}(z|x) &= \log \mathc...
## define data and loss labels as symbols data = mx.sym.var('data') loss_label = mx.sym.var('loss_label') ## define fully connected and activation layers for the encoder, where we used tanh activation function. encoder_h = mx.sym.FullyConnected(data=data, name="encoder_h",num_hidden=400) act_h = mx.sym.Activation(da...
_____no_output_____
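The sampling step implied by the Gaussian encoder above uses the reparameterization trick: instead of sampling z directly from N(μ, σ²I), draw ε ~ N(0, I) and compute z = μ + σ ⊙ ε, so gradients can flow through μ and σ. A minimal NumPy sketch of that step alone (shapes and values are illustrative, separate from the MXNet symbols above):

```python
import numpy as np

def reparameterize(mu, log_var, rng=np.random.default_rng(0)):
    # z = mu + sigma * eps, with eps ~ N(0, I) and sigma = exp(0.5 * log_var),
    # making z a deterministic, differentiable function of mu and log_var.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

mu = np.zeros((100, 5))       # batch of 100 samples, latent dimension 5
log_var = np.zeros((100, 5))  # log-variance 0, i.e. sigma = 1 everywhere
z = reparameterize(mu, log_var)
```

With a very small σ (large negative log-variance), z collapses onto μ, which is one quick sanity check on the formula.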
Apache-2.0
example/vae/VAE_example.ipynb
dkuspawono/incubator-mxnet