Again, the cabin doesn't guarantee that a passenger was safe. Cabin B has about three times the ratio, but so do other cabins, and Cabin C is not even a magnitude higher. So our analysis has some limitations. At the least, whether a passenger is in the upper class can be one factor in whether the passenger survived. Let's test that. Again I will build a frequency table of SES vs. survival, and perform a Chi-Square test of independence to confirm.
compute_freq_chi2(df.Survived,df.Pclass)
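The helper compute_freq_chi2 is defined earlier in the notebook and not shown in this excerpt. A minimal sketch of what it might look like, assuming pandas and scipy are available (the function name, printed format, and return value are assumptions, not the notebook's actual implementation):

```python
import pandas as pd
from scipy.stats import chi2_contingency

def compute_freq_chi2(x, y):
    """Hypothetical sketch: print the frequency table of x vs. y and
    run a chi-square test of independence on it."""
    freq_table = pd.crosstab(x, y)
    print(freq_table)
    chi2, p, dof, expected = chi2_contingency(freq_table)
    print("chi2 = %.2f, p-value = %.4g, dof = %d" % (chi2, p, dof))
    return chi2, p

# Toy demonstration with made-up survival/class data
survived = pd.Series([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])
pclass = pd.Series([3, 3, 1, 1, 2, 1, 3, 2, 1, 2])
chi2, p = compute_freq_chi2(survived, pclass)
```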
p2-introds/titanic/titanic.ipynb
napjon/ds-nd
mit
Since we already validated our dataset earlier (from the frequency table, we can see that each cell has at least 5 expected cases), we can directly perform the statistical test. Indeed, the test statistic is very high and the p-value is again practically zero. The data provide convincing evidence that economic status and survival are related. Lastly, we have the Embarked variable. Let's see the survival rate for each embarkation point.
df.groupby('Embarked').Survived.mean()
Okay, we see that (C)herbourg has a much higher survival rate than the other two! How is this possible? My first thought was that Cherbourg could be the first or last embarkation point, so those passengers would be placed around a corner of the ship. When googling the embarkation schedule, I found a link stating that the order was (S)outhampton - (C)herbourg - (Q)ueenstown: Cherbourg is actually the middle embarkation point, so my assumption is false. Nonetheless, let's see the frequency table and test statistic.
compute_freq_chi2(df.Survived, df.Embarked)
Again, the dataset has been validated for a statistical test. The other requirement is that the expected count in each cell is at least 5, which the frequency table confirms. Using the Chi-Square Independence test, our p-value is also practically zero. Thus the data also provide convincing evidence that embarkation point and survival are related. Let's see if embarkation point plays nicely with other predictors, age for example. I use the describe method to summarize age by embarkation point.
df.groupby('Embarked').Age.describe().unstack(level=0)
Looking at these statistics, I see that all of the embarkation points have roughly normal age distributions, since the median and mean are similar. Let's use the violin plot again to confirm this and compare the age distributions by embarkation point side by side.
p = sns.violinplot(data=df, x='Embarked', y='Age', order=["S", "C", "Q"])
p.set(title='Age Distribution by Embarkation Point',
      xlabel='Location',
      ylabel='Age Distribution',
      xticklabels=['Southampton', 'Cherbourg', 'Queenstown']);
From the violin plot I can see slightly different age distributions between the locations. Queenstown has the widest distribution of the three, Cherbourg is in the middle, and Southampton has the thinnest distribution and the highest peak. The wide distributions can also be explained by the standard deviations in the earlier statistics. We can see a small peak in Southampton's age distribution in the children's range. Children are supposed to be an important factor in survival, and Southampton has a higher peak in the children's age range than Cherbourg. There is nothing odd about Cherbourg's age distribution, as the relevant statistics and the violin plot show, which suggests that the oddness of Cherbourg can't be explained by age.

So again, based on observation of the data, many of the passengers who survived were: women or children, upper class, or from Cherbourg. One limitation of my analysis is determining what caused the difference in survival for the upper class; checking whether the cabin has something to do with it failed.

Note that what I really did is observation based on this data. Hence this is correlation, not causation. You only have causation when you have a controlled experiment: one would have to rebuild the Titanic's conditions, how it wrecked, and create a total near-death experience. I don't think any participants would want to take part in that experiment.

There are some other variables that I didn't analyze, such as whether the passengers had siblings or partners. These variables are biased in the dataset. If you read the description of the data at the top of this article, it states: *With respect to the family relation variables (i.e. sibsp and parch) some relations were ignored.* So this bias could lure us into thinking, based on observation of the data, that not having family relations somehow affects whether or not a passenger survived.
Because of this data limitation, these variables are dangerous and best left out of the analysis where survival is concerned. The other concern is how I handled missing values. I will build a frequency table to count whether or not the value is missing for each column.
df.apply(lambda x: x.isnull().value_counts())
Unzipping files with house sales data. The dataset is from house sales in King County, the region where the city of Seattle, WA is located.
# Put files in current directory into a list
files_list = [f for f in os.listdir('.') if os.path.isfile(f)]

# Filenames of unzipped files
unzip_files = ['kc_house_data.csv', 'wk3_kc_house_set_1_data.csv',
               'wk3_kc_house_set_2_data.csv', 'wk3_kc_house_set_3_data.csv',
               'wk3_kc_house_set_4_data.csv', 'wk3_kc_house_test_data.csv',
               'wk3_kc_house_train_data.csv', 'wk3_kc_house_valid_data.csv',
               'wk3_kc_house_train_valid_shuffled.csv']

# If unzipped file not in files_list, unzip the file
for filename in unzip_files:
    if filename not in files_list:
        zip_file = filename + '.zip'
        unzipping = zipfile.ZipFile(zip_file)
        unzipping.extractall()
        unzipping.close()
Week_4_Ridge_Regression/assign_1_ridge-regression.ipynb
Santana9937/Regression_ML_Specialization
mit
Polynomial regression, revisited. We build on the material from Week 3, where we wrote a function to produce a DataFrame with columns containing the powers of a given input. Copy and paste the function polynomial_sframe from Week 3 (here named polynomial_dataframe, since we use pandas instead of SFrames):
def polynomial_dataframe(feature, degree):
    # feature is pandas.Series type
    # assume that degree >= 1
    # initialize the dataframe:
    poly_dataframe = pd.DataFrame()
    # and set poly_dataframe['power_1'] equal to the passed feature
    poly_dataframe['power_1'] = feature
    # first check if degree > 1
    if degree > 1:
        # then loop over the remaining degrees:
        for power in range(2, degree + 1):
            # first we'll give the column a name:
            name = 'power_' + str(power)
            # assign poly_dataframe[name] to be feature^power
            poly_dataframe[name] = poly_dataframe['power_1'].apply(lambda x: x**power)
    return poly_dataframe
Week_4_Ridge_Regression/assign_1_ridge-regression.ipynb
Santana9937/Regression_ML_Specialization
mit
Let's use matplotlib to visualize what a polynomial regression looks like on the house data.
# Dictionary with the correct dtypes for the DataFrame columns
dtype_dict = {'bathrooms': float, 'waterfront': int, 'sqft_above': int,
              'sqft_living15': float, 'grade': int, 'yr_renovated': int,
              'price': float, 'bedrooms': float, 'zipcode': str,
              'long': float, 'sqft_lot15': float, 'sqft_living': float,
              'floors': str, 'condition': int, 'lat': float, 'date': str,
              'sqft_basement': int, 'yr_built': int, 'id': str,
              'sqft_lot': int, 'view': int}

sales = pd.read_csv('kc_house_data.csv', dtype=dtype_dict)
Plotting the data we are working with
plt.figure(figsize=(8,6))
plt.plot(sales['sqft_living'], sales['price'], '.')
plt.xlabel('Living Area (ft^2)', fontsize=16)
plt.ylabel('House Price ($)', fontsize=16)
plt.title('King County, Seattle House Price Data', fontsize=18)
plt.axis([0.0, 14000.0, 0.0, 8000000.0])
plt.show()
Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using polynomial_dataframe() and fit a model with these features. When fitting the model, use an L2 penalty of 1e-5:
# Building dataframe with 15 polynomial features
poly15_data = polynomial_dataframe(sales['sqft_living'], 15)

l2_small_penalty = 1e-5
model = linear_model.Ridge(alpha=l2_small_penalty, normalize=True)
model.fit(poly15_data, sales['price'])
Note: When we have so many features and so few data points, the solution can become highly numerically unstable, which can sometimes lead to strange unpredictable results. Thus, rather than using no regularization, we will introduce a tiny amount of regularization (l2_penalty=1e-5) to make the solution numerically stable. (In lecture, we discussed the fact that regularization can also help with numerical stability, and here we are seeing a practical example.) With the L2 penalty specified above, fit the model and print out the learned weights. QUIZ QUESTION: What's the learned value for the coefficient of feature power_1?
print 'Weight for power_1 feature is: %.2f' % (model.coef_[0])
Observe overfitting. Recall from Week 3 that the polynomial fit of degree 15 changed wildly whenever the data changed. In particular, when we split the sales data into four subsets and fit a model of degree 15, the result came out very different for each subset. The model had high variance. We will see in a moment that ridge regression reduces such variance, but first we must reproduce the Week 3 experiment. First, split the sales data into four subsets of roughly equal size and call them set_1, set_2, set_3, and set_4. (The course uses the .random_split function with seed=0; here we load the pre-split CSV files instead.)
set_1 = pd.read_csv('wk3_kc_house_set_1_data.csv', dtype=dtype_dict)
set_2 = pd.read_csv('wk3_kc_house_set_2_data.csv', dtype=dtype_dict)
set_3 = pd.read_csv('wk3_kc_house_set_3_data.csv', dtype=dtype_dict)
set_4 = pd.read_csv('wk3_kc_house_set_4_data.csv', dtype=dtype_dict)
Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model.
# Putting data and keys in lists for looping
data_list = [set_1, set_2, set_3, set_4]
key_list = ['set_1', 'set_2', 'set_3', 'set_4']

# Dicts holding the polynomial features and the regression model for each subset
poly_15_dframe_dict = {}
models_poly_15_dict = {}

# First, learn models with a really small L2 penalty
l2_small_penalty = 1e-9

# Looping over the four data subsets
for key, dframe in zip(key_list, data_list):
    # Building the 15-degree polynomial features for this subset
    poly_15_dframe_dict[key] = polynomial_dataframe(dframe['sqft_living'], 15)
    # Fitting a ridge regression model for this subset
    models_poly_15_dict[key] = linear_model.Ridge(alpha=l2_small_penalty, normalize=True)
    models_poly_15_dict[key].fit(poly_15_dframe_dict[key], dframe['price'])
Plotting the data and the 4 different 15 degree polynomials we learned from the data.
plt.figure(figsize=(8,6))
plt.plot(sales['sqft_living'], sales['price'], '.', label='House Price Data')
plt.hold(True)

for i, key in enumerate(key_list):
    leg_label = 'Model ' + str(i+1)
    plt.plot(poly_15_dframe_dict[key]['power_1'],
             models_poly_15_dict[key].predict(poly_15_dframe_dict[key]),
             '-', label=leg_label)

plt.hold(False)
plt.legend(loc='upper left', fontsize=16)
plt.xlabel('Living Area (ft^2)', fontsize=16)
plt.ylabel('House Price ($)', fontsize=16)
plt.title('4 Diff. 15th Deg. Polynomial Regr. Models, Small L2 Penalty', fontsize=16)
plt.axis([0.0, 14000.0, 0.0, 8000000.0])
plt.show()
The four curves should differ from one another a lot, as should the coefficients you learned. QUIZ QUESTION: For the models learned in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
power_1_coeff_list = []
for key in key_list:
    power_1_coeff_list.append(models_poly_15_dict[key].coef_[0])

print 'Smallest power_1 weight with small L2 penalty is: %.2f' % (min(power_1_coeff_list))
print 'Largest power_1 weight with small L2 penalty is: %.2f' % (max(power_1_coeff_list))
Ridge regression comes to the rescue. Generally, whenever we see weights change so much in response to a change in the data, we believe the variance of our estimate to be large. Ridge regression aims to address this by penalizing "large" weights. (The weights of model15 looked quite small, but they are not that small, because the 'sqft_living' input is on the order of thousands.) With the argument l2_penalty=1e5, fit a 15th-order polynomial model on set_1, set_2, set_3, and set_4. Other than the change in the l2_penalty parameter, the code is the same as in the experiment above.
# Dicts holding the polynomial features and the regression model for each subset
poly_15_dframe_dict = {}
models_poly_15_dict = {}

# Re-learn models with a large L2 penalty
l2_large_penalty = 1.23e2

# Looping over the four data subsets
for key, dframe in zip(key_list, data_list):
    poly_15_dframe_dict[key] = polynomial_dataframe(dframe['sqft_living'], 15)
    models_poly_15_dict[key] = linear_model.Ridge(alpha=l2_large_penalty, normalize=True)
    models_poly_15_dict[key].fit(poly_15_dframe_dict[key], dframe['price'])

plt.figure(figsize=(8,6))
plt.plot(sales['sqft_living'], sales['price'], '.', label='House Price Data')
plt.hold(True)

for i, key in enumerate(key_list):
    leg_label = 'Model ' + str(i+1)
    plt.plot(poly_15_dframe_dict[key]['power_1'],
             models_poly_15_dict[key].predict(poly_15_dframe_dict[key]),
             '-', label=leg_label)

plt.hold(False)
plt.legend(loc='upper left', fontsize=16)
plt.xlabel('Living Area (ft^2)', fontsize=16)
plt.ylabel('House Price ($)', fontsize=16)
plt.title('4 Diff. 15th Deg. Polynomial Regr. Models, Large L2 Penalty', fontsize=16)
plt.axis([0.0, 14000.0, 0.0, 8000000.0])
plt.show()
These curves should vary a lot less, now that you applied a high degree of regularization. QUIZ QUESTION: For the models learned with the high level of regularization in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
power_1_coeff_list = []
for key in key_list:
    power_1_coeff_list.append(models_poly_15_dict[key].coef_[0])

print 'Smallest power_1 weight with large L2 penalty is: %.2f' % (min(power_1_coeff_list))
print 'Largest power_1 weight with large L2 penalty is: %.2f' % (max(power_1_coeff_list))
Selecting an L2 penalty via cross-validation. Just like the polynomial degree, the L2 penalty is a "magic" parameter we need to select. We could use the validation set approach as we did in the last module, but that approach has a major disadvantage: it leaves fewer observations available for training. Cross-validation seeks to overcome this issue by using all of the training set in a smart way. We will implement a kind of cross-validation called k-fold cross-validation. The method gets its name because it involves dividing the training set into k segments of roughly equal size. Similar to the validation set method, we measure the validation error with one of the segments designated as the validation set. The major difference is that we repeat the process k times, as follows: Set aside segment 0 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set<br> Set aside segment 1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set<br> ...<br> Set aside segment k-1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set. After this process, we compute the average of the k validation errors and use it as an estimate of the generalization error. Notice that all observations are used for both training and validation as we iterate over segments of data. To estimate the generalization error well, it is crucial to shuffle the training data before dividing it into segments. We reserve 10% of the data as the test set and shuffle the remainder.
train_valid_shuffled = pd.read_csv('wk3_kc_house_train_valid_shuffled.csv', dtype=dtype_dict) test = pd.read_csv('wk3_kc_house_test_data.csv', dtype=dtype_dict)
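The segment boundaries used in the k-fold procedure come from simple integer arithmetic. A toy illustration (n and k here are made up for demonstration, not the actual dataset size):

```python
# Toy illustration of k-fold segment boundaries (n and k are illustrative)
n, k = 23, 4
bounds = []
for i in range(k):
    start = (n * i) // k             # first row of segment i
    end = (n * (i + 1)) // k - 1     # last row of segment i
    bounds.append((start, end))
print(bounds)  # contiguous segments covering rows 0..n-1
```

The segments are disjoint, contiguous, and together cover every row exactly once, which is what makes each observation appear in exactly one validation fold.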
Let us familiarize ourselves with array slicing with DataFrames. To extract a continuous slice from a DataFrame, use a colon in square brackets. For instance, the following cell extracts rows 0 to 9 of train_valid_shuffled. Notice that the first index (0) is included in the slice but the last index (10) is omitted.
train_valid_shuffled[0:10] # rows 0 to 9
Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the train_valid_shuffled dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above. Extract the fourth segment (segment 3) and assign it to a variable called validation4.
i = 3
k = 10
n = len(train_valid_shuffled)

start_ind = (n * i) // k
end_ind = (n * (i + 1)) // k - 1
validation4 = train_valid_shuffled[start_ind : end_ind + 1]
Extract the remainder of the data after excluding fourth segment (segment 3) and assign the subset to train4.
i = 3
k = 10
n = len(train_valid_shuffled)

start_ind = (n * i) // k
end_ind = (n * (i + 1)) // k - 1
train4 = train_valid_shuffled[0:start_ind].append(train_valid_shuffled[end_ind + 1 : n])
Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function returns the average validation error using k segments as validation sets. For each i in [0, 1, ..., k-1]: Compute the starting and ending indices of segment i and call them 'start' and 'end'. Form the validation set by taking a slice (start:end+1) from the data. Form the training set by appending slice (end+1:n) to the end of slice (0:start). Train a linear model on the training set just formed, with the given l2_penalty. Compute the validation error using the validation set just formed.
def k_fold_cross_validation(k, l2_penalty, data, output_vals):
    # n is the number of observations; cv_error_list stores the k validation errors
    n = len(data)
    cv_error_list = []

    # Looping over the k folds
    for i in range(k):
        # Getting the starting and ending index of the ith segment
        start = (n * i) // k
        end = (n * (i + 1)) // k - 1

        # Using start and end to split data into validation and training sets
        cv_set = data[start : end + 1]
        cv_output = output_vals[start : end + 1]
        training_set = data[0:start].append(data[end + 1 : n])
        training_output = output_vals[0:start].append(output_vals[end + 1 : n])

        # Fitting a ridge regression model on the training set only
        model_train_data = linear_model.Ridge(alpha=l2_penalty, normalize=True)
        model_train_data.fit(training_set, training_output)

        # Predicting on the validation set and computing its RSS
        predictions = model_train_data.predict(cv_set)
        RSS_cv_set = sum((predictions - cv_output)**2)
        cv_error_list.append(RSS_cv_set)

    # Return the average validation error
    return sum(cv_error_list) / float(len(cv_error_list))
Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following: * We will again be aiming to fit a 15th-order polynomial model using the sqft_living input * For l2_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, you can use this Numpy function: np.logspace(1, 7, num=13).) * Run 10-fold cross-validation with l2_penalty * Report which L2 penalty produced the lowest average validation error. Note: since the degree of the polynomial is now fixed to 15, to make things faster, you should generate polynomial features in advance and re-use them throughout the loop. Make sure to use train_valid_shuffled when generating polynomial features!
l2_penalty_list = np.logspace(3, 9, num=26)

poly_15_dframe = polynomial_dataframe(train_valid_shuffled['sqft_living'], 15)
output_values = train_valid_shuffled['price']

l2_RSS_list = []
for l2_pen in l2_penalty_list:
    RSS_error = k_fold_cross_validation(10, l2_pen, poly_15_dframe, output_values)
    l2_RSS_list.append((RSS_error, l2_pen))
QUIZ QUESTIONS: What is the best value for the L2 penalty according to 10-fold validation?
print 'Minimum value for RSS error is : %.2e' % min(l2_RSS_list)[0]
print 'L2 penalty for this RSS error is: %.2e' % min(l2_RSS_list)[1]
You may find it useful to plot the k-fold cross-validation errors you have obtained to better understand the behavior of the method.
# Putting all L2 penalties and RSS errors into lists for plotting
L2_plot_list = []
RSS_plot_list = []
for entry in l2_RSS_list:
    L2_plot_list.append(entry[1])
    RSS_plot_list.append(entry[0])

# Plot the l2_penalty values on the x axis and the cross-validation error on the y axis.
# Using plt.xscale('log') makes the plot more intuitive.
plt.figure(figsize=(8,6))
plt.plot(L2_plot_list, RSS_plot_list, '-')
plt.xscale('log')
plt.xlabel('L2 penalty ' + r'$(\lambda)$', fontsize=16)
plt.ylabel('Average Cross-Validation RSS', fontsize=16)
plt.title('Cross-Validation RSS vs. L2 Penalty', fontsize=16)
plt.show()
Once you found the best value for the L2 penalty using cross-validation, it is important to retrain a final model on all of the training data using this value of l2_penalty. This way, your final model will be trained on the entire dataset.
min_L2_pen = min(l2_RSS_list)[1]
QUIZ QUESTION: Using the best L2 penalty found above, train a model using all training data. What is the RSS on the TEST data of the model you learn with this L2 penalty?
# Loading the training set data and building the dataframe w/ 15 polynomial features
train_data = pd.read_csv('wk3_kc_house_train_data.csv', dtype=dtype_dict)
train_data = train_data.sort_values(['sqft_living', 'price'])
poly_15_train_data = polynomial_dataframe(train_data['sqft_living'], 15)

# Training a ridge regression model with the L2 penalty that gave the smallest RSS error
model_train_data = linear_model.Ridge(alpha=min_L2_pen, normalize=True)
model_train_data.fit(poly_15_train_data, train_data['price'])

# Loading the test data and building the dataframe w/ 15 polynomial features
test_data = pd.read_csv('wk3_kc_house_test_data.csv', dtype=dtype_dict)
poly_15_test_data = polynomial_dataframe(test_data['sqft_living'], 15)

# Using the weights learned from the training data to predict on the test data
predictions = model_train_data.predict(poly_15_test_data)

# Computing the RSS on the test data
RSS_test_set = sum((predictions - test_data['price'])**2)
print 'RSS on test data with min_L2_pen: %.2e' % (RSS_test_set)
First, we introduce vector compression by Product Quantization (PQ) [Jegou, TPAMI 11]. The first task is to train an encoder. Let us assume that there are 1000 six-dimensional vectors for training: $X_1 \in \mathbb{R}^{1000\times6}$.
X1 = numpy.random.random((1000, 6))
print("X1.shape:\n{}\n".format(X1.shape))
print("X1:\n{}".format(X1))
tutorial/1_pqkmeans.ipynb
DwangoMediaVillage/pqkmeans
mit
Then we can train a PQEncoder using $X_1$.
encoder = pqkmeans.encoder.PQEncoder(num_subdim=2, Ks=256) encoder.fit(X1)
The encoder takes two parameters: num_subdim and Ks. In the training step, each vector is split into num_subdim sub-vectors and quantized with Ks codewords. num_subdim decides the bit length of the PQ-code, and is typically set to 4, 8, etc. Ks is usually set to 256 so that each sub-code can be represented by $\log_2 256 = 8$ bits. In this example, each 6D training vector is split into num_subdim(=2) sub-vectors (two 3D vectors). Consequently, the 1000 6D training vectors are split into two sets of 1000 3D vectors, and k-means clustering is applied to each set of sub-vectors with Ks=256. Note that, alternatively, you can use fit_generator for a large dataset; this will be covered in tutorial 3. After the training step, the encoder stores the resulting codewords (2 subspaces × 256 codewords × 3 dimensions):
print("codewords.shape:\n{}".format(encoder.codewords.shape))
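The splitting step described above can be illustrated with plain numpy. This is a sketch for illustration only; the toy array and its shapes are assumptions, not the tutorial's data:

```python
import numpy as np

# Each 6-D vector is split into num_subdim(=2) sub-vectors of 3 dims each
X = np.arange(24, dtype=float).reshape(4, 6)  # 4 toy 6-D vectors
num_subdim = 2
sub_spaces = np.split(X, num_subdim, axis=1)
print(sub_spaces[0].shape)  # each subspace holds 4 vectors of 3 dims
```

Each of the num_subdim subspaces is then clustered independently with k-means (Ks clusters), and the resulting centroids become that subspace's codewords.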
Note that you can train the encoder beforehand using training data, and write/read the encoder via pickle.
# pickle.dump(encoder, open('encoder.pkl', 'wb')) # Write # encoder = pickle.load(open('encoder.pkl', 'rb')) # Read
Next, let us consider database vectors (2000 six-dimensional vectors, $X_2$) that we'd like to compress.
X2 = numpy.random.random((2000, 6))
print("X2.shape:\n{}\n".format(X2.shape))
print("X2:\n{}\n".format(X2))
print("Data type of each element:\n{}\n".format(type(X2[0][0])))
print("Memory usage:\n{} byte".format(X2.nbytes))
We can compress these vectors by the trained PQ-encoder.
X2_pqcode = encoder.transform(X2)
print("X2_pqcode.shape:\n{}\n".format(X2_pqcode.shape))
print("X2_pqcode:\n{}\n".format(X2_pqcode))
print("Data type of each element:\n{}\n".format(type(X2_pqcode[0][0])))
print("Memory usage:\n{} byte".format(X2_pqcode.nbytes))
Each vector is split into num_subdim(=2) sub-vectors, and the nearest codeword is searched for each sub-vector. The id of the nearest codeword is recorded, i.e., two integers in this case. This representation is called a PQ-code. A PQ-code is a memory-efficient data representation: the original 6D vector requires $6 \times 64 = 384$ bits if a 64-bit float is used for each element, whereas a PQ-code requires only $2 \times \log_2 256 = 16$ bits. Note that we can approximately reconstruct the original vector from a PQ-code by fetching the codewords using the PQ-code:
X2_reconstructed = encoder.inverse_transform(X2_pqcode)
print("original X2:\n{}\n".format(X2))
print("reconstructed X2:\n{}".format(X2_reconstructed))
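The memory figures quoted above follow from simple bit arithmetic; a quick check, with the values taken directly from the text:

```python
import math

# Bit-count arithmetic: raw 6-D float64 vector vs. its PQ-code
d = 6
raw_bits = d * 64                          # 64-bit float per element
num_subdim, Ks = 2, 256
pq_bits = num_subdim * int(math.log2(Ks))  # 8 bits per sub-code
print(raw_bits, pq_bits)  # 384 16
```

So each compressed vector is 24x smaller, at the cost of the quantization error seen in the reconstruction.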
As can be seen, the reconstructed vectors are similar to the original ones. In a large-scale data processing scenario where not all data can be stored in memory, you can compress the input vectors to PQ-codes and store only the PQ-codes (X2_pqcode).
# numpy.save('pqcode.npy', X2_pqcode) # You can store the PQ-codes only
2. Clustering by PQk-means. Let us run clustering over the PQ-codes. The clustering object is instantiated with the trained encoder. Here, we set the number of clusters to $k=10$.
kmeans = pqkmeans.clustering.PQKMeans(encoder=encoder, k=10)
Let's run the PQk-means over X2_pqcode.
clustered = kmeans.fit_predict(X2_pqcode)
print(clustered[:100])  # Just show the first 100 results
The resulting vector (clustered) contains the id of the assigned cluster center for each input PQ-code.
print("The id of assigned codeword for the 1st PQ-code is {}".format(clustered[0]))
print("The id of assigned codeword for the 2nd PQ-code is {}".format(clustered[1]))
print("The id of assigned codeword for the 3rd PQ-code is {}".format(clustered[2]))
You can fetch the cluster centers by:
print("clustering centers:{}\n".format(kmeans.cluster_centers_))
The centers are also PQ-codes. They can be reconstructed by the PQ-encoder.
# Convert to np.array with the proper dtype
clustering_centers_numpy = numpy.array(kmeans.cluster_centers_, dtype=encoder.code_dtype)
# From PQ-codes to 6D vectors
clustering_centers_reconstructd = encoder.inverse_transform(clustering_centers_numpy)
print("reconstructed clustering centers:\n{}".format(clustering_centers_reconstructd))
Let's summarize the result:
print("13th input vector:\n{}\n".format(X2[12]))
print("13th PQ code:\n{}\n".format(X2_pqcode[12]))
print("reconstructed 13th PQ code:\n{}\n".format(X2_reconstructed[12]))
print("ID of the assigned center:\n{}\n".format(clustered[12]))
print("Assigned center (PQ-code):\n{}\n".format(kmeans.cluster_centers_[clustered[12]]))
print("Assigned center (reconstructed):\n{}".format(clustering_centers_reconstructd[clustered[12]]))
Note that you can pickle the kmeans instance. The instance can be reused later as a vector quantizer for new input vectors.
# pickle.dump(kmeans, open('kmeans.pkl', 'wb')) # Write # kmeans = pickle.load(open('kmeans.pkl', 'rb')) # Read
3. Comparison to other clustering methods Let us compare PQk-means and the traditional k-means using high-dimensional data.
from sklearn.cluster import KMeans

X3 = numpy.random.random((1000, 1024))   # 1K 1024-dim vectors, for training
X4 = numpy.random.random((10000, 1024))  # 10K 1024-dim vectors, for database
K = 100

# Train the encoder
encoder_large = pqkmeans.encoder.PQEncoder(num_subdim=4, Ks=256)
encoder_large.fit(X3)

# Encode the vectors to PQ-codes
X4_pqcode = encoder_large.transform(X4)
tutorial/1_pqkmeans.ipynb
DwangoMediaVillage/pqkmeans
mit
Let's run PQk-means and see the computational cost:
%time clustered_pqkmeans = pqkmeans.clustering.PQKMeans(encoder=encoder_large, k=K).fit_predict(X4_pqcode)
tutorial/1_pqkmeans.ipynb
DwangoMediaVillage/pqkmeans
mit
Then, run the traditional k-means clustering
%time clustered_kmeans = KMeans(n_clusters=K, n_jobs=-1).fit_predict(X4)
tutorial/1_pqkmeans.ipynb
DwangoMediaVillage/pqkmeans
mit
PQk-means should be tens to hundreds of times faster than k-means, depending on your machine. Next, let's check the accuracy. Since PQk-means is an approximation of k-means, k-means achieves the lower error:
_, pqkmeans_micro_average_error, _ = pqkmeans.evaluation.calc_error(clustered_pqkmeans, X4, K)
_, kmeans_micro_average_error, _ = pqkmeans.evaluation.calc_error(clustered_kmeans, X4, K)
print("PQk-means, micro avg error: {}".format(pqkmeans_micro_average_error))
print("k-means, micro avg error: {}".format(kmeans_micro_average_error))
tutorial/1_pqkmeans.ipynb
DwangoMediaVillage/pqkmeans
mit
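The micro-average error reported by calc_error is, conceptually, the mean distance between each vector and the centroid of its assigned cluster; a toy NumPy sketch of that idea (an illustration only, not the library's implementation):

```python
import numpy as np

def micro_average_error(X, labels, k):
    # centroid of each cluster, then mean distance of vectors to their assigned centroid
    centers = np.array([X[labels == i].mean(axis=0) for i in range(k)])
    return np.linalg.norm(X - centers[labels], axis=1).mean()

X = np.array([[0., 0.], [0., 0.], [10., 10.], [10., 10.]])
labels = np.array([0, 0, 1, 1])
print(micro_average_error(X, labels, k=2))  # → 0.0
```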
Making a GrainDist object from scratch
MRN = graindist.sizedist.Powerlaw(amax=AMAX)
ECUT = graindist.sizedist.ExpCutoff(acut=AMAX)
SIL = graindist.composition.CmSilicate(rho=RHO)

gd_mrn = graindist.GrainDist(MRN, SIL, md=MD, custom=True)
gd_ecut = graindist.GrainDist(ECUT, SIL, md=MD, custom=True)

ax = plt.subplot(111)
gd_mrn.plot(ax, color='k', lw=2, label='Powerlaw')
gd_ecut.plot(ax, color='b', lw=2, alpha=0.8, label='Exponential cut-off')
plt.legend(loc='upper left', frameon=False)
examples/plot_graindist.ipynb
eblur/newdust
bsd-2-clause
Making a GrainDist object with a helper function

This is a shortcut for making some common grain distributions. You can change AMAX and RHO from here, too. For the different grain size distributions, amax acts in a different way:
- graindist.sizedist.Grain: amax sets the singular grain size
- graindist.sizedist.Powerlaw: amax sets the maximum grain size in the distribution
- graindist.sizedist.ExpCutoff: amax sets the "acut" value
gd_mrn2 = graindist.GrainDist('Powerlaw', 'Silicate', amax=AMAX, rho=RHO, md=MD)
gd_ecut2 = graindist.GrainDist('ExpCutoff', 'Silicate', amax=AMAX, rho=RHO, md=MD)

ax = plt.subplot(111)
gd_mrn.plot(ax, color='k', lw=2, label='')
gd_mrn2.plot(ax, color='r', lw=2, ls='--', label='Powerlaw')
gd_ecut.plot(ax, color='k', lw=2, label='')
gd_ecut2.plot(ax, color='b', lw=2, ls='--', label='Exponential Cutoff')
plt.legend(loc='upper left', frameon=False)
examples/plot_graindist.ipynb
eblur/newdust
bsd-2-clause
Note that the values I've used in this example are different from the defaults. Silicate has a default grain material density of 3.8 g cm^-3.
gd_mrn3 = graindist.GrainDist('Powerlaw', 'Silicate', md=MD)
gd_ecut3 = graindist.GrainDist('ExpCutoff', 'Silicate', md=MD)

ax = plt.subplot(111)
gd_mrn.plot(ax, color='k', lw=2, label='')
gd_mrn3.plot(ax, color='r', lw=2, ls='--', label='Powerlaw')
gd_ecut.plot(ax, color='k', lw=2, label='')
gd_ecut3.plot(ax, color='b', lw=2, ls='--', label='Exponential Cutoff')
plt.legend(loc='upper left', frameon=False)
examples/plot_graindist.ipynb
eblur/newdust
bsd-2-clause
You can also get the mass density of dust grains, but the GrainDist.plot function only does number density.
plt.plot(gd_mrn.a, gd_mrn.mdens, label='Powerlaw')
plt.plot(gd_ecut.a, gd_ecut.mdens, label='ExpCutoff')
plt.loglog()
plt.xlabel('Radius (um)')
plt.ylabel('Mass density (g cm$^{-2}$ um$^{-1}$)')
plt.legend(loc='upper right', frameon=False)
examples/plot_graindist.ipynb
eblur/newdust
bsd-2-clause
You can't plot single grain sizes; it prints the number density instead.
gd_single = graindist.GrainDist('Grain', 'Silicate', amax=AMAX)
gd_single.plot()
examples/plot_graindist.ipynb
eblur/newdust
bsd-2-clause
Imports and configuration

We set the path to the config.cfg file using the environment variable 'PYPMJ_CONFIG_FILE'. If you do not have a configuration file yet, please look into the Setting up a configuration file example.
import os
os.environ['PYPMJ_CONFIG_FILE'] = '/path/to/your/config.cfg'
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
On import, we receive information on the configured logging and JCMsuite version, if the logging level is appropriate. We can get additional info on the version and the license.
jpy.jcm_license_info()
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
The versions of JCMsuite and pypmj can be accessed via the module attributes __jcm_version__ and __version__:
print 'Version of JCMsuite:', jpy.__jcm_version__
print 'Version of pypmj:', jpy.__version__
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
Extensions

You can get a list of available extensions to pypmj using the extensions module attribute.
jpy.extensions
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
Extensions may have additional dependencies or may need data that is not shipped with pypmj. You can load an extension using the load_extension function.
jpy.load_extension('materials')
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
Simulation

Preparing and configuring the simulation set

We start by creating a JCMProject-instance describing the project we'd like to run. The mie2D-project is located in a subdirectory of our project catalog, which is configured in the section Data under key projects in the configuration file. We could also specify an absolute path instead.

Since we want to leave the actual project untouched, we specify a working_dir into which the project is copied before. JCMgeo and the template conversion will be executed in the working directory, for example. If we do not specify a working_dir, a folder called current_run will be used in the current directory.

Note: If you did not configure the project directory shipped with pypmj in the configuration, simply specify the absolute path to the mie2D-project here.
wdir = os.path.abspath('working_dir')
project = jpy.JCMProject('scattering/mie/mie2D', working_dir=wdir)
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
The JCMProject-instance automatically detected the name of the project file:
project.project_file_name
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
If it fails to find a proper file or if it finds multiple project files, it raises an Exception. You can specify the project file name manually using the parameter project_file_name on initialization.

To run simulations using this project, we create a SimulationSet (this could also be a single simulation). The keys that are necessary to translate the JCM template files (i.e. the .jcmp(t)-files) need to be given as a nested dict with keys constants, parameters and geometry in the top level. The values for these keys need to be dicts as well, providing all necessary keys for the template translation in total. Their function is as follows:

- constants: can be of any type, but are not stored in the HDF5 store. This is useful for minor parameters, such as the info level in the project, as it does not change the result of the simulation. But it can also be used to pass complicated data, such as material data maps.
- parameters: All parameters that do not change the geometry, i.e. do not belong to the layout.jcmt template.
- geometry: All parameters that do change the geometry, i.e. belong to the layout.jcmt template.

If a sequence is provided for any of the parameters or geometry values, loops will be performed (depending on the combination_mode of the SimulationSet). In the mie2D project, there is only one parameter: the radius of the circle. This parameter changes the geometry! We'd like to scan over different radii and, consequently, provide a numpy.array for it. We leave the other two dicts empty.
mie_keys = {'constants': {},
            'parameters': {},
            'geometry': {'radius': np.linspace(0.3, 0.5, 40)}}
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
Now, the SimulationSet can be initialized. For now, we also set the storage_base and storage_folder attributes. This will ignore the storage base folder of your configuration and use the folder 'tmp_storage_folder' in the current working directory. And instead of a folder named by the current date, the subfolder will be called 'mie2D_test'.
simuset = jpy.SimulationSet(project, mie_keys,
                            storage_folder='mie2D_test',
                            storage_base=os.path.abspath('tmp_storage_folder'))
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
We are now informed about the directory in which our data is stored as configured in the configuration file and by the duplicate_path_levels parameter. The path is also stored in the attribute storage_dir. It now contains an .h5 database file:
os.listdir(simuset.storage_dir)
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
We can now make a schedule for the simulations that we want to perform. This means that all parameter combinations are determined, the simulations are sorted to minimize the number of calls to JCMgeo (which can be expensive), and the database is checked for matching simulations which have already been done. In our case, the database is still empty and we should end up with 40 simulations, as we specified 40 different radii.
simuset.make_simulation_schedule()
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
The store should still be empty at this time:
simuset.is_store_empty()
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
Depending on the configured servers, there might be multiple workstations or queues which can be used by the JCMdaemon to run the simulations. For this 2D project, we can restrict the resources to be only the local computer, i.e. 'localhost', and use only 2 workers.
simuset.use_only_resources('localhost')
simuset.resource_manager.resources.set_m_n_for_all(2, 1)
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
You can get a ResourceDict of the currently configured resources that will be used by the SimulationSet using the get_current_resources method, or simply using the resources attribute of the resource_manager.
simuset.get_current_resources()
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
Computing geometries only

Computing a geometry (i.e. running jcm.geo) for a specific simulation of our SimulationSet is an easy task now. We only need to call the compute_geometry method with the index of the simulation (or the Simulation instance itself). We can pass additional keyword arguments to jcm.geo, such as a jcmt_pattern if desired.

Let's compute the geometry for the first simulation of our set. The simulation has the following properties:
sim = simuset.simulations[0]
print sim
print 'keys:', sim.keys
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
Now we run jcm.geo for it (we also could have written simuset.compute_geometry(0))
simuset.compute_geometry(0)
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
The project's working directory now contains a grid file:
os.listdir(simuset.get_project_wdir())
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
Running a single simulation

Before demonstrating how to solve the complete set of simulations, we show how to solve a single simulation using JCMsolve. This can be very useful if the simulation is still being developed or if something needs to be recomputed later.

We can solve a specific simulation using the solve_single_simulation method by passing the simulation number or the Simulation-instance. It automatically computes the geometry (if compute_geometry is True) and adds the resources if necessary.
print 'Status before solving:', sim.status
results, logs = simuset.solve_single_simulation(sim)
print 'Status after solving:', sim.status
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
All the results and logs are also stored as attributes of the Simulation-instance as logs, error_message, exit_code and jcm_results, e.g.
print sim.logs['Out']
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
The fieldbag file path is also set as an attribute:
sim.fieldbag_file
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
Developing and using a processing function

So far we only have solved the simulation by (1) computing the geometry and (2) running JCMsolve. This is fine and it may be that no further step is desired. However, nothing is yet set up to be saved to the HDF5 store, so nothing but the input parameters will appear in it. As a minimal step, we may want to save the computational costs of our simulations. But normally, post processes will be part of your JCM project file and you may want to (3) extract information from them or even derive quantities from them by further processing.

If all you want to do is store the computational cost, this is automatically done by calling the process_results method of the Simulation-instance without further input arguments.
sim.process_results()
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
If the logging-level is set to 'DEBUG', you see the message 'No result processing was done.' to inform you that only the computational costs have been read out. The results that will be stored are now described by the (hidden) attribute _results_dict:
sim._results_dict
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
The status of the simulation is now updated.
sim.status
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
To execute further processings, we can use the processing_func-argument. From the docs:

The `processing_func` must be a function of one or two input arguments. A list of all results returned by post processes in JCMsolve are passed as the first argument to this function. If a second input argument is present, it must be called 'keys'. Then, the simulation keys are passed (i.e. self.keys). This is useful to use parameters of the simulation, e.g. the wavelength, inside your processing function. It must return a dict with key-value pairs that should be saved to the HDF5 store. Consequently, the values must be of types that can be stored to HDF5, otherwise Exceptions will occur in the saving steps.

In the mie2D-project a FluxIntegration is used as a PostProcess-section in mie2D.jcmp. Its results will be passed to the processing_func. As this is the only post process, the length of the list passed to the processing_func will be 1. We can see the list using
sim.jcm_results[1:]
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
We'd like to store the real part of the ElectromagneticFieldEnergyFlux of the zeroth source, as this is the scattering cross section (SCS) of our Mie-scatterer. The function processing_func will simply look like this:
def read_scs(pp):
    results = {}  # must be a dict
    results['SCS'] = pp[0]['ElectromagneticFieldEnergyFlux'][0][0].real
    return results
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
We can try it out by running process_results again, this time passing our custom function. We need to set overwrite to True to force the processing to be executed again.
sim.process_results(processing_func=read_scs, overwrite=True)
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
We now have an additional key called 'SCS' in our _results_dict. It will be stored in the HDF5 store.
sim._results_dict['SCS']
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
You can get the complete data of the simulation including input parameters and results as a pandas DataFrame:
sim._get_DataFrame()
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
Running all simulations

Finally, we can run all our simulations and process them using our custom processing_func. This is done in parallel using all the resources that we have added. The results will be appended to the HDF5 store.
simuset.run(N=10, processing_func=read_scs)

%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('notebook')

data = simuset.get_store_data().sort_values(by='radius')
data.plot(x='radius', y='SCS', title='Results of the simulation')
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
Adding data in a later run

Let's assume we want to extend our store in a later session. The automatic data comparison will detect that some of the data is already known and will only execute the missing simulations. The same mechanism helps to continue the SimulationSet at the point where it stopped, e.g. due to a keyboard interruption or a server error.

For demonstration, we extend our list of radii and reinitialize the SimulationSet. To show the power of the comparison mechanism, we provide the radii in a mixed-up order.
extended_radii = np.append(np.linspace(0.5, 0.6, 40)[1:], np.linspace(0.3, 0.5, 40))
mie_keys_extended = {'constants': {},
                     'parameters': {},
                     'geometry': {'radius': extended_radii}}
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
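The comparison boils down to a set difference between the requested and the already-stored parameter combinations; a toy sketch of that idea (an illustration, not pypmj's actual implementation):

```python
stored = {0.30, 0.35, 0.40}           # radii already present in the HDF5 store
requested = {0.30, 0.35, 0.40, 0.45}  # radii of the new, extended run

todo = sorted(requested - stored)     # only the missing simulations get executed
print(todo)  # → [0.45]
```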
We close the store, delete the SimulationSet instance and start all over.
simuset.close_store()
del simuset

simuset = jpy.SimulationSet(project, mie_keys_extended)
simuset.make_simulation_schedule()
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
We are now informed that matches have been found in the HDF5 store. The residual simulations can be executed as before. This time, we set the wdir_mode to 'zip' to demonstrate the automatic zipping and deletion of the working directories. If the directories are no longer needed, you can set wdir_mode to 'delete'. Also, we demonstrate the safe running of SimulationSets by using the utility function run_simusets_in_save_mode, which is tolerant to unforeseen errors and restarts the SimulationSet in such cases. It also sends status e-mails if this is configured in the configuration file.
jpy.utils.run_simusets_in_save_mode(simuset, N=10, processing_func=read_scs, wdir_mode='zip')
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
The storage folder now only contains the HDF5 store and a zip-archive with all the working directories.
os.listdir(simuset.storage_dir)
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
We can now plot our extended results.
simuset.get_store_data().sort_values(by='radius').plot(x='radius', y='SCS', title='Results of the simulation')
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
We can also write our data to a CSV or an Excel file using the write_store_data_to_file method:
simuset.write_store_data_to_file()  # default is results.csv in the storage folder
simuset.write_store_data_to_file(os.path.join(simuset.storage_dir, 'results_excel.xls'), mode='Excel')
examples/Using jcmpython - the mie2D-project.ipynb
cbpygit/pypmj
gpl-3.0
Observations:
- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.
- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm.
- If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.

<font color='blue'>
In summary:
- Initializing weights to very large random values does not work well.
- Hopefully initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part!

4 - He initialization

Finally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of sqrt(1./layers_dims[l-1]) where He initialization would use sqrt(2./layers_dims[l-1]).)

Exercise: Implement the following function to initialize your parameters with He initialization.

Hint: This function is similar to the previous initialize_parameters_random(...). The only difference is that instead of multiplying np.random.randn(..,..) by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
# GRADED FUNCTION: initialize_parameters_he

def initialize_parameters_he(layers_dims):
    """
    Arguments:
    layers_dims -- python array (list) containing the size of each layer.

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                  W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                  b1 -- bias vector of shape (layers_dims[1], 1)
                  ...
                  WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                  bL -- bias vector of shape (layers_dims[L], 1)
    """
    np.random.seed(3)
    parameters = {}
    L = len(layers_dims) - 1  # integer representing the number of layers

    for l in range(1, L + 1):
        ### START CODE HERE ### (≈ 2 lines of code)
        parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * np.sqrt(2 / layers_dims[l-1])
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
        ### END CODE HERE ###

    return parameters

parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Course 2/Initialization.ipynb
ShubhamDebnath/Coursera-Machine-Learning
mit
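The difference between the two schemes mentioned above is only the scaling factor applied to the random weights; a quick check for a hypothetical previous-layer size:

```python
import math

n_prev = 4  # hypothetical size of the previous layer

xavier_scale = math.sqrt(1. / n_prev)  # Xavier: sqrt(1/n_prev)
he_scale = math.sqrt(2. / n_prev)      # He: sqrt(2/n_prev), recommended for ReLU layers

print(xavier_scale, he_scale)  # → 0.5 0.7071067811865476
```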
Building on logistic regression with no L2 penalty assignment

Let us now build on the Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:
$$ P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))}, $$
where the feature vector $h(\mathbf{x}_i)$ is given by the word counts of important_words in the review $\mathbf{x}_i$. We will use the same code as in this past assignment to make probability predictions since this part is not affected by the L2 penalty. (Only the way in which the coefficients are learned is affected by the addition of a regularization term.)
'''
produces probabilistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
    # Take dot product of feature_matrix and coefficients
    ## YOUR CODE HERE
    ...

    # Compute P(y_i = +1 | x_i, w) using the link function
    ## YOUR CODE HERE
    predictions = ...

    return predictions
ml-classification/blank/module-4-linear-classifier-regularization-assignment-blank.ipynb
dnc1994/MachineLearning-UW
mit
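A minimal NumPy sketch of the link function above, kept separate from the graded blank (assuming feature_matrix is an (N, D) array and coefficients a length-D vector):

```python
import numpy as np

def predict_probability_sketch(feature_matrix, coefficients):
    # score_i = w^T h(x_i), one dot product per data point
    scores = np.dot(feature_matrix, coefficients)
    # sigmoid link: P(y_i = +1 | x_i, w) = 1 / (1 + exp(-score_i))
    return 1.0 / (1.0 + np.exp(-scores))

print(predict_probability_sketch(np.zeros((1, 3)), np.zeros(3)))  # → [0.5]
```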
required software

This analysis tool requires the seaborn module for the heatmap plotting. toyplot also has a matrix function for plotting heatmaps, but I found that it grinds on assemblies with many taxa.
# conda install -c conda-forge seaborn

import ipyrad
import ipyrad.analysis as ipa
from ipyrad.analysis.sharing import Sharing

# the path to your VCF or HDF5 formatted snps file
data = "/home/isaac/ipyrad/test-data/pedicularis/analysis-ipyrad/ped_outfiles/ped.snps.hdf5"

imap = {
    "prz": ["32082_przewalskii_SRR1754729", "33588_przewalskii_SRR1754727"],
    "cys": ["41478_cyathophylloides_SRR1754722", "41954_cyathophylloides_SRR1754721"],
    "cya": ["30686_cyathophylla_SRR1754730"],
    "sup": ["29154_superba_SRR1754715"],
    "cup": ["33413_thamno_SRR1754728"],
    "tha": ["30556_thamno_SRR1754720"],
    "rck": ["35236_rex_SRR1754731"],
    "rex": ["35855_rex_SRR1754726", "40578_rex_SRR1754724"],
    "lip": ["39618_rex_SRR1754723", "38362_rex_SRR1754725"],
}

# load the snp data into the sharing tool with arguments
share = Sharing(
    data=data,
    imap=imap,
)
share.run()
share.sharing_matrix

# Plot shared snps/missingness as proportions scaled to max values
fig, ax = share.draw()

# Plot shared snps/missingness as raw values
fig, ax = share.draw(scaled=False)
newdocs/API-analysis/cookbook-sharing.ipynb
dereneaton/ipyrad
gpl-3.0
Removing samples from the sharing matrix is as simple as removing them from the imap. This can be convenient for speeding up the pairwise calculations if you have lots of samples and only want to examine a few of them.
imap = {
    "prz": ["32082_przewalskii_SRR1754729", "33588_przewalskii_SRR1754727"],
    "cys": ["41478_cyathophylloides_SRR1754722", "41954_cyathophylloides_SRR1754721"],
    "cya": ["30686_cyathophylla_SRR1754730"],
    "sup": ["29154_superba_SRR1754715"],
    "cup": ["33413_thamno_SRR1754728"],
    "tha": ["30556_thamno_SRR1754720"],
    # "rck": ["35236_rex_SRR1754731"],
    # "rex": ["35855_rex_SRR1754726", "40578_rex_SRR1754724"],
    # "lip": ["39618_rex_SRR1754723", "38362_rex_SRR1754725"],
}

# load the snp data into sharing tool with arguments
share = Sharing(
    data=data,
    imap=imap,
)
share.run()
fig, ax = share.draw()
newdocs/API-analysis/cookbook-sharing.ipynb
dereneaton/ipyrad
gpl-3.0
The order of samples in the figure can be reconfigured with the keep_order argument. This allows for flexibly reordering samples in the figure without recalculating the sharing values. This parameter will accept a list or a dictionary, and will only plot the specified samples in list order.
imap = {
    "prz": ["32082_przewalskii_SRR1754729", "33588_przewalskii_SRR1754727"],
    "cys": ["41478_cyathophylloides_SRR1754722", "41954_cyathophylloides_SRR1754721"],
    "cya": ["30686_cyathophylla_SRR1754730"],
    "sup": ["29154_superba_SRR1754715"],
    "cup": ["33413_thamno_SRR1754728"],
    "tha": ["30556_thamno_SRR1754720"],
    "rck": ["35236_rex_SRR1754731"],
    "rex": ["35855_rex_SRR1754726", "40578_rex_SRR1754724"],
    "lip": ["39618_rex_SRR1754723", "38362_rex_SRR1754725"],
}

# Hack to get a list of samples in some order
order = sum(imap.values(), [])[::-1]
print(order)

share = Sharing(
    data=data,
    imap=imap,
    mincov=0.83,
)
share.run()
fig, ax = share.draw(keep_order=order)
newdocs/API-analysis/cookbook-sharing.ipynb
dereneaton/ipyrad
gpl-3.0
An example of using keep_order to plot a subset of the imap samples
imap2 = {
    "prz": ["32082_przewalskii_SRR1754729", "33588_przewalskii_SRR1754727"],
    "cys": ["41478_cyathophylloides_SRR1754722", "41954_cyathophylloides_SRR1754721"],
    "cya": ["30686_cyathophylla_SRR1754730"],
    "sup": ["29154_superba_SRR1754715"],
    "cup": ["33413_thamno_SRR1754728"],
    "tha": ["30556_thamno_SRR1754720"],
    # "rck": ["35236_rex_SRR1754731"],
    # "rex": ["35855_rex_SRR1754726", "40578_rex_SRR1754724"],
    # "lip": ["39618_rex_SRR1754723", "38362_rex_SRR1754725"],
}

_, _ = share.draw(keep_order=imap2)
newdocs/API-analysis/cookbook-sharing.ipynb
dereneaton/ipyrad
gpl-3.0
The matrices can also be sorted either by shared "loci" or shared "missing". This will sort the rows/columns by mean locus sharing or mean missingness. Notice, there is no need to rerun the pairwise calculations; this is just a manipulation of the data. The sort argument will be superseded by order.
fig, ax = share.draw(sort="loci")
imap
[item for sublist in imap.values() for item in sublist]
newdocs/API-analysis/cookbook-sharing.ipynb
dereneaton/ipyrad
gpl-3.0
Show most negative reviews for giraffe
giraffe_reviews[-1]['review']
giraffe_reviews[-2]['review']

selected_words = ['awesome', 'great', 'fantastic', 'amazing', 'love', 'horrible', 'bad', 'terrible', 'awful', 'wow', 'hate']

def sent_word_count(word_counts):
    if 'hate' in word_counts:
        return word_counts['hate']
    else:
        return 0

products['hate'] = products['word_count'].apply(sent_word_count)

word_dict = {}
for word in selected_words:
    word_dict[word] = products[word].sum()

train_data, test_data = products.random_split(.8, seed=0)

selected_words_model = graphlab.logistic_classifier.create(train_data,
                                                           target='sentiment',
                                                           features=['awesome'],
                                                           validation_set=test_data)
selected_words_model['coefficients']
swm_coefficients = selected_words_model['coefficients']
swm_coefficients.sort('value')
selected_words_model.evaluate(test_data, metric='roc_curve')

baby_products = products[products['name'] == 'Baby Trend Diaper Champ']
baby_products['predicted_sentiment'] = selected_words_model.predict(baby_products, output_type='probability')
baby_products = baby_products.sort('predicted_sentiment', ascending=False)
baby_products.head()
baby_products['review'][0]

baby_products['predicted_sentiment'] = sentiment_model.predict(baby_products, output_type='probability')
baby_products = baby_products.sort('predicted_sentiment', ascending=False)
baby_products['review'][0]
baby_products.head()
dato/recommendations/Analyzing product sentiment.ipynb
jrrembert/cybernetic-organism
gpl-2.0
Reading the data

The astropy package has an I/O package to simplify reading and writing a number of popular formats common in astronomy.
from astropy.io import ascii
notebooks/Haiti2016/spectrum.ipynb
astroumd/GradMap
gpl-3.0
We obtained a very simple ascii table with two columns, wavelength and intensity, of a spectrum at a particular point in the sky. The astropy module has some classes to deal with such ascii tables. Although this is not a very reliable way to store data, it is very simple and portable. Almost any software will be able to deal with such ascii files. Your favorite spreadsheet program probably prefers "csv" (comma-separated-values), where the space is replaced by a comma.
data = ascii.read('../data/extract.dat')
print(data)
x = data['col1']
y = data['col2']
notebooks/Haiti2016/spectrum.ipynb
astroumd/GradMap
gpl-3.0
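For the csv variant mentioned above, the stdlib reader is enough; a toy sketch with an in-memory two-column file (the data here are made up):

```python
import csv
import io

# hypothetical two-column csv: wavelength, intensity
text = "4860.0,1.23\n4861.0,2.34\n"

x, y = [], []
for wavelength, intensity in csv.reader(io.StringIO(text)):
    x.append(float(wavelength))
    y.append(float(intensity))

print(x, y)  # → [4860.0, 4861.0] [1.23, 2.34]
```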
Plotting the basics
import matplotlib.pyplot as plt
plt.plot(x,y)
notebooks/Haiti2016/spectrum.ipynb
astroumd/GradMap
gpl-3.0
There is a very strong line around 4865 Angstrom dominating this plot, and many weaker lines. If we rescale this plot and look what is near the baseline, we get to see these weaker lines much better:
plt.plot(x,y)
z = y*0.0
plt.plot(x,z)
plt.ylim(-0.5,2.0)
notebooks/Haiti2016/spectrum.ipynb
astroumd/GradMap
gpl-3.0
<h3>Descriptive statistics</h3>
# Output df.shape
print(dataset.shape)

# Output df.dtypes -- the last column contains the classification R or M
set_option('display.max_rows', 500)
print(dataset.dtypes)

# Output df.head with an enlarged display width
set_option('display.width', 100)
print(dataset.head(20))

# Output df.describe() with 4 decimal places
set_option('precision', 4)
print(dataset.describe())

# Output the class distribution in column 60
print(dataset.groupby(60).size())
18-05-14-ml-workcamp/sonar-daten/Projekt-Sonardaten-Workcamp-ML.ipynb
mediagit2016/workcamp-maschinelles-lernen-grundlagen
gpl-3.0
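The class-distribution count in the last line (groupby(60).size()) can be mimicked on a plain list with the stdlib; a toy sketch with hypothetical labels:

```python
from collections import Counter

labels = ['R', 'R', 'M', 'R', 'M']  # hypothetical classification column
print(Counter(labels))  # → Counter({'R': 3, 'M': 2})
```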