Let's visualize our regression function together with a scatter plot of the original data set. For this, we use the predicted values.
# visualize data points
plt.scatter(df1.x, df1.y, color="y", marker="o", s=40)
# visualize regression function
plt.plot(descriptiveFeatures1, targetFeature1_predict, color="g")
plt.xlabel('x')
plt.ylabel('y')
plt.title('the data and the regression function')
plt.show()
Apache-2.0
IDS_2020/RegressionSVM/Instruction4-RegressionSVM-without solutions.ipynb
pmnatos/DataScience
Now it is your turn. Build a simple linear regression for the data below. Use col1 as descriptive feature and col2 as target feature. Also plot your results.
df2 = pd.DataFrame({'col1': [770, 677, 428, 410, 371, 504, 1136, 695, 551, 550],
                    'col2': [54, 47, 28, 38, 29, 38, 80, 52, 45, 40]})
#Your turn
Evaluation

Usually, the model and its predictions alone are not sufficient. In the following we want to evaluate our models. Let's start by computing their error. The sklearn.metrics package contains several error metrics, such as
* Mean squared error
* Mean absolute error
* Mean squared log error
* Median absolute error
# computing the mean squared error of the first model
print("Mean squared error model 1: %.2f" % mean_squared_error(targetFeature1, targetFeature1_predict))
Mean squared error model 1: 0.56
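The other metrics listed above are called in exactly the same way. A short sketch, using small example arrays rather than the notebook's variables:

```python
from sklearn.metrics import (mean_squared_error, mean_absolute_error,
                             mean_squared_log_error, median_absolute_error)

y_true = [3.0, 5.0, 2.5, 7.0]
y_pred = [2.5, 5.0, 4.0, 8.0]

print("MSE:   %.3f" % mean_squared_error(y_true, y_pred))
print("MAE:   %.3f" % mean_absolute_error(y_true, y_pred))
print("MSLE:  %.3f" % mean_squared_log_error(y_true, y_pred))
print("MedAE: %.3f" % median_absolute_error(y_true, y_pred))
```

Note that mean_squared_log_error only accepts non-negative values, so it is unsuitable for targets that can be negative.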
We can also visualize the errors:
plt.scatter(targetFeature1_predict, (targetFeature1 - targetFeature1_predict) ** 2, color="blue", s=10)
# plot a line to visualize zero error
plt.hlines(y=0, xmin=0, xmax=15, linewidth=2)
plt.title("Squared errors Model 1")
plt.show()
Now it is your turn. Compute the mean squared error and visualize the squared errors. Play around using different error metrics.
#Your turn
Handling multiple descriptive features at once - Multiple linear regression

In most cases we will have more than one descriptive feature. As an example we use a data set shipped with the scikit-learn package, which describes housing prices in Boston based on several attributes. Note that in this format the data is already split into descriptive features and a target feature.
from sklearn import datasets  # imports datasets from scikit-learn

df3 = datasets.load_boston()
# The sklearn package provides the data already split into a set of descriptive
# features and a target feature. We can easily transform this format into
# pandas data frames as used above.
descriptiveFeatures3 = pd.DataFrame(df3.data, columns=df3.feature_names)
targetFeature3 = pd.DataFrame(df3.target, columns=['target'])
print('Descriptive features:')
print(descriptiveFeatures3.head())
print('Target feature:')
print(targetFeature3.head())
Descriptive features:
      CRIM    ZN  INDUS  CHAS    NOX     RM   AGE     DIS  RAD    TAX  \
0  0.00632  18.0   2.31   0.0  0.538  6.575  65.2  4.0900  1.0  296.0
1  0.02731   0.0   7.07   0.0  0.469  6.421  78.9  4.9671  2.0  242.0
2  0.02729   0.0   7.07   0.0  0.469  7.185  61.1  4.9671  2.0  242.0
3  0.03237   0.0   2.18   0.0  0.458  6.998  45.8  6.0622  3.0  222.0
4  0.06905   0.0   2.18   0.0  0.458  7.147  54.2  6.0622  3.0  222.0

   PTRATIO       B  LSTAT
0     15.3  396.90   4.98
1     17.8  396.90   9.14
2     17.8  392.83   4.03
3     18.7  394.63   2.94
4     18.7  396.90   5.33
Target feature:
   target
0    24.0
1    21.6
2    34.7
3    33.4
4    36.2
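A caveat for readers on a recent installation: load_boston was deprecated in scikit-learn 1.0 and removed in 1.2, so the cell above no longer runs there. A synthetic data set can stand in for it; the sketch below uses make_regression and keeps this notebook's variable names (the feature names F0..F12 are made up):

```python
import pandas as pd
from sklearn.datasets import make_regression

# a synthetic stand-in for the Boston data: 506 rows, 13 numeric features
X, y = make_regression(n_samples=506, n_features=13, noise=10.0, random_state=0)
descriptiveFeatures3 = pd.DataFrame(X, columns=[f"F{i}" for i in range(13)])
targetFeature3 = pd.DataFrame(y, columns=['target'])
print(descriptiveFeatures3.shape, targetFeature3.shape)
```

The regression cells below work unchanged on these frames, since they only rely on the descriptive/target split.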
To predict the housing price we will use a Multiple Linear Regression model. In Python this is very straightforward: we use the same function as for simple linear regression, but our set of descriptive features now contains more than one element (see above).
classifier = LinearRegression()
model3 = classifier.fit(descriptiveFeatures3, targetFeature3)
targetFeature3_predict = classifier.predict(descriptiveFeatures3)
print('Coefficients: \n', classifier.coef_)
print('Intercept: \n', classifier.intercept_)
print("Mean squared error: %.2f" % mean_squared_error(targetFeature3, targetFeature3_predict))
Coefficients:
 [[-1.08011358e-01  4.64204584e-02  2.05586264e-02  2.68673382e+00
  -1.77666112e+01  3.80986521e+00  6.92224640e-04 -1.47556685e+00
   3.06049479e-01 -1.23345939e-02 -9.52747232e-01  9.31168327e-03
  -5.24758378e-01]]
Intercept:
 [36.45948839]
Mean squared error: 21.89
As you can see above, we have a coefficient for each descriptive feature.

Handling categorical descriptive features

So far we have only encountered numerical descriptive features, but data sets can also contain categorical attributes. The regression function can only handle numerical input, so we have to transform our categorical data into numerical data. One way is one-hot encoding, as explained in the lecture: we introduce a 0/1 feature for every possible value of our categorical attribute. For adequate data, another possibility is to replace each categorical value by a numerical value, thereby imposing an ordering. Popular ways to achieve these transformations are
* the get_dummies function of pandas,
* the OneHotEncoder of scikit-learn,
* the LabelEncoder of scikit-learn.
After encoding the attributes we can apply our regular regression function.
# example using pandas
df4 = pd.DataFrame({'A': ['a', 'b', 'c'], 'B': ['c', 'b', 'a']})
one_hot_pd = pd.get_dummies(df4)
print(one_hot_pd)

# example using scikit-learn
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

# apply the one-hot encoder
encoder = OneHotEncoder(categories='auto')
encoder.fit(df4)
df4_OneHot = encoder.transform(df4).toarray()
print('Transformed by One-hot Encoding: ')
print(df4_OneHot)

# encode labels with values between 0 and n_classes-1
encoder = LabelEncoder()
df4_LE = df4.apply(encoder.fit_transform)
print('Replacing categories by numerical labels: ')
print(df4_LE.head())
Transformed by One-hot Encoding: [[1. 0. 0. 0. 0. 1.] [0. 1. 0. 0. 1. 0.] [0. 0. 1. 1. 0. 0.]] Replacing categories by numerical labels: A B 0 0 2 1 1 1 2 2 0
Now it is your turn. Perform linear regression using the data set given below. Don't forget to transform your categorical descriptive features. The rental price attribute represents the target variable.
df5 = pd.DataFrame({'Size': [500, 550, 620, 630, 665],
                    'Floor': [4, 7, 9, 5, 8],
                    'Energy rating': ['C', 'A', 'A', 'B', 'C'],
                    'Rental price': [320, 380, 400, 390, 385]})
#Your turn
Predicting a categorical target value - Logistic regression

We might also encounter data sets where the target feature is categorical. Here we do not transform it into numerical values; instead, we use a logistic regression function. Luckily, sklearn provides a suitable function that is similar to its linear equivalent. As with linear regression, we can fit a logistic regression on a single descriptive variable as well as on multiple variables.
# importing the dataset
iris = pd.read_csv('iris.csv')
print('First look at the data set: ')
print(iris.head())

# defining the descriptive and target features
descriptiveFeatures_iris = iris[['sepal_length']]  # we only use the attribute 'sepal_length' in this example
targetFeature_iris = iris['species']               # we want to predict the 'species' of iris

from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(solver='liblinear', multi_class='ovr')
classifier.fit(descriptiveFeatures_iris, targetFeature_iris)
targetFeature_iris_pred = classifier.predict(descriptiveFeatures_iris)
print('Coefficients: \n', classifier.coef_)
print('Intercept: \n', classifier.intercept_)
First look at the data set:
   sepal_length  sepal_width  petal_length  petal_width species
0           5.1          3.5           1.4          0.2  setosa
1           4.9          3.0           1.4          0.2  setosa
2           4.7          3.2           1.3          0.2  setosa
3           4.6          3.1           1.5          0.2  setosa
4           5.0          3.6           1.4          0.2  setosa
Coefficients:
 [[-0.86959145]
 [ 0.01223362]
 [ 0.57972675]]
Intercept:
 [ 4.16186636 -0.74244291 -3.9921824 ]
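Beyond the hard labels returned by predict, a fitted LogisticRegression also exposes per-class probabilities via predict_proba and the class labels via classes_. A small self-contained sketch (it loads iris from sklearn rather than from the CSV used above, and uses default solver settings):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

iris_data = load_iris()
X = iris_data.data[:, :1]                       # only sepal length, as in the example above
y = iris_data.target_names[iris_data.target]    # species names as labels

clf = LogisticRegression(max_iter=1000)
clf.fit(X, y)
print(clf.classes_)              # the three species, in sorted order
print(clf.predict_proba(X[:3]))  # one probability per class; each row sums to 1
```

predict simply returns the class with the highest probability in each row.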
Now it is your turn. In the example above we only used the first attribute as descriptive variable. Change the example such that all available attributes are used.
#Your turn
Note that the regression classifier (both logistic and non-logistic) can be tweaked using several parameters, including, but not limited to, non-linear regression. Check out the documentation for details and feel free to play around!

Support Vector Machines

Aside from regression models, the sklearn package also provides a function for training support vector machines. Looking at the example below, we see that they can be trained in a similar way. We still use the iris data set for illustration.
from sklearn.svm import SVC

# define descriptive and target features as before
descriptiveFeatures_iris = iris[['sepal_length', 'sepal_width', 'petal_length', 'petal_width']]
targetFeature_iris = iris['species']

# this time, we train an SVM classifier
classifier = SVC(C=1, kernel='linear', gamma='auto')
classifier.fit(descriptiveFeatures_iris, targetFeature_iris)
targetFeature_iris_predict = classifier.predict(descriptiveFeatures_iris)
targetFeature_iris_predict[0:5]  # show the first 5 predicted values
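Support vector machines can also be used for (non-linear) regression via sklearn's SVR class. A minimal sketch on synthetic data, assuming an RBF kernel; the data and hyperparameters here are illustrative, not taken from the notebook:

```python
import numpy as np
from sklearn.svm import SVR

# noisy-free sine curve as a toy regression target
rng = np.random.RandomState(0)
X = np.sort(5 * rng.rand(80, 1), axis=0)
y = np.sin(X).ravel()

# RBF kernel lets the model fit the non-linear curve
svr = SVR(kernel='rbf', C=100, gamma='scale')
svr.fit(X, y)
y_fit = svr.predict(X)
print("max abs error: %.3f" % np.max(np.abs(y - y_fit)))
```

Swapping kernel='rbf' for 'linear' here would force a straight-line fit, which is exactly the trade-off the text alludes to.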
As explained in the lecture, a support vector machine is defined by its support vectors. In the sklearn package we can access them and their properties very easily:
* support_: the indices of the support vectors
* support_vectors_: the support vectors themselves
* n_support_: the number of support vectors for each class
print('Indices of support vectors:')
print(classifier.support_)
print('The support vectors:')
print(classifier.support_vectors_)
print('The number of support vectors for each class:')
print(classifier.n_support_)
Indices of support vectors:
[ 23  24  41  52  56  63  66  68  70  72  76  77  83  84  98 106 110 119
 123 126 127 129 133 138 146 147 149]
The support vectors:
[[5.1 3.3 1.7 0.5]
 [4.8 3.4 1.9 0.2]
 [4.5 2.3 1.3 0.3]
 [6.9 3.1 4.9 1.5]
 [6.3 3.3 4.7 1.6]
 [6.1 2.9 4.7 1.4]
 [5.6 3.  4.5 1.5]
 [6.2 2.2 4.5 1.5]
 [5.9 3.2 4.8 1.8]
 [6.3 2.5 4.9 1.5]
 [6.8 2.8 4.8 1.4]
 [6.7 3.  5.  1.7]
 [6.  2.7 5.1 1.6]
 [5.4 3.  4.5 1.5]
 [5.1 2.5 3.  1.1]
 [4.9 2.5 4.5 1.7]
 [6.5 3.2 5.1 2. ]
 [6.  2.2 5.  1.5]
 [6.3 2.7 4.9 1.8]
 [6.2 2.8 4.8 1.8]
 [6.1 3.  4.9 1.8]
 [7.2 3.  5.8 1.6]
 [6.3 2.8 5.1 1.5]
 [6.  3.  4.8 1.8]
 [6.3 2.5 5.  1.9]
 [6.5 3.  5.2 2. ]
 [5.9 3.  5.1 1.8]]
The number of support vectors for each class:
[ 3 12 12]
We can also calculate the distance of the data points to the separating hyperplane using the decision_function(X) method, while score(X, y) calculates the mean accuracy of the classification. The classification report shows metrics such as precision, recall, f1-score and support. You will learn more about these quality metrics in a few lectures.
from sklearn.metrics import classification_report

classifier.decision_function(descriptiveFeatures_iris)
print('Accuracy: \n', classifier.score(descriptiveFeatures_iris, targetFeature_iris))
print('Classification report: \n')
print(classification_report(targetFeature_iris, targetFeature_iris_predict))
Accuracy:
 0.9933333333333333
Classification report:

              precision    recall  f1-score   support

      setosa       1.00      1.00      1.00        50
  versicolor       1.00      0.98      0.99        50
   virginica       0.98      1.00      0.99        50

    accuracy                           0.99       150
   macro avg       0.99      0.99      0.99       150
weighted avg       0.99      0.99      0.99       150
Grade distribution by faculty (used in the appendix)
# note: no NaNs
print(df['Fakultet'].unique())

# number of courses by faculty
print(sum(df['Fakultet'] == 'Det Natur- og Biovidenskabelige Fakultet'))
print(sum(df['Fakultet'] == 'Det Samfundsvidenskabelige Fakultet'))
print(sum(df['Fakultet'] == 'Det Humanistiske Fakultet'))
print(sum(df['Fakultet'] == 'Det Sundhedsvidenskabelige Fakultet'))
print(sum(df['Fakultet'] == 'Det Juridiske Fakultet'))
print(sum(df['Fakultet'] == 'Det Teologiske Fakultet'))

# number of grades given for each faculty
list_number_of_grades_faculties = []
for i in tqdm_notebook(df['Fakultet'].unique()):
    df_number_grades = df[df['Fakultet'] == i]
    list_number_of_grades_faculties.append(
        int(sum(df_number_grades[[12, 10, 7, 4, 2, 0, -3]].sum(skipna=True))))
list_number_of_grades_faculties;

# number of passing grades given for each faculty
list_no_fail_number_of_grades_faculties = []
for i in tqdm_notebook(df['Fakultet'].unique()):
    df_number_grades = df[df['Fakultet'] == i]
    list_no_fail_number_of_grades_faculties.append(
        int(sum(df_number_grades[[12, 10, 7, 4, 2]].sum(skipna=True))))
list_no_fail_number_of_grades_faculties
MIT
Exam project/Jacob legemappe/ANALYSIS_JENS_CLEAN.ipynb
tnv875/Group18-NoTeeth
Grade distribution by faculty, weighted by ECTS
# We need to weight the grades by ECTS points; otherwise small courses
# get the same weight as bigger courses.
df['Weigthed_m3'] = df['Credit_edit'] * df[-3]
df['Weigthed_00'] = df['Credit_edit'] * df[0]
df['Weigthed_02'] = df['Credit_edit'] * df[2]
df['Weigthed_4'] = df['Credit_edit'] * df[4]
df['Weigthed_7'] = df['Credit_edit'] * df[7]
df['Weigthed_10'] = df['Credit_edit'] * df[10]
df['Weigthed_12'] = df['Credit_edit'] * df[12]
df[['Credit_edit', -3, 'Weigthed_m3', 0, 'Weigthed_00', 2, 'Weigthed_02',
    4, 'Weigthed_4', 7, 'Weigthed_7', 10, 'Weigthed_10', 12, 'Weigthed_12']];

y_ects_inner = []
y_ects = []
x = ['-3', '00', '02', '4', '7', '10', '12']
# looking at each faculty
for i in tqdm_notebook(df['Fakultet'].unique()):
    df_faculty = df[df['Fakultet'] == i]
    # using the weighted grades this time
    for k in ['Weigthed_m3', 'Weigthed_00', 'Weigthed_02', 'Weigthed_4',
              'Weigthed_7', 'Weigthed_10', 'Weigthed_12']:
        y_ects_inner.append(df_faculty[k].sum(skipna=True))
    y_ects.append(y_ects_inner)
    y_ects_inner = []

# calculate frequencies, running through each faculty
y_ects_freq_inner = []
y_ects_freq = []
for i in range(len(y_ects)):
    for q in range(len(y_ects[i])):
        y_ects_freq_inner.append(y_ects[i][q] / sum(y_ects[i]))
    y_ects_freq.append(y_ects_freq_inner)
    y_ects_freq_inner = []

# This figure is used in the analysis. It uses the WEIGHTED grades.
# The six identical subplot blocks are collapsed into a loop.
f, ax = plt.subplots(figsize=(15, 10))
for panel in range(6):
    plt.subplot(2, 3, panel + 1)
    plt.title(Faculty_names[panel], fontsize=16, weight='bold')
    plt.ylim([0, 0.30])
    plt.grid(axis='y', zorder=0)
    if panel in (0, 3):  # left column gets the y label
        plt.ylabel('Frequency', fontsize=14)
    plt.annotate('Number of grades given: ' + str(list_number_of_grades_faculties[panel]),
                 (0, 0), (0, -20), fontsize=13, xycoords='axes fraction',
                 textcoords='offset points', va='top')
    plt.bar(x, y_ects_freq[panel], width=0.93, edgecolor='black', zorder=3)
f.savefig('histogram_gades_split_faculty_ECTS_weight.png')
Faculties, weighted against ECTS - dropping -3 and 00
y_no_fail_ects = []
x_no_fail = ['02', '4', '7', '10', '12']
# looking at each faculty
for i in tqdm_notebook(df['Fakultet'].unique()):
    df_faculty = df[df['Fakultet'] == i]
    y_no_fail_ects_inner = []
    for k in ['Weigthed_02', 'Weigthed_4', 'Weigthed_7', 'Weigthed_10', 'Weigthed_12']:
        y_no_fail_ects_inner.append(df_faculty[k].sum(skipna=True))
    y_no_fail_ects.append(y_no_fail_ects_inner)
y_no_fail_ects

# calculate frequencies, running through each faculty
# (the original iterated over y_ects from the previous cell; iterating over
# y_no_fail_ects itself is what is meant)
y_no_fail_ects_freq = []
for i in range(len(y_no_fail_ects)):
    y_no_fail_ects_freq_inner = []
    for q in range(len(y_no_fail_ects[i])):
        y_no_fail_ects_freq_inner.append(y_no_fail_ects[i][q] / sum(y_no_fail_ects[i]))
    y_no_fail_ects_freq.append(y_no_fail_ects_freq_inner)
y_no_fail_ects_freq

# This figure is used in the analysis.
# The six identical subplot blocks are collapsed into a loop.
f, ax = plt.subplots(figsize=(15, 10))
for panel in range(6):
    plt.subplot(2, 3, panel + 1)
    plt.title(Faculty_names[panel], fontsize=16, weight='bold')
    plt.ylim([0, 0.35])
    plt.grid(axis='y', zorder=0)
    if panel in (0, 3):  # left column gets the y label
        plt.ylabel('Frequency', fontsize=14)
    plt.annotate('Number of passing grades: ' + str(list_no_fail_number_of_grades_faculties[panel]),
                 (0, 0), (0, -20), fontsize=13, xycoords='axes fraction',
                 textcoords='offset points', va='top')
    plt.bar(x_no_fail, y_no_fail_ects_freq[panel], width=0.93, edgecolor='black', zorder=3)
f.savefig('histogram_gades_split_faculty_ECTS_weight_NO_FAIL.png')
Calculating GPA by faculty
from math import isnan
import math

# calculate the GPA when ONLY PASSED exams are counted
snit = []
for i in range(0, len(df)):
    counts = {12: df[12][i], 10: df[10][i], 7: df[7][i], 4: df[4][i], 2: df[2][i]}
    # drop grades whose count is NaN (the original built this dict but then
    # accidentally used the uncleaned one)
    clean_counts = {k: v for k, v in counts.items() if not isnan(v)}
    num = sum(grade * count for grade, count in clean_counts.items())
    den = sum(clean_counts.values())
    snit.append(num / den if den else float('nan'))
df["Snit"] = snit

# calculate the GPA of one faculty, weighted by attendance and ECTS
def gpa(df, string):
    x_gpa, x_sho, x_ect = [], [], []
    for i in range(0, len(df)):
        if df["Fakultet"][i] == string and not math.isnan(df["Snit"][i]):
            x_gpa.append(float(df["Snit"][i]))
            x_sho.append(float(df["Fremmødte"][i]))
            x_ect.append(float(df["Credit_edit"][i]))
    den = 0
    num = 0
    for i in range(0, len(x_gpa)):
        den = x_sho[i] * x_ect[i] + den
        num = x_gpa[i] * x_sho[i] * x_ect[i] + num
    return num / den

# looping through each faculty
for i in df['Fakultet'].unique():
    print(gpa(df, i))
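The row-wise loops above can also be expressed with vectorized pandas operations. A minimal sketch on a toy frame: the column names (Fakultet, the grade-count columns, Fremmødte, Credit_edit) mirror this notebook's, but the data is made up:

```python
import pandas as pd

# toy data: two courses per faculty, with counts of each passing grade awarded
df_toy = pd.DataFrame({
    'Fakultet': ['A', 'A', 'B', 'B'],
    12: [2, 1, 0, 3], 10: [1, 2, 1, 1], 7: [3, 0, 2, 2],
    4: [0, 1, 1, 0], 2: [1, 0, 0, 1],
    'Fremmødte': [7, 4, 4, 7],          # students who showed up
    'Credit_edit': [7.5, 15.0, 7.5, 7.5],  # ECTS credits
})

grades = [12, 10, 7, 4, 2]
# per-course mean of passing grades (the 'Snit' column above)
df_toy['Snit'] = df_toy[grades].mul(grades).sum(axis=1) / df_toy[grades].sum(axis=1)

# faculty GPA weighted by attendance * ECTS
w = df_toy['Fremmødte'] * df_toy['Credit_edit']
gpa_by_faculty = (df_toy['Snit'] * w).groupby(df_toy['Fakultet']).sum() / w.groupby(df_toy['Fakultet']).sum()
print(gpa_by_faculty)
```

On real data a skipna/NaN guard would be needed, as in the loop version above; this sketch assumes complete counts.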
Type of assessments broken down by faculties
list_type_ass = list(df['Type of assessmet_edit'].unique())
y_type_ass_inner = []
y_type_ass = []
# looking at each faculty
for i in tqdm_notebook(df['Fakultet'].unique()):
    df_faculty = df[df['Fakultet'] == i]
    # running through each type of assessment
    for k in list_type_ass:
        # summing the number of passed people over all courses (per faculty),
        # broken down by type of assessment
        y_type_ass_inner.append(
            df_faculty[df_faculty['Type of assessmet_edit'] == k]['Antal bestået'].sum(skipna=True))
    y_type_ass.append(y_type_ass_inner)
    y_type_ass_inner = []

# creating the categories we want to plot
categories = []
for i in range(len(df['Fakultet'].unique())):
    categories_inner = []
    # oral
    categories_inner.append(y_type_ass[i][1])
    # written, not under invigilation
    categories_inner.append(y_type_ass[i][2])
    # written, under invigilation
    categories_inner.append(y_type_ass[i][4])
    # rest
    categories_inner.append(y_type_ass[i][0] + y_type_ass[i][3] + y_type_ass[i][5]
                            + y_type_ass[i][6] + y_type_ass[i][7] + y_type_ass[i][8]
                            + y_type_ass[i][9])
    categories.append(categories_inner)

# calculate shares, running through each faculty
list_categories_share = []
for i in range(len(categories)):
    categories_share_inner = []
    # for each faculty, calculate the share of each type of assessment
    for k in range(len(categories[i])):
        categories_share = categories[i][k] / sum(categories[i]) * 100  # times 100 for %
        categories_share_inner.append(categories_share)
    list_categories_share.append(categories_share_inner)

# converting the list to a DataFrame
# (equivalent to the original's double transpose: faculties as rows,
# assessment types as columns)
dfcat = pd.DataFrame(list_categories_share)
dfcat.columns = ['Oral', 'Written not invigilation', 'Written invigilation', 'Rest']
dfcat.index = ['Science', 'Social Sciences', 'Humanities',
               'Health & Medical Sciences', 'Law', 'Theology']

colors = ["#011f4b", "#005b96", "#6497b1", '#b3cde0']
dfcat.plot(kind='bar', stacked=True, color=colors, fontsize=12)
plt.rcParams["figure.figsize"] = [15, 15]
plt.legend(bbox_to_anchor=(0, 1.02, 1, 0.2), loc="lower left", mode="expand",
           borderaxespad=0, ncol=2, fontsize=12)
plt.tight_layout()
plt.savefig('stacked_bar_share_ass.png')
Step1. Import and Load Data
!pip install -q git+https://github.com/huggingface/transformers.git
!pip install -q datasets

from datasets import load_dataset
emotions = load_dataset("emotion")

import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
Apache-2.0
SentimentalAnalysisWithGPTNeo.ipynb
bhadreshpsavani/ExploringSentimentalAnalysis
Step2. Preprocess Data
from transformers import AutoTokenizer

model_name = "EleutherAI/gpt-neo-125M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# GPT-Neo ships without a padding token, so we add one before tokenizing
tokenizer.add_special_tokens({'pad_token': '<|pad|>'})

def tokenize(batch):
    return tokenizer(batch["text"], padding=True, truncation=True)

emotions_encoded = emotions.map(tokenize, batched=True, batch_size=None)

from transformers import AutoModelForSequenceClassification

num_labels = 6
model = (AutoModelForSequenceClassification
         .from_pretrained(model_name, num_labels=num_labels)
         .to(device))
# account for the added pad token
model.resize_token_embeddings(len(tokenizer))
model.config.pad_token_id = tokenizer.pad_token_id

emotions_encoded["train"].features
emotions_encoded.set_format("torch", columns=["input_ids", "attention_mask", "label"])
emotions_encoded["train"].features

from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(pred):
    labels = pred.label_ids
    preds = pred.predictions.argmax(-1)
    f1 = f1_score(labels, preds, average="weighted")
    acc = accuracy_score(labels, preds)
    return {"accuracy": acc, "f1": f1}

from transformers import Trainer, TrainingArguments

batch_size = 2
logging_steps = len(emotions_encoded["train"]) // batch_size
# load_best_model_at_end requires the save strategy to match the
# evaluation strategy, which the original cell omitted
training_args = TrainingArguments(output_dir="results",
                                  num_train_epochs=2,
                                  learning_rate=2e-5,
                                  per_device_train_batch_size=batch_size,
                                  per_device_eval_batch_size=batch_size,
                                  load_best_model_at_end=True,
                                  metric_for_best_model="f1",
                                  weight_decay=0.01,
                                  evaluation_strategy="epoch",
                                  save_strategy="epoch",
                                  disable_tqdm=False,
                                  logging_steps=logging_steps)

trainer = Trainer(model=model,
                  args=training_args,
                  compute_metrics=compute_metrics,
                  train_dataset=emotions_encoded["train"],
                  eval_dataset=emotions_encoded["validation"])
trainer.train()

results = trainer.evaluate()
results

preds_output = trainer.predict(emotions_encoded["validation"])
preds_output.metrics

import numpy as np
from sklearn.metrics import ConfusionMatrixDisplay

y_valid = np.array(emotions_encoded["validation"]["label"])
y_preds = np.argmax(preds_output.predictions, axis=1)
labels = ['sadness', 'joy', 'love', 'anger', 'fear', 'surprise']
# sklearn's plot_confusion_matrix expected a fitted estimator; for raw
# predictions use ConfusionMatrixDisplay.from_predictions instead
ConfusionMatrixDisplay.from_predictions(y_valid, y_preds, display_labels=labels)

model.save_pretrained('./model')
tokenizer.save_pretrained('./model')
parameters
CLUSTER_ALGO = 'KMedoids'
C_SHAPE = 'circle'
#C_SHAPE = 'ellipse'
#N_CLUSTERS = [50, 300, 1000]
N_CLUSTERS = [3]
CLUSTERS_STD = 0.3
N_P_CLUSTERS = [3, 30, 300, 3000]
N_CLUSTERS_S = N_CLUSTERS[0]
INNER_FOLDS = 3
OUTER_FOLDS = 3
MIT
Cluster/kmeans-kmedoids/KMedoids-blobs-3-0.3-varypts.ipynb
bcottman/photon_experiments
includes
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.datasets import load_iris
from sklearn.datasets import make_blobs
from sklearn.datasets import make_moons
%load_ext autoreload
%autoreload 2
packages = !conda list
packages
Output registry
from __future__ import print_function
import sys, os

old__file__ = !pwd
__file__ = !cd ../../../photon ;pwd
#__file__ = !pwd
__file__ = __file__[0]
__file__
sys.path.append(__file__)
print(sys.path)
os.chdir(old__file__[0])
!pwd
old__file__[0]

import seaborn as sns; sns.set()  # for plot styling
import numpy as np
import pandas as pd
from math import floor, ceil
from sklearn.model_selection import KFold
from sklearn.manifold import TSNE
import itertools
import matplotlib.pyplot as plt
%matplotlib inline

# set font size of labels on matplotlib plots
plt.rc('font', size=16)
# set style of plots
sns.set_style('white')
# define a custom palette
PALLET = ['#40111D', '#DCD5E4', '#E7CC74', '#39C8C6', '#AC5583', '#D3500C',
          '#FFB139', '#98ADA7', '#AD989E', '#708090', '#6C8570', '#3E534D',
          '#0B8FD3', '#0B47D3', '#96D30B', '#630C3A', '#F1D0AF', '#64788B',
          '#8B7764', '#7A3C5D', '#77648B', '#eaff39', '#39ff4e', '#4e39ff',
          '#ff4e39', '#87ff39', '#ff3987']
N_PALLET = len(PALLET)
sns.set_palette(PALLET)
sns.palplot(PALLET)

from clusim.clustering import Clustering, remap2match
import clusim.sim as sim
from photonai.base import Hyperpipe, PipelineElement, Preprocessing, OutputSettings
from photonai.optimization import FloatRange, Categorical, IntegerRange
from photonai.base.photon_elements import PhotonRegistry
from photonai.visual.graphics import plot_cm
from photonai.photonlogger.logger import logger
#from photonai.base.registry.registry import PhotonRegistry
/opt/conda/lib/python3.7/site-packages/sklearn/externals/joblib/__init__.py:15: DeprecationWarning: sklearn.externals.joblib is deprecated in 0.21 and will be removed in 0.23. Please import this functionality directly from joblib, which can be installed with: pip install joblib. If this warning is raised when loading pickled models, you may need to re-serialize those models with scikit-learn 0.21+. warnings.warn(msg, category=DeprecationWarning)
function definitions
def yield_parameters_ellipse(n_p_clusters):
    cluster_std = CLUSTERS_STD
    for n_p_cluster in n_p_clusters:
        for n_cluster in N_CLUSTERS:
            print('ncluster:', n_cluster)
            n_cluster_std = [cluster_std for k in range(n_cluster)]
            n_samples = [n_p_cluster for k in range(n_cluster)]
            data_X, data_y = make_blobs(n_samples=n_samples,
                                        cluster_std=n_cluster_std,
                                        random_state=0)
            transformation = [[0.6, -0.6], [-0.4, 0.8]]
            X_ellipse = np.dot(data_X, transformation)
            yield [X_ellipse, data_y, n_cluster]

def yield_parameters(n_p_clusters):
    cluster_std = CLUSTERS_STD
    for n_p_cluster in n_p_clusters:
        for n_cluster in N_CLUSTERS:
            n_cluster_std = [cluster_std for k in range(n_cluster)]
            n_samples = [n_p_cluster for k in range(n_cluster)]
            data_X, data_y = make_blobs(n_samples=n_samples,
                                        cluster_std=n_cluster_std,
                                        random_state=0)
            yield [data_X, data_y, n_cluster]

def results_to_df(results):
    ll = []
    for obj in results:
        ll.append([obj.operation, obj.value, obj.metric_name])
    _results = pd.DataFrame(ll).pivot(index=2, columns=0, values=1)
    _results.columns = ['Mean', 'STD']
    return _results

def cluster_plot(my_pipe, data_X, n_cluster, PALLET):
    y_pred = my_pipe.predict(data_X)
    data = pd.DataFrame(data_X[:, 0], columns=['x'])
    data['y'] = data_X[:, 1]
    data['labels'] = y_pred
    facet = sns.lmplot(data=data, x='x', y='y', hue='labels',
                       aspect=1.0, height=10, fit_reg=False,
                       legend=True, legend_out=True)
    customPalette = PALLET  # *ceil((n_cluster/N_PALLET + 2))
    for i, label in enumerate(np.sort(data['labels'].unique())):
        plt.annotate(label,
                     data.loc[data['labels'] == label, ['x', 'y']].mean(),
                     horizontalalignment='center', verticalalignment='center',
                     size=5, weight='bold', color='white',
                     backgroundcolor=customPalette[i])
    plt.show()
    return y_pred

def simple_output(string: str, number: int) -> None:
    print(string, number)
    logger.info(string, number)

__file__ = "exp1.log"
base_folder = os.path.dirname(os.path.abspath(''))
custom_elements_folder = os.path.join(base_folder, 'custom_elements')
custom_elements_folder

registry = PhotonRegistry(custom_elements_folder=custom_elements_folder)
registry.activate()
registry.PHOTON_REGISTRIES, PhotonRegistry.PHOTON_REGISTRIES
registry.activate()
registry.list_available_elements()  # take off last name
PhotonCore ARDRegression sklearn.linear_model.ARDRegression Estimator AdaBoostClassifier sklearn.ensemble.AdaBoostClassifier Estimator AdaBoostRegressor sklearn.ensemble.AdaBoostRegressor Estimator BaggingClassifier sklearn.ensemble.BaggingClassifier Estimator BaggingRegressor sklearn.ensemble.BaggingRegressor Estimator BayesianGaussianMixture sklearn.mixture.BayesianGaussianMixture Estimator BayesianRidge sklearn.linear_model.BayesianRidge Estimator BernoulliNB sklearn.naive_bayes.BernoulliNB Estimator BernoulliRBM sklearn.neural_network.BernoulliRBM Estimator Binarizer sklearn.preprocessing.Binarizer Transformer CCA sklearn.cross_decomposition.CCA Transformer ConfounderRemoval photonai.modelwrapper.ConfounderRemoval.ConfounderRemoval Transformer DecisionTreeClassifier sklearn.tree.DecisionTreeClassifier Estimator DecisionTreeRegressor sklearn.tree.DecisionTreeRegressor Estimator DictionaryLearning sklearn.decomposition.DictionaryLearning Transformer DummyClassifier sklearn.dummy.DummyClassifier Estimator DummyRegressor sklearn.dummy.DummyRegressor Estimator ElasticNet sklearn.linear_model.ElasticNet Estimator ExtraDecisionTreeClassifier sklearn.tree.ExtraDecisionTreeClassifier Estimator ExtraDecisionTreeRegressor sklearn.tree.ExtraDecisionTreeRegressor Estimator ExtraTreesClassifier sklearn.ensemble.ExtraTreesClassifier Estimator ExtraTreesRegressor sklearn.ensemble.ExtraTreesRegressor Estimator FClassifSelectPercentile photonai.modelwrapper.FeatureSelection.FClassifSelectPercentile Transformer FRegressionFilterPValue photonai.modelwrapper.FeatureSelection.FRegressionFilterPValue Transformer FRegressionSelectPercentile photonai.modelwrapper.FeatureSelection.FRegressionSelectPercentile Transformer FactorAnalysis sklearn.decomposition.FactorAnalysis Transformer FastICA sklearn.decomposition.FastICA Transformer FeatureEncoder photonai.modelwrapper.OrdinalEncoder.FeatureEncoder Transformer FunctionTransformer sklearn.preprocessing.FunctionTransformer Transformer 
GaussianMixture sklearn.mixture.GaussianMixture Estimator GaussianNB sklearn.naive_bayes.GaussianNB Estimator GaussianProcessClassifier sklearn.gaussian_process.GaussianProcessClassifier Estimator GaussianProcessRegressor sklearn.gaussian_process.GaussianProcessRegressor Estimator GenericUnivariateSelect sklearn.feature_selection.GenericUnivariateSelect Transformer GradientBoostingClassifier sklearn.ensemble.GradientBoostingClassifier Estimator GradientBoostingRegressor sklearn.ensemble.GradientBoostingRegressor Estimator HuberRegressor sklearn.linear_model.HuberRegressor Estimator ImbalancedDataTransformer photonai.modelwrapper.imbalanced_data_transformer.ImbalancedDataTransformer Transformer IncrementalPCA sklearn.decomposition.IncrementalPCA Transformer KNeighborsClassifier sklearn.neighbors.KNeighborsClassifier Estimator KNeighborsRegressor sklearn.neighbors.KNeighborsRegressor Estimator KerasBaseClassifier photonai.modelwrapper.keras_base_models.KerasBaseClassifier Estimator KerasBaseRegression photonai.modelwrapper.keras_base_models.KerasBaseRegression Estimator KerasDnnClassifier photonai.modelwrapper.keras_dnn_classifier.KerasDnnClassifier Estimator KerasDnnRegressor photonai.modelwrapper.keras_dnn_regressor.KerasDnnRegressor Estimator KernelCenterer sklearn.preprocessing.KernelCenterer Transformer KernelPCA sklearn.decomposition.KernelPCA Transformer KernelRidge sklearn.kernel_ridge.KernelRidge Estimator LabelEncoder photonai.modelwrapper.LabelEncoder.LabelEncoder Transformer Lars sklearn.linear_model.Lars Estimator Lasso sklearn.linear_model.Lasso Estimator LassoFeatureSelection photonai.modelwrapper.FeatureSelection.LassoFeatureSelection Transformer LassoLars sklearn.linear_model.LassoLars Estimator LatentDirichletAllocation sklearn.decomposition.LatentDirichletAllocation Transformer LinearRegression sklearn.linear_model.LinearRegression Estimator LinearSVC sklearn.svm.LinearSVC Estimator LinearSVR sklearn.svm.LinearSVR Estimator LogisticRegression 
sklearn.linear_model.LogisticRegression Estimator MLPClassifier sklearn.neural_network.MLPClassifier Estimator MLPRegressor sklearn.neural_network.MLPRegressor Estimator MaxAbsScaler sklearn.preprocessing.MaxAbsScaler Transformer MinMaxScaler sklearn.preprocessing.MinMaxScaler Transformer MiniBatchDictionaryLearning sklearn.decomposition.MiniBatchDictionaryLearning Transformer MiniBatchSparsePCA sklearn.decomposition.MiniBatchSparsePCA Transformer MultinomialNB sklearn.naive_bayes.MultinomialNB Estimator NMF sklearn.decompositcion.NMF Transformer NearestCentroid sklearn.neighbors.NearestCentroid Estimator Normalizer sklearn.preprocessing.Normalizer Transformer NuSVC sklearn.svm.NuSVC Estimator NuSVR sklearn.svm.NuSVR Estimator OneClassSVM sklearn.svm.OneClassSVM Estimator PCA sklearn.decomposition.PCA Transformer PLSCanonical sklearn.cross_decomposition.PLSCanonical Transformer PLSRegression sklearn.cross_decomposition.PLSRegression Transformer PLSSVD sklearn.cross_decomposition.PLSSVD Transformer PassiveAggressiveClassifier sklearn.linear_model.PassiveAggressiveClassifier Estimator PassiveAggressiveRegressor sklearn.linear_model.PassiveAggressiveRegressor Estimator Perceptron sklearn.linear_model.Perceptron Estimator PhotonMLPClassifier photonai.modelwrapper.PhotonMLPClassifier.PhotonMLPClassifier Estimator PhotonOneClassSVM photonai.modelwrapper.PhotonOneClassSVM.PhotonOneClassSVM Estimator PhotonTestXPredictor photonai.test.processing_tests.results_tests.XPredictor Estimator PhotonVotingClassifier photonai.modelwrapper.Voting.PhotonVotingClassifier Estimator PhotonVotingRegressor photonai.modelwrapper.Voting.PhotonVotingRegressor Estimator PolynomialFeatures sklearn.preprocessing.PolynomialFeatures Transformer PowerTransformer sklearn.preprocessing.PowerTransformer Transformer QuantileTransformer sklearn.preprocessing.QuantileTransformer Transformer RANSACRegressor sklearn.linear_model.RANSACRegressor Estimator RFE sklearn.feature_selection.RFE Transformer RFECV 
sklearn.feature_selection.RFECV Transformer RadiusNeighborsClassifier sklearn.neighbors.RadiusNeighborsClassifier Estimator RadiusNeighborsRegressor sklearn.neighbors.RadiusNeighborsRegressor Estimator RandomForestClassifier sklearn.ensemble.RandomForestClassifier Estimator RandomForestRegressor sklearn.ensemble.RandomForestRegressor Estimator RandomTreesEmbedding sklearn.ensemble.RandomTreesEmbedding Transformer RangeRestrictor photonai.modelwrapper.RangeRestrictor.RangeRestrictor Estimator Ridge sklearn.linear_model.Ridge Estimator RidgeClassifier sklearn.linear_model.RidgeClassifier Estimator RobustScaler sklearn.preprocessing.RobustScaler Transformer SGDClassifier sklearn.linear_model.SGDClassifier Estimator SGDRegressor sklearn.linear_model.SGDRegressor Estimator SVC sklearn.svm.SVC Estimator SVR sklearn.svm.SVR Estimator SamplePairingClassification photonai.modelwrapper.SamplePairing.SamplePairingClassification Transformer SamplePairingRegression photonai.modelwrapper.SamplePairing.SamplePairingRegression Transformer SelectFdr sklearn.feature_selection.SelectFdr Transformer SelectFpr sklearn.feature_selection.SelectFpr Transformer SelectFromModel sklearn.feature_selection.SelectFromModel Transformer SelectFwe sklearn.feature_selection.SelectFwe Transformer SelectKBest sklearn.feature_selection.SelectKBest Transformer SelectPercentile sklearn.feature_selection.SelectPercentile Transformer SimpleImputer sklearn.impute.SimpleImputer Transformer SourceSplitter photonai.modelwrapper.source_splitter.SourceSplitter Transformer SparseCoder sklearn.decomposition.SparseCoder Transformer SparsePCA sklearn.decomposition.SparsePCA Transformer StandardScaler sklearn.preprocessing.StandardScaler Transformer TheilSenRegressor sklearn.linear_model.TheilSenRegressor Estimator TruncatedSVD sklearn.decomposition.TruncatedSVD Transformer VarianceThreshold sklearn.feature_selection.VarianceThreshold Transformer dict_learning sklearn.decomposition.dict_learning Transformer 
dict_learning_online sklearn.decomposition.dict_learning_online Transformer fastica sklearn.decomposition.fastica Transformer sparse_encode sklearn.decomposition.sparse_encode Transformer PhotonCluster KMeans sklearn.cluster.KMeans Estimator KMedoids sklearn_extra.cluster.KMedoids Estimator PhotonNeuro BrainAtlas photonai.neuro.brain_atlas.BrainAtlas Transformer BrainMask photonai.neuro.brain_atlas.BrainMask Transformer PatchImages photonai.neuro.nifti_transformations.PatchImages Transformer ResampleImages photonai.neuro.nifti_transformations.ResampleImages Transformer SmoothImages photonai.neuro.nifti_transformations.SmoothImages Transformer
MIT
Cluster/kmeans-kmedoids/KMedoids-blobs-3-0.3-varypts.ipynb
bcottman/photon_experiments
KMeans blobs
registry.info(CLUSTER_ALGO)

def hyper_cluster(cluster_name):
    if C_SHAPE == 'ellipse':
        yield_cluster = yield_parameters_ellipse
    else:
        yield_cluster = yield_parameters
    n_p_clusters = N_P_CLUSTERS
    for data_X, data_y, n_cluster in yield_cluster(n_p_clusters):
        simple_output('CLUSTER_ALGO:', CLUSTER_ALGO)
        simple_output('C_SHAPE:', C_SHAPE)
        simple_output('n_cluster:', n_cluster)
        simple_output('CLUSTERS_STD:', CLUSTERS_STD)
        simple_output('INNER_FOLDS:', INNER_FOLDS)
        simple_output('OUTER_FOLDS:', OUTER_FOLDS)
        simple_output('n_points:', len(data_y))

        X = data_X.copy(); y = data_y.copy()

        # DESIGN YOUR PIPELINE
        settings = OutputSettings(project_folder='./tmp/')
        my_pipe = Hyperpipe('batching',
                            optimizer='sk_opt',
                            # optimizer_params={'n_configurations': 25},
                            metrics=['ARI', 'MI', 'HCV', 'FM'],
                            best_config_metric='ARI',
                            outer_cv=KFold(n_splits=OUTER_FOLDS),
                            inner_cv=KFold(n_splits=INNER_FOLDS),
                            verbosity=0,
                            output_settings=settings)
        my_pipe += PipelineElement(cluster_name,
                                   hyperparameters={
                                       'n_clusters': IntegerRange(floor(n_cluster * .7), ceil(n_cluster * 1.2)),
                                   }, random_state=777)
        logger.info('Cluster optimization range: %s %s', floor(n_cluster * .7), ceil(n_cluster * 1.2))
        print('Cluster optimization range:', floor(n_cluster * .7), ceil(n_cluster * 1.2))

        # NOW TRAIN YOUR PIPELINE
        my_pipe.fit(X, y)
        debug = True

        # ------------------------------ plot
        y_pred = cluster_plot(my_pipe, X, n_cluster, PALLET)
        # ------------------------------ best config
        print(pd.DataFrame(my_pipe.best_config.items(), columns=['n_clusters', 'k']))
        # ------------------------------
        print('train', '\n', results_to_df(my_pipe.results.metrics_train))
        print('test', '\n', results_to_df(my_pipe.results.metrics_test))
        # ------------------------------
        # turn the ground-truth labels into a clusim Clustering
        true_clustering = Clustering().from_membership_list(y)
        kmeans_clustering = Clustering().from_membership_list(y_pred)
        # let's see how similar the predicted k-means clustering is to the true clustering
        # ------------------------------
        # using all available similarity measures!
        row_format2 = "{:>25}" * 2
        for simfunc in sim.available_similarity_measures:
            print(row_format2.format(simfunc, eval('sim.' + simfunc + '(true_clustering, kmeans_clustering)')))
        # ------------------------------
        # The element-centric similarity is particularly useful for understanding
        # how a clustering method performed.
        # Let's start with the single similarity value:
        elsim = sim.element_sim(true_clustering, kmeans_clustering)
        print("Element-centric similarity: {}".format(elsim))

hyper_cluster(CLUSTER_ALGO)
_____no_output_____
MIT
Cluster/kmeans-kmedoids/KMedoids-blobs-3-0.3-varypts.ipynb
bcottman/photon_experiments
# [NTDS'19] assignment 1: network science
[ntds'19]: https://github.com/mdeff/ntds_2019

[Eda Bayram](https://lts4.epfl.ch/bayram), [EPFL LTS4](https://lts4.epfl.ch) and [Nikolaos Karalias](https://people.epfl.ch/nikolaos.karalias), [EPFL LTS2](https://lts2.epfl.ch).

## Students

* Team: ``
* `Alice Bizeul, Gaia Carparelli, Antoine Spahr and Hugues Vinzant`

## Rules

Grading:
* The first deadline is for individual submissions. The second deadline is for the team submission.
* All team members will receive the same grade based on the team solution submitted on the second deadline.
* As a fallback, a team can ask for individual grading. In that case, solutions submitted on the first deadline are graded.
* Collaboration between team members is encouraged. No collaboration between teams is allowed.

Submission:
* Textual answers shall be short. Typically one to two sentences.
* Code has to be clean.
* You cannot import any other library than we imported. Note that NetworkX is imported in the second section and cannot be used in the first.
* When submitting, the notebook is executed and the results are stored. I.e., if you open the notebook again it should show numerical results and plots. We won't be able to execute your notebooks.
* The notebook is re-executed from a blank state before submission. That is to be sure it is reproducible. You can click "Kernel" then "Restart Kernel and Run All Cells" in Jupyter.

## Objective

The purpose of this milestone is to explore a given dataset and represent it as a network by constructing different graphs. In the first section, you will analyze the network properties. In the second section, you will explore various network models and find out which network model fits the ones you construct from the dataset.

## Cora Dataset

The [Cora dataset](https://linqs.soe.ucsc.edu/node/236) consists of scientific publications classified into one of seven research fields.

* **Citation graph:** the citation network can be constructed from the connections given in the `cora.cites` file.
* **Feature graph:** each publication in the dataset is described by a 0/1-valued word vector indicating the absence/presence of the corresponding word from the dictionary, together with its research field, given in the `cora.content` file. The dictionary consists of 1433 unique words. A feature graph can be constructed using the Euclidean distance between the feature vectors of the publications.

The [`README`](data/cora/README) provides details about the content of [`cora.cites`](data/cora/cora.cites) and [`cora.content`](data/cora/cora.content).

## Section 1: Network Properties
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt

%matplotlib inline
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
### Question 1: Construct a Citation Graph and a Feature Graph

Read the `cora.content` file into a Pandas DataFrame by setting a header for the column names. Check the `README` file.
column_list = ['paper_id'] + [str(i) for i in range(1, 1434)] + ['class_label']
pd_content = pd.read_csv('data/cora/cora.content', delimiter='\t', names=column_list)
pd_content.head()
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Print out the number of papers contained in each of the research fields.

**Hint:** You can use the `value_counts()` function.
pd_content['class_label'].value_counts()
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Select all papers from a field of your choice and store their feature vectors into a NumPy array. Check its shape.
my_field = 'Neural_Networks'
features = pd_content[pd_content['class_label'] == my_field].drop(columns=['paper_id', 'class_label']).to_numpy()
features.shape
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Let $D$ be the Euclidean distance matrix whose $(i,j)$ entry corresponds to the Euclidean distance between feature vectors $i$ and $j$. Using the feature vectors of the papers from the field which you have selected, construct $D$ as a NumPy array.
distance = np.zeros([features.shape[0], features.shape[0]])
for i in range(features.shape[0]):
    distance[i] = np.sqrt(np.sum((features[i, :] - features)**2, axis=1))
distance.shape
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Check the mean pairwise distance $\mathbb{E}[D]$.
# Mean over the upper triangle, as the matrix is symmetric (the diagonal is also excluded)
mean_distance = distance[np.triu_indices(distance.shape[1], 1)].mean()
print('Mean Euclidean distance between feature vectors of papers on Neural Networks: {}'.format(mean_distance))
Mean Euclidean distance between feature vectors of papers on Neural Networks: 5.696602496555962
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Plot a histogram of the Euclidean distances.
fig, ax = plt.subplots(1, 1, figsize=(8, 8))
ax.hist(distance.flatten(), density=True, bins=20, color='salmon', edgecolor='black', linewidth=1)
ax.set_title("Histogram of Euclidean distances between Neural-networks papers")
ax.set_xlabel("Euclidean distance")
ax.set_ylabel("Frequency")
ax.grid(True, which='major', axis='y')
ax.set_axisbelow(True)
plt.show()
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Now create an adjacency matrix for the papers by thresholding the Euclidean distance matrix. The resulting (unweighted) adjacency matrix should have entries
$$ A_{ij} = \begin{cases} 1, \; \text{if} \; d(i,j) < \mathbb{E}[D], \; i \neq j, \\ 0, \; \text{otherwise.} \end{cases} $$

First, let us choose the mean distance as the threshold.
threshold = mean_distance
A_feature = np.where(distance < threshold, 1, 0)
np.fill_diagonal(A_feature, 0)
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Now read the `cora.cites` file and construct the citation graph by converting the given citation connections into an adjacency matrix.
cora_cites = np.genfromtxt('data/cora/cora.cites', delimiter='\t')
papers = np.unique(cora_cites)
A_citation = np.zeros([papers.size, papers.size])
for i in range(cora_cites.shape[0]):
    A_citation[np.where(papers == cora_cites[i, 1]), np.where(papers == cora_cites[i, 0])] = 1
A_citation.shape
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Get the adjacency matrix of the citation graph for the field that you chose. You have to appropriately reduce the adjacency matrix of the citation graph.
# get the paper ids from the chosen field
field_id = pd_content[pd_content['class_label'] == my_field]["paper_id"].unique()
# get the index of those papers in the A_citation matrix (same ordering as the vector 'papers')
field_citation_id = np.empty(field_id.shape[0]).astype(int)
for i in range(field_id.shape[0]):
    field_citation_id[i] = np.where(papers == field_id[i])[0]
# keep only the rows/columns of A_citation corresponding to papers in the field
A_citation = A_citation[field_citation_id][:, field_citation_id]
A_citation.shape
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Check if your adjacency matrix is symmetric. Symmetrize your final adjacency matrix if it's not already symmetric.
# a matrix is symmetric if it equals its transpose
print('The citation adjacency matrix for papers on Neural Networks is symmetric: {}'.format(np.all(A_citation == A_citation.transpose())))
# symmetrize it by taking the element-wise maximum of A and A.transpose()
A_citation = np.maximum(A_citation, A_citation.transpose())
# verify that the matrix is now symmetric
print('After modifying the matrix, it is symmetric: {}'.format(np.count_nonzero(A_citation - A_citation.transpose()) == 0))
The citation adjacency matrix for papers on Neural Networks is symmetric: False
After modifying the matrix, it is symmetric: True
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Check the shape of your adjacency matrix again.
A_citation.shape
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
### Question 2: Degree Distribution and Moments

What is the total number of edges in each graph?
num_edges_feature = int(np.sum(A_feature) / 2)  # each edge is counted twice in a symmetric matrix
num_edges_citation = int(np.sum(A_citation) / 2)
print(f"Number of edges in the feature graph: {num_edges_feature}")
print(f"Number of edges in the citation graph: {num_edges_citation}")
Number of edges in the feature graph: 136771
Number of edges in the citation graph: 1175
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Plot the degree distribution histogram for each of the graphs.
degrees_citation = A_citation.sum(axis=1)  # degree = number of connections -> sum over columns (axis=1)
degrees_feature = A_feature.sum(axis=1)
deg_hist_normalization = np.ones(degrees_citation.shape[0]) / degrees_citation.shape[0]

fig, axes = plt.subplots(1, 2, figsize=(16, 8))
axes[0].set_title('Citation graph degree distribution')
axes[0].hist(degrees_citation, weights=deg_hist_normalization, bins=20, color='salmon', edgecolor='black', linewidth=1)
axes[1].set_title('Feature graph degree distribution')
axes[1].hist(degrees_feature, weights=deg_hist_normalization, bins=20, color='salmon', edgecolor='black', linewidth=1)
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Calculate the first and second moments of the degree distribution of each graph.
# note: np.var gives the second *central* moment (the variance), which is what is used here
cit_moment_1 = np.mean(degrees_citation)
cit_moment_2 = np.var(degrees_citation)
feat_moment_1 = np.mean(degrees_feature)
feat_moment_2 = np.var(degrees_feature)
print(f"1st moment of citation graph: {cit_moment_1:.3f}")
print(f"2nd moment of citation graph: {cit_moment_2:.3f}")
print(f"1st moment of feature graph: {feat_moment_1:.3f}")
print(f"2nd moment of feature graph: {feat_moment_2:.3f}")
1st moment of citation graph: 2.873
2nd moment of citation graph: 15.512
1st moment of feature graph: 334.403
2nd moment of feature graph: 55375.549
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
What information do the moments provide you about the graphs? Explain the differences in moments between graphs by comparing their degree distributions.

Answer: **The moments give a numerical summary of how sparse the graphs are and how the degrees are distributed. The first moment is the average degree; the second (central) moment is the variance of the degree distribution. A large first moment means a large number of edges per node on average, whereas the second moment measures how spread out the node degrees are around that average. For the citation graph, the first moment is around 2.9 and the second is higher (around 15.5) for a large number of nodes (818): there are many nodes with a small degree but also some larger hubs, so the network is likely to be sparse. The moments of the feature degree distribution are much larger, indicating a rather dense graph. Many nodes (15%) have a degree above 800, and since the network contains 818 nodes, many nodes are almost saturated. The high variance shows that the degree distribution is more diffuse around the average value than for the citation graph.**

Select the 20 largest hubs for each of the graphs and remove them. Observe the sparsity pattern of the adjacency matrices of the citation and feature graphs before and after such a reduction.
smallest_feat_hub_idx = np.argpartition(degrees_feature, degrees_feature.shape[0] - 20)[:-20]
smallest_feat_hub_idx.sort()
reduced_A_feature = A_feature[smallest_feat_hub_idx][:, smallest_feat_hub_idx]

smallest_cit_hub_idx = np.argpartition(degrees_citation, degrees_citation.shape[0] - 20)[:-20]
smallest_cit_hub_idx.sort()
reduced_A_citation = A_citation[smallest_cit_hub_idx][:, smallest_cit_hub_idx]

fig, axes = plt.subplots(2, 2, figsize=(16, 16))
axes[0, 0].set_title('Feature graph: adjacency matrix sparsity pattern')
axes[0, 0].spy(A_feature)
axes[0, 1].set_title('Feature graph without top 20 hubs: adjacency matrix sparsity pattern')
axes[0, 1].spy(reduced_A_feature)
axes[1, 0].set_title('Citation graph: adjacency matrix sparsity pattern')
axes[1, 0].spy(A_citation)
axes[1, 1].set_title('Citation graph without top 20 hubs: adjacency matrix sparsity pattern')
axes[1, 1].spy(reduced_A_citation)
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
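The variance's sensitivity to hubs discussed in the answer above can be seen on a toy degree sequence. A small illustrative sketch (the numbers are made up and are not from the Cora data):

```python
import numpy as np

degrees = np.array([1, 1, 2, 2, 2, 3, 40])  # toy degree sequence with one large hub
no_hub = degrees[:-1]                        # the same sequence with the hub removed

print(np.mean(degrees), np.var(degrees))     # the hub inflates both moments
print(np.mean(no_hub), np.var(no_hub))       # the variance collapses once the hub is gone
```

Removing a single extreme hub barely shifts the mean but reduces the variance by orders of magnitude, which mirrors the citation graph's behaviour below.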
Plot the new degree distribution histograms.
reduced_degrees_feat = reduced_A_feature.sum(axis=1)
reduced_degrees_cit = reduced_A_citation.sum(axis=1)
deg_hist_normalization = np.ones(reduced_degrees_feat.shape[0]) / reduced_degrees_feat.shape[0]

fig, axes = plt.subplots(1, 2, figsize=(16, 8))
axes[0].set_title('Citation graph degree distribution')
axes[0].hist(reduced_degrees_cit, weights=deg_hist_normalization, bins=8, color='salmon', edgecolor='black', linewidth=1)
axes[1].set_title('Feature graph degree distribution')
axes[1].hist(reduced_degrees_feat, weights=deg_hist_normalization, bins=20, color='salmon', edgecolor='black', linewidth=1)
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Compute the first and second moments for the new graphs.
reduced_cit_moment_1 = np.mean(reduced_degrees_cit)
reduced_cit_moment_2 = np.var(reduced_degrees_cit)
reduced_feat_moment_1 = np.mean(reduced_degrees_feat)
reduced_feat_moment_2 = np.var(reduced_degrees_feat)
print(f"Citation graph first moment: {reduced_cit_moment_1:.3f}")
print(f"Citation graph second moment: {reduced_cit_moment_2:.3f}")
print(f"Feature graph first moment: {reduced_feat_moment_1:.3f}")
print(f"Feature graph second moment: {reduced_feat_moment_2:.3f}")
Citation graph first moment: 1.972
Citation graph second moment: 2.380
Feature graph first moment: 302.308
Feature graph second moment: 50780.035
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Print the number of edges in the reduced graphs.
num_edges_reduced_feature = int(np.sum(reduced_A_feature) / 2)
num_edges_reduced_citation = int(np.sum(reduced_A_citation) / 2)
print(f"Number of edges in the reduced feature graph: {num_edges_reduced_feature}")
print(f"Number of edges in the reduced citation graph: {num_edges_reduced_citation}")
Number of edges in the reduced feature graph: 120621
Number of edges in the reduced citation graph: 787
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Is the effect of removing the hubs the same for both networks? Look at the percentage changes for each moment. Which of the moments is affected the most and in which graph? Explain why. **Hint:** Examine the degree distributions.
change_cit_moment_1 = (reduced_cit_moment_1 - cit_moment_1) / cit_moment_1
change_cit_moment_2 = (reduced_cit_moment_2 - cit_moment_2) / cit_moment_2
change_feat_moment_1 = (reduced_feat_moment_1 - feat_moment_1) / feat_moment_1
change_feat_moment_2 = (reduced_feat_moment_2 - feat_moment_2) / feat_moment_2
print(f"% Percentage of change for citation 1st moment: {change_cit_moment_1*100:.3f}")
print(f"% Percentage of change for citation 2nd moment: {change_cit_moment_2*100:.3f}")
print(f"% Percentage of change for feature 1st moment: {change_feat_moment_1*100:.3f}")
print(f"% Percentage of change for feature 2nd moment: {change_feat_moment_2*100:.3f}")
% Percentage of change for citation 1st moment: -31.343
% Percentage of change for citation 2nd moment: -84.656
% Percentage of change for feature 1st moment: -9.598
% Percentage of change for feature 2nd moment: -8.299
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Answer: **Comparing the percentage changes of the moments shows that removing the 20 largest hubs affects the citation degree distribution far more than the feature degree distribution. The second moment of the citation degree distribution drops by almost 85%. This is because the proportion of high-degree nodes was lower in the citation network than in the feature network, so they were essentially all removed as part of the 20 largest hubs, resulting in a much lower variance (a less spread-out distribution).**

**In conclusion, the new citation distribution is more concentrated around its mean value, so its degree landscape is more uniform. In the feature degree distribution, a substantial number of nodes remain hotspots.**

### Question 3: Pruning, sparsity, paths

By adjusting the threshold of the Euclidean distance matrix, prune the feature graph so that its number of edges is roughly close (within a hundred edges) to the number of edges in the citation graph.
threshold = np.max(distance)
diagonal = distance.shape[0]
threshold_flag = False
epsilon = 0.01 * threshold
tolerance = 250

while threshold > 0 and not threshold_flag:
    threshold -= epsilon  # steps of 1% of the maximum distance
    n_edge = int((np.count_nonzero(np.where(distance < threshold, 1, 0)) - diagonal) / 2)
    # stop when within the chosen tolerance of the citation edge count
    if abs(num_edges_citation - n_edge) < tolerance:
        threshold_flag = True
        print(f'Found a threshold : {threshold:.3f}')

A_feature_pruned = np.where(distance < threshold, 1, 0)
np.fill_diagonal(A_feature_pruned, 0)
num_edges_feature_pruned = int(np.count_nonzero(A_feature_pruned) / 2)
print(f"Number of edges in the feature graph: {num_edges_feature}")
print(f"Number of edges in the feature graph after pruning: {num_edges_feature_pruned}")
print(f"Number of edges in the citation graph: {num_edges_citation}")
Found a threshold : 2.957
Number of edges in the feature graph: 136771
Number of edges in the feature graph after pruning: 1386
Number of edges in the citation graph: 1175
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Remark: **For this particular field (Neural Networks), the distribution of distances (which takes only a discrete set of values, namely square roots of integers, since the features are binary) does not allow a configuration where the number of edges is within a hundred edges of the citation graph. This is independent of the chosen epsilon. The closest match is 250 edges apart.**

Check your results by comparing the sparsity patterns and total number of edges between the graphs.
fig, axes = plt.subplots(1, 2, figsize=(12, 6))
axes[0].set_title('Citation graph sparsity')
axes[0].spy(A_citation)
axes[1].set_title('Feature graph sparsity')
axes[1].spy(A_feature_pruned)
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
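One way to verify the remark above, that only certain edge counts are attainable because the pairwise distances take a discrete set of values, is to enumerate the unique distances and their cumulative pair counts. A minimal sketch on synthetic 0/1 features standing in for the real `features` array (the data below is random, not Cora):

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.integers(0, 2, size=(8, 12))  # toy binary feature matrix
D = np.sqrt(((F[:, None, :] - F[None, :, :]) ** 2).sum(axis=2))
upper = D[np.triu_indices(D.shape[0], 1)]

# every threshold lying between two consecutive unique distance values yields
# the same graph, so the attainable edge counts are exactly these cumulative counts
vals, counts = np.unique(upper, return_counts=True)
attainable = np.cumsum(counts)
print(vals)
print(attainable)
```

If no entry of `attainable` lands within the desired tolerance of the target edge count, no threshold can achieve it, regardless of the step size `epsilon`.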
Let $C_{k}(i,j)$ denote the number of paths of length $k$ from node $i$ to node $j$. We define the path matrix $P$, with entries $ P_{ij} = \displaystyle\sum_{k=0}^{N} C_{k}(i,j). $ Calculate the path matrices for both the citation and the unpruned feature graphs for $N = 10$.

**Hint:** Use [powers of the adjacency matrix](https://en.wikipedia.org/wiki/Adjacency_matrix#Matrix_powers).
def path_matrix(A, N=10):
    """Return the path matrix of A by summing the powers A^1 ... A^N.

    Note: the k = 0 term of the definition (the identity matrix) is omitted here.
    """
    power_A = [A]
    for i in range(N - 1):
        power_A.append(np.matmul(power_A[-1], A))
    return np.stack(power_A, axis=2).sum(axis=2)

path_matrix_citation = path_matrix(A_citation)
path_matrix_feature = path_matrix(A_feature)
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
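The hint above relies on the fact that entry $(i,j)$ of $A^k$ counts walks of length $k$ between nodes $i$ and $j$. A tiny illustrative check on a 3-node path graph (not part of the assignment data):

```python
import numpy as np

# path graph 0 - 1 - 2
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
A2 = A @ A
# there are exactly 2 walks of length 2 from node 1 back to itself
# (1->0->1 and 1->2->1), and 1 walk of length 2 from node 0 to node 2
print(A2)
```

Summing these powers up to $N$ therefore counts all walks of length at most $N$, which is what `path_matrix` computes.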
Check the sparsity pattern of both path matrices.
fig, axes = plt.subplots(1, 2, figsize=(16, 9))
axes[0].set_title('Citation Path matrix sparsity')
axes[0].spy(path_matrix_citation)
axes[1].set_title('Feature Path matrix sparsity')
axes[1].spy(path_matrix_feature, vmin=0, vmax=1)  # scaling the color range
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Now calculate the path matrix of the pruned feature graph for $N=10$. Plot the corresponding sparsity pattern. Is there any difference?
path_matrix_pruned = path_matrix(A_feature_pruned)
plt.figure(figsize=(12, 6))
plt.title('Feature Path matrix sparsity')
plt.spy(path_matrix_pruned)
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Your answer here: **Many pairs of nodes now have a path matrix value of zero, meaning that they are not reachable from one another within $N = 10$ steps. This makes sense, as many edges were removed in the pruning procedure (from roughly 136,000 down to about 1,400). Hence, the number of possible paths from $i$ to $j$ was reduced, reducing at the same time the number of paths of length at most $N$. Increasing the sparsity of the adjacency matrix increases the diameter of the network.**

Describe how you can use the above process of counting paths to determine whether a graph is connected or not. Is the original (unpruned) feature graph connected?

Answer: **The graph is connected if every node is reachable from every other node. In other words, if by increasing $N$ we reach a point where the path matrix no longer contains any null value, the graph is connected. Therefore, a path matrix with some null values can still correspond to a connected graph; this depends on the chosen $N$. For example, if 20 nodes are aligned in a chain, every node is reachable from every other, yet the number of paths of length 10 between the first and the last node remains 0.**

If the graph is connected, how can you guess its diameter using the path matrix?

Answer: **The diameter corresponds to the minimum $N$ ($N$ being a positive integer) for which the path matrix does not contain any null value.**

If any of your graphs is connected, calculate the diameter using that process.
N=0 diameter = None d_found = False while not d_found: N += 1 P = path_matrix(A_feature, N) if np.count_nonzero(P == 0) == 0: # if there are no zero in P d_found = True diameter = N print(f"The diameter of the feature graph (which is connected) is: {diameter}")
The diameter of the feature graph (which is connected) is: 2
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
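To make the counting idea concrete: if `path_matrix` is built from powers of the adjacency matrix (a common construction; this is an illustrative sketch, not the assignment's actual implementation), then the entry $(A^N)_{ij}$ counts the walks of length $N$ from node $i$ to node $j$. A minimal pure-Python example on the 3-node path graph 0–1–2:

```python
# Toy illustration: (A**N)[i][j] counts walks of length N between i and j.
# Graph: 0 -- 1 -- 2 (a simple path on 3 nodes).

def mat_mul(X, Y):
    """Multiply two square matrices given as lists of lists."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def walk_counts(A, N):
    """Return A**N, whose (i, j) entry counts length-N walks from i to j."""
    P = A
    for _ in range(N - 1):
        P = mat_mul(P, A)
    return P

A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]

P2 = walk_counts(A, 2)
print(P2[0][2])  # 1: the single walk 0-1-2 of length 2
```

Raising the matrix to higher powers (or summing the powers up to $N$) and checking for remaining zero entries is exactly the connectivity test described above.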
Check if your guess was correct using [NetworkX](https://networkx.github.io/documentation/stable/reference/algorithms/generated/networkx.algorithms.distance_measures.diameter.html).Note: usage of NetworkX is only allowed in this part of Section 1.
import networkx as nx feature_graph = nx.from_numpy_matrix(A_feature) print(f"Diameter of feature graph according to networkx: {nx.diameter(feature_graph)}")
Diameter of feature graph according to networkx: 2
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Section 2: Network Models In this section, you will analyze the feature and citation graphs you constructed in the previous section in terms of the network model types.For this purpose, you can use the NetworkX library imported below.
import networkx as nx
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Let us create NetworkX graph objects from the adjacency matrices computed in the previous section.
G_citation = nx.from_numpy_matrix(A_citation) print('Number of nodes: {}, Number of edges: {}'. format(G_citation.number_of_nodes(), G_citation.number_of_edges())) print('Number of self-loops: {}, Number of connected components: {}'. format(G_citation.number_of_selfloops(), nx.number_connected_components(G_citation)))
Number of nodes: 818, Number of edges: 1175 Number of self-loops: 0, Number of connected components: 104
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
In the rest of this assignment, we will consider the pruned feature graph as the feature network.
G_feature = nx.from_numpy_matrix(A_feature_pruned) print('Number of nodes: {}, Number of edges: {}'. format(G_feature.number_of_nodes(), G_feature.number_of_edges())) print('Number of self-loops: {}, Number of connected components: {}'. format(G_feature.number_of_selfloops(), nx.number_connected_components(G_feature)))
Number of nodes: 818, Number of edges: 1386 Number of self-loops: 0, Number of connected components: 684
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Question 4: Simulation with Erdős–Rényi and Barabási–Albert models Create an Erdős–Rényi and a Barabási–Albert graph using NetworkX to simulate the citation graph and the feature graph you have. When choosing parameters for the networks, take into account the number of vertices and edges of the original networks. The number of nodes should exactly match the number of nodes in the original citation and feature graphs.
assert len(G_citation.nodes()) == len(G_feature.nodes()) n = len(G_citation.nodes()) print('The number of nodes ({}) matches the original number of nodes: {}'.format(n,n==A_citation.shape[0]))
The number of nodes (818) matches the original number of nodes: True
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
The number of edges should match the average of the numbers of edges in the citation and feature graphs.
m = np.round((G_citation.size() + G_feature.size()) / 2) print('The number of match ({}) fits the average number of edges: {}'.format(m,m==np.round(np.mean([num_edges_citation,num_edges_feature_pruned]))))
The number of match (1280.0) fits the average number of edges: True
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
How do you determine the probability parameter for the Erdős–Rényi graph? Answer:**In a random network (edges placed at random, with no preferential attachment), the expected number of edges is given by: $\langle L \rangle = p\frac{N(N-1)}{2}$, where $\langle L \rangle$ is the expected number of edges, $N$ the number of nodes and $p$ the probability parameter. Therefore we can get $p$ from the number of edges we want and the number of nodes we have: $ p = \langle L \rangle\frac{2}{N(N-1)}$. The expected number of edges is given by $m$ in our case (defined as the average number of edges of the two original networks), and $N$ is the same as in the original graphs.**
p = m*2/(n*(n-1)) G_er = nx.erdos_renyi_graph(n, p)
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
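As a quick analytic cross-check of the relation above (plain Python; the constants restate the notebook's $n$ and $m$ rather than reading its variables):

```python
import math

n = 818            # number of nodes, as in the notebook
m = 1280           # target (average) number of edges
p = 2 * m / (n * (n - 1))

# In G(n, p) the expected number of edges is p * n * (n - 1) / 2,
# so with this choice of p it should equal m exactly.
expected_edges = p * n * (n - 1) / 2
print(round(expected_edges))  # 1280
assert math.isclose(expected_edges, m)
```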
Check the number of edges in the Erdős–Rényi graph.
print('My Erdos-Rényi network that simulates the citation graph has {} edges.'.format(G_er.size()))
My Erdos-Rényi network that simulates the citation graph has 1238 edges.
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
How do you determine the preferential attachment parameter for Barabási–Albert graphs? Answer :**The Barabási–Albert model uses growth and preferential attachment to build a scale-free network. The network is constructed by progressively adding nodes and attaching a fixed number of edges, $q$, to each node added. Those edges are preferentially drawn towards already existing nodes with a high degree (preferential attachment). By the end of the process, the network contains $n$ nodes and approximately $n \cdot q$ edges (exactly $(n-q)\,q$ in NetworkX, since the $q$ seed nodes start without edges). Knowing that the final number of edges should be $m$, the parameter is $q = m/n$.**
q = int(m/n) G_ba = nx.barabasi_albert_graph(n, q)
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
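A side note on the resulting edge count: in NetworkX's Barabási–Albert generator the $q$ seed nodes carry no edges, so the graph ends up with $(n-q)\,q$ edges rather than $n\,q$, consistent with the 817 edges printed further below. The arithmetic:

```python
n, q = 818, 1

# Each of the (n - q) non-seed nodes contributes exactly q edges,
# while the q seed nodes start with none.
edges = (n - q) * q
print(edges)  # 817
```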
Check the number of edges in the Barabási–Albert graph.
print('My Barabási-Albert network that simulates the citation graph has {} edges.'.format(G_ba.size()))
My Barabási-Albert network that simulates the citation graph has 817 edges.
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Question 5: Giant Component Check the size of the largest connected component in the citation and feature graphs.
giant_citation = max(nx.connected_component_subgraphs(G_citation), key=len) print('The giant component of the citation graph has {} nodes and {} edges.'.format(giant_citation.number_of_nodes(), giant_citation.size())) giant_feature = max(nx.connected_component_subgraphs(G_feature), key=len) print('The giant component of the feature graph has {} nodes and {} edges.'.format(giant_feature.number_of_nodes(), giant_feature.size()))
The giant component of the feature graph has 117 nodes and 1364 edges.
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Check the size of the giant components in the generated Erdős–Rényi graph.
giant_er = max(nx.connected_component_subgraphs(G_er), key=len) print('The giant component of the Erdos-Rényi network has {} nodes and {} edges.'.format(giant_er.number_of_nodes(), giant_er.size()))
The giant component of the Erdos-Rényi network has 771 nodes and 1234 edges.
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Let us match the number of nodes in the giant component of the feature graph by simulating a new Erdős–Rényi network. How do you choose the probability parameter this time? **Hint:** Recall the expected giant component size from the lectures. Answer : **The average degree of a node can be seen as the probability $p$ multiplied by the number of nodes it can connect to ($N-1$, because the network has no self-loops; $N$ is the number of nodes): $\langle k \rangle = p(N-1)$. Let $S$ be the fraction of nodes in the giant component, $S = \frac{N_{GC}}{N}$ ($N_{GC}$ being the number of nodes in the giant component), and let $u$ be the probability that a node $i$ is not linked to the GC via any other node $j$; $u$ is also the fraction of nodes not in the GC: $u = 1 - S$. For each of the $N-1$ other nodes $j$, node $i$ fails to reach the GC through $j$ either because there is no link to $j$ (probability $1-p$) or because there is a link but $j$ itself is not in the GC (probability $p \cdot u$), so $u = (1 - p + p u)^{N-1}$. Substituting $p = \frac{\langle k \rangle}{N-1}$, taking the logarithm of both sides and using the first-order Taylor expansion $\ln(1-x) \approx -x$ gives $u = e^{-\langle k \rangle S}$, i.e. $S = 1-e^{-\langle k \rangle S}$ => $e^{-\langle k \rangle S} = 1-S$ => $-\langle k \rangle S = \ln(1-S)$ => $\langle k \rangle = -\frac{1}{S}\ln(1-S)$. This expression for the average degree is then used to define $p$: $p = \frac{\langle k \rangle}{N-1} = \frac{-\frac{1}{S}\ln(1-S)}{N-1}$**
GC_node = giant_feature.number_of_nodes() S = GC_node/n avg_k = -1/S*np.log(1-S) p_new = avg_k/(n-1) G_er_new = nx.erdos_renyi_graph(n, p_new)
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
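The relation $\langle k \rangle = -\frac{1}{S}\ln(1-S)$ can be sanity-checked numerically: given $\langle k \rangle$, the equation $S = 1 - e^{-\langle k \rangle S}$ is solvable by fixed-point iteration. A small pure-Python sketch (standalone; the numbers mirror the feature graph's giant component but nothing here depends on the notebook state):

```python
import math

def giant_component_fraction(avg_k, s0=0.9, iters=500):
    """Solve S = 1 - exp(-avg_k * S) by fixed-point iteration."""
    S = s0
    for _ in range(iters):
        S = 1.0 - math.exp(-avg_k * S)
    return S

# Round trip: pick a target fraction S, derive <k> = -ln(1 - S)/S,
# then recover S from <k>.
S_target = 117 / 818                       # GC fraction of the feature graph
avg_k = -math.log(1.0 - S_target) / S_target
S_recovered = giant_component_fraction(avg_k)
print(abs(S_recovered - S_target) < 1e-6)  # True
```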
Check the size of the new Erdős–Rényi network and its giant component.
print('My new Erdos Renyi network that simulates the citation graph has {} edges.'.format(G_er_new.size())) giant_er_new = max(nx.connected_component_subgraphs(G_er_new), key=len) print('The giant component of the new Erdos-Rényi network has {} nodes and {} edges.'.format(giant_er_new.number_of_nodes(), giant_er_new.size()))
My new Erdos Renyi network that simulates the citation graph has 437 edges. The giant component of the new Erdos-Rényi network has 208 nodes and 210 edges.
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Question 6: Degree Distributions Recall the degree distribution of the citation and the feature graph.
fig, axes = plt.subplots(1, 2, figsize=(15, 6),sharex = True) axes[0].set_title('Citation graph') citation_degrees = [deg for (node, deg) in G_citation.degree()] axes[0].hist(citation_degrees, bins=20, color='salmon', edgecolor='black', linewidth=1); axes[1].set_title('Feature graph') feature_degrees = [deg for (node, deg) in G_feature.degree()] axes[1].hist(feature_degrees, bins=20, color='salmon', edgecolor='black', linewidth=1);
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
What does the degree distribution tell us about a network? Can you make a prediction on the network model type of the citation and the feature graph by looking at their degree distributions? Answer : **The degree distribution tells us how edges are spread across the nodes of a network. Both graphs show a power-law degree distribution (many nodes with few edges but a few hubs with a lot of edges). Hence they should fall into the scale-free network category, which exhibits this kind of degree distribution. Therefore the Barabási–Albert model, which generates random scale-free networks, is probably the best match. These distributions are indeed power laws, as can be seen from the linear behavior of the distribution on a log–log scale.** Now, plot the degree distribution histograms for the simulated networks.
fig, axes = plt.subplots(1, 3, figsize=(20, 8)) axes[0].set_title('Erdos-Rényi network') er_degrees = [deg for (node, deg) in G_er.degree()] axes[0].hist(er_degrees, bins=10, color='salmon', edgecolor='black', linewidth=1) axes[1].set_title('Barabási-Albert network') ba_degrees = [deg for (node, deg) in G_ba.degree()] axes[1].hist(ba_degrees, bins=10, color='salmon', edgecolor='black', linewidth=1) axes[2].set_title('new Erdos-Rényi network') er_new_degrees = [deg for (node, deg) in G_er_new.degree()] axes[2].hist(er_new_degrees, bins=6, color='salmon', edgecolor='black', linewidth=1) plt.show()
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
In terms of the degree distribution, is there a good match between the citation and feature graphs and the simulated networks?For the citation graph, choose one of the simulated networks above that matches its degree distribution best. Indicate your preference below. Answer : **Regarding the feature network, none of the distributions above matches its range of degrees, and none of them models the large number of hubs seen in the feature graph. Regarding the citation network, the Barabási–Albert network seems to be a good match: both the range of values and the power-law shape of the model are close to the distribution of the citation graph shown earlier. Hence, a scale-free model seems to be the best match for the citation network of the Neural Networks field.** You can also simulate a network using the configuration model to match its degree distribution exactly. Refer to [Configuration model](https://networkx.github.io/documentation/stable/reference/generated/networkx.generators.degree_seq.configuration_model.html#networkx.generators.degree_seq.configuration_model).Let us create another network to match the degree distribution of the feature graph.
G_config = nx.configuration_model(feature_degrees) print('Configuration model has {} nodes and {} edges.'.format(G_config.number_of_nodes(), G_config.size()))
Configuration model has 818 nodes and 1386 edges.
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Does it mean that we create the same graph as the feature graph with the configuration model? If not, how do you understand that they are not the same? Answer : **No, we don't create the same graph: the numbers of edges and nodes and the degree distribution are the same, but the links can be different. For example, in a group of three papers, several configurations are possible using only 2 links. Also, the function used to create this model allows self-loops and parallel edges, which do not occur in the real feature graph. Hence the network resulting from this model will most probably not be identical to the original graph.** Question 7: Clustering Coefficient Let us check the average clustering coefficient of the original citation and feature graphs.
nx.average_clustering(G_citation) nx.average_clustering(G_feature)
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
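A concrete way to see why matching the degree sequence does not pin down the graph: a 6-cycle and two disjoint triangles are both 2-regular on 6 nodes, yet they are clearly different graphs (for instance, they contain different numbers of triangles). A small pure-Python check:

```python
from itertools import combinations

cycle6 = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)}          # one 6-cycle
two_triangles = {(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)}   # two disjoint K3

def degree_sequence(edges, n=6):
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sorted(deg)

def triangle_count(edges, n=6):
    und = {frozenset(e) for e in edges}
    return sum(1 for a, b, c in combinations(range(n), 3)
               if {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))} <= und)

print(degree_sequence(cycle6) == degree_sequence(two_triangles))  # True
print(triangle_count(cycle6), triangle_count(two_triangles))      # 0 2
```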
What does the clustering coefficient tell us about a network? Comment on the values you obtain for the citation and feature graph. Answer :**The clustering coefficient is linked to the presence of subgroups (or clusters) in the network. A high clustering coefficient means that a node is very likely to be part of a subgroup. Here we observe that the clustering coefficient of the citation graph is higher (almost double) than that of the feature graph, which suggests that citations are more likely to form subgroups than features.** Now, let us check the average clustering coefficient for the simulated networks.
nx.average_clustering(G_er) nx.average_clustering(G_ba) nx.average_clustering(nx.Graph(G_config))
_____no_output_____
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
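For reference, the quantity NetworkX averages here is the local clustering coefficient $C(v) = \frac{2 e_v}{k_v (k_v - 1)}$, where $e_v$ is the number of edges among the $k_v$ neighbours of $v$. A minimal hand-rolled sketch (an illustration, not NetworkX's implementation):

```python
from itertools import combinations

def local_clustering(adj, v):
    """C(v) = (# edges among neighbours of v) / (k_v choose 2)."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return 2.0 * links / (k * (k - 1))

def avg_clustering(adj):
    return sum(local_clustering(adj, v) for v in adj) / len(adj)

# Triangle plus a pendant node: edges 0-1, 1-2, 2-0, 2-3
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(local_clustering(adj, 0))           # 1.0 (its two neighbours are linked)
print(round(local_clustering(adj, 2), 3)) # 0.333 (one of three neighbour pairs linked)
print(round(avg_clustering(adj), 3))      # 0.583
```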
Comment on the values you obtain for the simulated networks. Is there any good match to the citation or feature graph in terms of clustering coefficient? Answer : **No, there is no good match. The clustering coefficients are rather small compared to those of the feature and citation graphs. Random networks generally have small clustering coefficients because they don't tend to form subgroups, as the pairing is random.** Check the other [network model generators](https://networkx.github.io/documentation/networkx-1.10/reference/generators.html) provided by NetworkX. Which one do you predict to have a better match to the citation graph or the feature graph in terms of degree distribution and clustering coefficient at the same time? Justify your answer. Answer : **Based on the course notes about the Watts–Strogatz model, which is an extension of the random network model that produces small-world properties and high clustering, we tested the watts_strogatz_graph function provided by NetworkX. We used the average degree ($k = m*2/n$) as an initial guess for the number of nearest neighbours to which each node is connected. We then modulated the rewiring probability to find a good match. The results did not show any satisfying match for the clustering coefficient (it was always rather low compared to the original networks). We then tuned the parameter k by increasing it for a fixed p of 0.5 (corresponding to the small-world regime), since k was originally very low and we wanted to increase the occurrence of clusters. At k = 100 the clustering coefficient matched our expectations (being close to the clustering coefficients of the two original graphs), but the degree distribution no longer matched a power law. In conclusion, Watts–Strogatz was set aside, as no combination of parameters matched both the clustering coefficient and the shape of the distribution. After scrolling through the NetworkX documentation, we came across the power_law_cluster function.
According to the documentation, parameter n is the number of nodes (n = 818 in our case). The second parameter, k, is the _number of random edges to add to each new node_, which we chose to be the average degree of the original graph as an initial guess. Parameter p, the probability of connecting two nodes that already share a common neighbour (forming a triangle), is initially chosen as the average of the average clustering coefficients of the two original graphs. This yielded a clustering coefficient that was a bit low compared with our expectations. We therefore tuned this parameter to better match the coefficients; the results showed that a good compromise was reached at p = 0.27.** If you find a better fit, create a graph object below for that network model. Print the number of edges and the average clustering coefficient. Plot the histogram of the degree distribution.
k = m*2/n p = (nx.average_clustering(G_citation) + nx.average_clustering(G_feature))*0.8 G_pwc = nx.powerlaw_cluster_graph(n, int(k), p) print('Power law cluster model has {} edges.'.format(G_pwc.size())) print('Power law cluster model has a clustering coefficient of {}'.format(nx.average_clustering(G_pwc))) print('Citation model has {} edges.'.format(G_citation.size())) print('Citation model has a clustering coefficient of {}'.format(nx.average_clustering(G_citation))) print('Feature model has {} edges.'.format(G_feature.size())) print('Feature model has a clustering coefficient of {}'.format(nx.average_clustering(G_feature))) fig, axs = plt.subplots(1, 3, figsize=(15, 5), sharey=True, sharex=True) axs[0].set_title('PWC graph') ws_degrees = [deg for (node, deg) in G_pwc.degree()] axs[0].hist(ws_degrees, bins=20, color='salmon', edgecolor='black', linewidth=1) axs[1].set_title('Citation graph') citation_degrees = [deg for (node, deg) in G_citation.degree()] axs[1].hist(citation_degrees, bins=20, color='salmon', edgecolor='black', linewidth=1) axs[2].set_title('Feature graph') feature_degree = [deg for (node, deg) in G_feature.degree()] axs[2].hist(feature_degree, bins=20, color='salmon', edgecolor='black', linewidth=1) plt.show()
Power law cluster model has 2440 edges. Power law cluster model has a clustering coefficient of 0.16384898371644135 Citation model has 1175 edges. Citation model has a clustering coefficient of 0.21693567980632222 Feature model has 1386 edges. Feature model has a clustering coefficient of 0.1220744470334593
Apache-2.0
Assignments/1_network_science.ipynb
carparel/NTDS
Implementing a CGAN for the Iris data set to generate synthetic data Import necessary modules and packages
import os while os.path.basename(os.getcwd()) != 'Synthetic_Data_GAN_Capstone': os.chdir('..') from utils.utils import * safe_mkdir('experiments') from utils.data_loading import load_raw_dataset import pandas as pd import numpy as np import matplotlib.pyplot as plt import torch import torch.nn as nn import torch.optim as optim from torch.autograd import Variable from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from models.VGAN import VGAN_Generator, VGAN_Discriminator from models.CGAN_iris import CGAN_Generator, CGAN_Discriminator import random
_____no_output_____
MIT
notebooks/reports/iris.ipynb
Atrus619/CSDGAN
Set random seed for reproducibility
manualSeed = 999 print("Random Seed: ", manualSeed) random.seed(manualSeed) torch.manual_seed(manualSeed)
Random Seed: 999
MIT
notebooks/reports/iris.ipynb
Atrus619/CSDGAN
Import and briefly inspect data
iris = load_raw_dataset('iris') iris.head()
_____no_output_____
MIT
notebooks/reports/iris.ipynb
Atrus619/CSDGAN
Preprocessing dataSplit 50-50 so we can demonstrate the effectiveness of additional data
x_train, x_test, y_train, y_test = train_test_split(iris.drop(columns='species'), iris.species, test_size=0.5, stratify=iris.species, random_state=manualSeed) print("x_train:", x_train.shape) print("x_test:", x_test.shape)
x_train: (75, 4) x_test: (75, 4)
MIT
notebooks/reports/iris.ipynb
Atrus619/CSDGAN
Model parameters (feel free to play with these)
nz = 32 # Size of generator noise input H = 16 # Size of hidden network layer out_dim = x_train.shape[1] # Size of output bs = x_train.shape[0] # Full data set nc = 3 # 3 different types of label in this problem num_batches = 1 num_epochs = 10000 exp_name = 'experiments/iris_1x16' safe_mkdir(exp_name)
_____no_output_____
MIT
notebooks/reports/iris.ipynb
Atrus619/CSDGAN
Adam optimizer hyperparametersI set these based on the original paper, but feel free to play with them as well.
lr = 2e-4 beta1 = 0.5 beta2 = 0.999
_____no_output_____
MIT
notebooks/reports/iris.ipynb
Atrus619/CSDGAN
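For context on what these three numbers control, here is a single Adam update for one scalar parameter written out in plain Python. This is a sketch of the published update rule, not PyTorch's internals, and the `eps` value is an assumed default:

```python
import math

def adam_step(theta, grad, m, v, t, lr=2e-4, beta1=0.5, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter; returns (theta, m, v)."""
    m = beta1 * m + (1 - beta1) * grad        # 1st-moment EMA, decay beta1
    v = beta2 * v + (1 - beta2) * grad ** 2   # 2nd-moment EMA, decay beta2
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# First step with a unit gradient: bias correction cancels the decay,
# so the parameter moves by almost exactly lr.
theta, m, v = adam_step(theta=0.0, grad=1.0, m=0.0, v=0.0, t=1)
print(theta)  # close to -2e-4
```

Lowering `beta1` from the usual 0.9 to 0.5, as the DCGAN paper suggests, shortens the gradient-averaging memory, which tends to stabilize adversarial training.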
Set the device
device = torch.device("cuda:0" if (torch.cuda.is_available()) else "cpu")
_____no_output_____
MIT
notebooks/reports/iris.ipynb
Atrus619/CSDGAN
Scale continuous inputs for neural networks
scaler = StandardScaler() x_train = scaler.fit_transform(x_train) x_train_tensor = torch.tensor(x_train, dtype=torch.float) y_train_dummies = pd.get_dummies(y_train) y_train_dummies_tensor = torch.tensor(y_train_dummies.values, dtype=torch.float)
_____no_output_____
MIT
notebooks/reports/iris.ipynb
Atrus619/CSDGAN
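For intuition, `StandardScaler` learns each column's mean and population standard deviation on the training split only and then reuses them unchanged, which is why the test split is later passed through `scaler.transform` rather than refit. A minimal one-column sketch in plain Python (an illustration, not sklearn's implementation):

```python
from statistics import mean, pstdev

def fit(train):
    """Learn mean and population std from the training values only."""
    return mean(train), pstdev(train)

def transform(values, mu, sigma):
    """Standardize values with parameters learned on the training split."""
    return [(x - mu) / sigma for x in values]

train = [2, 4, 4, 4, 5, 5, 7, 9]        # mean 5, population std 2
mu, sigma = fit(train)
print(transform([3, 5, 9], mu, sigma))  # [-1.0, 0.0, 2.0]
```

Fitting on the test split as well would leak test-set statistics into the model, so only `transform` is applied there.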
Instantiate nets
netG = CGAN_Generator(nz=nz, H=H, out_dim=out_dim, nc=nc, bs=bs, lr=lr, beta1=beta1, beta2=beta2).to(device) netD = CGAN_Discriminator(H=H, out_dim=out_dim, nc=nc, lr=lr, beta1=beta1, beta2=beta2).to(device)
_____no_output_____
MIT
notebooks/reports/iris.ipynb
Atrus619/CSDGAN
Print models I chose to avoid using sequential mode in case I wanted to create non-sequential networks; it is more flexible in my opinion, but does not print out as nicely
print(netG) print(netD)
CGAN_Generator( (fc1): Linear(in_features=35, out_features=16, bias=True) (output): Linear(in_features=16, out_features=4, bias=True) (act): LeakyReLU(negative_slope=0.2) (loss_fn): BCELoss() ) CGAN_Discriminator( (fc1): Linear(in_features=7, out_features=16, bias=True) (output): Linear(in_features=16, out_features=1, bias=True) (act): LeakyReLU(negative_slope=0.2) (m): Sigmoid() (loss_fn): BCELoss() )
MIT
notebooks/reports/iris.ipynb
Atrus619/CSDGAN
Define labels
real_label = 1 fake_label = 0
_____no_output_____
MIT
notebooks/reports/iris.ipynb
Atrus619/CSDGAN
Training LoopLook through the comments to better understand the steps that are taking place
print("Starting Training Loop...") for epoch in range(num_epochs): for i in range(num_batches): # Only one batch per epoch since our data is horrifically small # Update Discriminator # All real batch first real_data = x_train_tensor.to(device) # Format batch (entire data set in this case) real_classes = y_train_dummies_tensor.to(device) label = torch.full((bs,), real_label, device=device) # All real labels output = netD(real_data, real_classes).view(-1) # Forward pass with real data through Discriminator netD.train_one_step_real(output, label) # All fake batch next noise = torch.randn(bs, nz, device=device) # Generate batch of latent vectors fake = netG(noise, real_classes) # Fake image batch with netG label.fill_(fake_label) output = netD(fake.detach(), real_classes).view(-1) netD.train_one_step_fake(output, label) netD.combine_and_update_opt() netD.update_history() # Update Generator label.fill_(real_label) # Reverse labels, fakes are real for generator cost output = netD(fake, real_classes).view(-1) # Since D has been updated, perform another forward pass of all-fakes through D netG.train_one_step(output, label) netG.update_history() # Output training stats if epoch % 1000 == 0 or (epoch == num_epochs-1): print('[%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f' % (epoch+1, num_epochs, netD.loss.item(), netG.loss.item(), netD.D_x, netD.D_G_z1, netG.D_G_z2)) with torch.no_grad(): fake = netG(netG.fixed_noise, real_classes).detach().cpu() netG.fixed_noise_outputs.append(scaler.inverse_transform(fake)) print("Training Complete")
Starting Training Loop... [1/10000] Loss_D: 1.3938 Loss_G: 0.7742 D(x): 0.4603 D(G(z)): 0.4609 / 0.4611 [1001/10000] Loss_D: 1.3487 Loss_G: 0.7206 D(x): 0.5089 D(G(z)): 0.4883 / 0.4882 [2001/10000] Loss_D: 1.3557 Loss_G: 0.7212 D(x): 0.5025 D(G(z)): 0.4864 / 0.4864 [3001/10000] Loss_D: 1.4118 Loss_G: 0.6679 D(x): 0.5029 D(G(z)): 0.5138 / 0.5135 [4001/10000] Loss_D: 1.3725 Loss_G: 0.7053 D(x): 0.5021 D(G(z)): 0.4943 / 0.4942 [5001/10000] Loss_D: 1.3690 Loss_G: 0.7011 D(x): 0.5064 D(G(z)): 0.4967 / 0.4966 [6001/10000] Loss_D: 1.3813 Loss_G: 0.6954 D(x): 0.5028 D(G(z)): 0.4993 / 0.4994 [7001/10000] Loss_D: 1.3799 Loss_G: 0.6996 D(x): 0.5006 D(G(z)): 0.4970 / 0.4969 [8001/10000] Loss_D: 1.3810 Loss_G: 0.6978 D(x): 0.5006 D(G(z)): 0.4976 / 0.4978 [9001/10000] Loss_D: 1.3871 Loss_G: 0.6914 D(x): 0.5008 D(G(z)): 0.5010 / 0.5010 [10000/10000] Loss_D: 1.3838 Loss_G: 0.6955 D(x): 0.5003 D(G(z)): 0.4989 / 0.4989 Training Complete
MIT
notebooks/reports/iris.ipynb
Atrus619/CSDGAN
Output diagnostic plots tracking training progress and statistics
%matplotlib inline training_plots(netD=netD, netG=netG, num_epochs=num_epochs, save=exp_name) plot_layer_scatters(netG, title="Generator", save=exp_name) plot_layer_scatters(netD, title="Discriminator", save=exp_name)
/home/aj/.local/lib/python3.7/site-packages/matplotlib/figure.py:445: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure. % get_backend())
MIT
notebooks/reports/iris.ipynb
Atrus619/CSDGAN
It looks like training stabilized fairly quickly, after only a few thousand iterations. The fact that the weight norm increased over time probably means that this network would benefit from some regularization. Compare performance of training on fake data versus real dataIn this next section, we will lightly tune two models via cross-validation. The first model will be trained on the 75 real training data examples and tested on the remaining 75 testing data examples, whereas the second set of models will be trained on different amounts of generated data (no real data involved whatsoever). We will then compare performance and plot some graphs to evaluate our CGAN.
y_test_dummies = pd.get_dummies(y_test) print("Dummy columns match?", all(y_train_dummies.columns == y_test_dummies.columns)) x_test = scaler.transform(x_test) labels_list = [x for x in y_train_dummies.columns] param_grid = {'tol': [1e-9, 1e-8, 1e-7, 1e-6, 1e-5], 'C': [0.5, 0.75, 1, 1.25], 'l1_ratio': [0, 0.25, 0.5, 0.75, 1]}
Dummy columns match? True
MIT
notebooks/reports/iris.ipynb
Atrus619/CSDGAN
Train on real data
model_real, score_real = train_test_logistic_reg(x_train, y_train, x_test, y_test, param_grid=param_grid, cv=5, random_state=manualSeed, labels=labels_list)
Accuracy: 0.92 Best Parameters: {'C': 1.25, 'l1_ratio': 0.5, 'tol': 1e-09} precision recall f1-score support Iris-setosa 1.00 1.00 1.00 25 Iris-versicolor 0.85 0.92 0.88 25 Iris-virginica 0.91 0.84 0.87 25 accuracy 0.92 75 macro avg 0.92 0.92 0.92 75 weighted avg 0.92 0.92 0.92 75 [[25 0 0] [ 0 23 2] [ 0 4 21]]
MIT
notebooks/reports/iris.ipynb
Atrus619/CSDGAN
Train on various levels of fake data
test_range = [75, 150, 300, 600, 1200] fake_bs = bs fake_models = [] fake_scores = [] for size in test_range: num_batches = size // fake_bs + 1 genned_data = np.empty((0, out_dim)) genned_labels = np.empty(0) rem = size while rem > 0: curr_size = min(fake_bs, rem) noise = torch.randn(curr_size, nz, device=device) fake_labels, output_labels = gen_labels(size=curr_size, num_classes=nc, labels_list=labels_list) fake_labels = fake_labels.to(device) rem -= curr_size fake_data = netG(noise, fake_labels).cpu().detach().numpy() genned_data = np.concatenate((genned_data, fake_data)) genned_labels = np.concatenate((genned_labels, output_labels)) print("For size of:", size) model_fake_tmp, score_fake_tmp = train_test_logistic_reg(genned_data, genned_labels, x_test, y_test, param_grid=param_grid, cv=5, random_state=manualSeed, labels=labels_list) fake_models.append(model_fake_tmp) fake_scores.append(score_fake_tmp)
For size of: 75 Accuracy: 0.92 Best Parameters: {'C': 0.5, 'l1_ratio': 0.25, 'tol': 1e-09} precision recall f1-score support Iris-setosa 1.00 1.00 1.00 25 Iris-versicolor 0.88 0.88 0.88 25 Iris-virginica 0.88 0.88 0.88 25 accuracy 0.92 75 macro avg 0.92 0.92 0.92 75 weighted avg 0.92 0.92 0.92 75 [[25 0 0] [ 0 22 3] [ 0 3 22]] For size of: 150 Accuracy: 0.92 Best Parameters: {'C': 0.5, 'l1_ratio': 1, 'tol': 1e-09} precision recall f1-score support Iris-setosa 1.00 1.00 1.00 25 Iris-versicolor 0.88 0.88 0.88 25 Iris-virginica 0.88 0.88 0.88 25 accuracy 0.92 75 macro avg 0.92 0.92 0.92 75 weighted avg 0.92 0.92 0.92 75 [[25 0 0] [ 0 22 3] [ 0 3 22]] For size of: 300 Accuracy: 0.92 Best Parameters: {'C': 0.5, 'l1_ratio': 0, 'tol': 1e-09} precision recall f1-score support Iris-setosa 1.00 1.00 1.00 25 Iris-versicolor 0.88 0.88 0.88 25 Iris-virginica 0.88 0.88 0.88 25 accuracy 0.92 75 macro avg 0.92 0.92 0.92 75 weighted avg 0.92 0.92 0.92 75 [[25 0 0] [ 0 22 3] [ 0 3 22]] For size of: 600 Accuracy: 0.92 Best Parameters: {'C': 0.5, 'l1_ratio': 0.75, 'tol': 1e-09} precision recall f1-score support Iris-setosa 1.00 1.00 1.00 25 Iris-versicolor 0.88 0.88 0.88 25 Iris-virginica 0.88 0.88 0.88 25 accuracy 0.92 75 macro avg 0.92 0.92 0.92 75 weighted avg 0.92 0.92 0.92 75 [[25 0 0] [ 0 22 3] [ 0 3 22]] For size of: 1200 Accuracy: 0.9466666666666667 Best Parameters: {'C': 0.5, 'l1_ratio': 0, 'tol': 1e-09} precision recall f1-score support Iris-setosa 1.00 1.00 1.00 25 Iris-versicolor 0.89 0.96 0.92 25 Iris-virginica 0.96 0.88 0.92 25 accuracy 0.95 75 macro avg 0.95 0.95 0.95 75 weighted avg 0.95 0.95 0.95 75 [[25 0 0] [ 0 24 1] [ 0 3 22]]
MIT
notebooks/reports/iris.ipynb
Atrus619/CSDGAN
Well, it looks like this experiment was a success. The models trained on fake data were actually able to outperform models trained on real data, which supports the belief that the CGAN is able to understand the distribution of the data it was trained on and generate meaningful examples that can be used to add additional information to the model. Let's visualize some of the distributions of outputs to get a better idea of what took place
iris_plot_scatters(genned_data, genned_labels, "Fake Data", scaler, alpha=0.5, save=exp_name) # Fake data iris_plot_scatters(iris.drop(columns='species'), np.array(iris.species), "Full Real Data Set", alpha=0.5, save=exp_name) # All real data iris_plot_densities(genned_data, genned_labels, "Fake Data", scaler, save=exp_name) # Fake data iris_plot_densities(iris.drop(columns='species'), np.array(iris.species), "Full Real Data Set", save=exp_name) # All real data plot_scatter_matrix(genned_data, "Fake Data", iris.drop(columns='species'), scaler=scaler, save=exp_name) plot_scatter_matrix(iris.drop(columns='species'), "Real Data", iris.drop(columns='species'), scaler=None, save=exp_name)
_____no_output_____
MIT
notebooks/reports/iris.ipynb
Atrus619/CSDGAN
Finally, I present a summary of the test results ran above
fake_data_training_plots(real_range=bs, score_real=score_real, test_range=test_range, fake_scores=fake_scores, save=exp_name)
/home/aj/.local/lib/python3.7/site-packages/matplotlib/figure.py:445: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure. % get_backend())
MIT
notebooks/reports/iris.ipynb
Atrus619/CSDGAN
Predictive Maintenance using Machine Learning on SageMaker *Part 3 - Time series data preparation* Initialization --- Directory structure to run this notebook:```nasa-turbofan-rul-lstm|+--- data| || +--- interim: intermediate data we can manipulate and process| || \--- raw: *immutable* data downloaded from the source website|+--- notebooks: all the notebooks are positioned here|+--- src: utility python modules are stored here``` Imports
%load_ext autoreload

import matplotlib.pyplot as plt
import sagemaker
import boto3
import os
import errno
import pandas as pd
import numpy as np
import seaborn as sns
import json
import sys
import s3fs
import mxnet as mx
import joblib

%matplotlib inline
%autoreload 2

sns.set_style('darkgrid')
sys.path.append('../src')
figures = []

INTERIM_DATA = '../data/interim'
PROCESSED_DATA = '../data/processed'
_____no_output_____
MIT-0
nasa-turbofan-rul-lstm/notebooks/3 - Data preparation.ipynb
michaelhoarau/sagemaker-predictive-maintenance
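The directory layout above can also be created programmatically before running the notebooks. A minimal sketch — the temporary root is only for illustration (in practice the tree lives at the repository root), and `data/processed` is added because the `PROCESSED_DATA` constant above points to it:

```python
import os
import tempfile

# Create the expected project tree under a throwaway root (illustration only):
root = os.path.join(tempfile.mkdtemp(), 'nasa-turbofan-rul-lstm')
for sub in ('data/interim', 'data/raw', 'data/processed', 'notebooks', 'src'):
    os.makedirs(os.path.join(root, sub), exist_ok=True)

print(sorted(os.listdir(root)))  # ['data', 'notebooks', 'src']
```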
Loading data from the previous notebook
# Load data from the notebook local storage:
%store -r reduced_train_data
%store -r reduced_test_data

# If the data are not present in the notebook local storage, we need to load them from disk:
success_msg = 'Loaded "reduced_train_data"'
if 'reduced_train_data' not in locals():
    print('Nothing in notebook store, trying to load from disk.')
    try:
        local_path = '../data/interim'
        reduced_train_data = pd.read_csv(os.path.join(local_path, 'reduced_train_data.csv'))
        reduced_train_data = reduced_train_data.set_index(['unit_number', 'time'])
        print(success_msg)
    except Exception as e:
        if (e.errno == errno.ENOENT):
            print('Files not found to load train data from: you need to execute the previous notebook.')
else:
    print('Train data found in notebook environment.')
    print(success_msg)

success_msg = 'Loaded "reduced_test_data"'
if 'reduced_test_data' not in locals():
    print('Nothing in notebook store, trying to load from disk.')
    try:
        local_path = '../data/interim'
        reduced_test_data = pd.read_csv(os.path.join(local_path, 'reduced_test_data.csv'))
        reduced_test_data = reduced_test_data.set_index(['unit_number', 'time'])
        print(success_msg)
    except Exception as e:
        if (e.errno == errno.ENOENT):
            print('Files not found to load test data from: you need to execute the previous notebook.')
else:
    print('Test data found in notebook environment.')
    print(success_msg)

print(reduced_train_data.shape)
reduced_train_data.head()

print(reduced_test_data.shape)
reduced_test_data.head()
(13096, 26)
MIT-0
nasa-turbofan-rul-lstm/notebooks/3 - Data preparation.ipynb
michaelhoarau/sagemaker-predictive-maintenance
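The load-or-fallback pattern above (try the notebook store first, then fall back to the CSVs on disk) can be factored into a small helper. `load_or_fallback` is a hypothetical name, not part of the notebook's `lstm_utils`:

```python
import os

def load_or_fallback(cached, loader, path):
    """Return the cached object if present, otherwise load it from disk (hypothetical helper)."""
    if cached is not None:
        return cached
    if not os.path.exists(path):
        raise FileNotFoundError('you need to execute the previous notebook to produce ' + path)
    return loader(path)

# The cached value wins; the disk loader is never called:
print(load_or_fallback('from-store', lambda p: 'from-disk', '/nonexistent.csv'))  # from-store
```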
Study parameters
sequence_length = 20
_____no_output_____
MIT-0
nasa-turbofan-rul-lstm/notebooks/3 - Data preparation.ipynb
michaelhoarau/sagemaker-predictive-maintenance
Normalization
---

Normalizing the training data

First, we build some scalers based on the training data:
from sklearn import preprocessing

# Isolate the columns to normalize:
normalized_cols = reduced_train_data.columns.difference(['true_rul', 'piecewise_rul'])

# Build MinMax scalers for the features and the labels:
features_scaler = preprocessing.MinMaxScaler()
labels_scaler = preprocessing.MinMaxScaler()

# Normalize the operational settings and sensor measurements data (our features):
reduced_train_data['sensor_measurement_17'] = reduced_train_data['sensor_measurement_17'].astype(np.float64)
normalized_data = pd.DataFrame(
    features_scaler.fit_transform(reduced_train_data[normalized_cols]),
    columns=normalized_cols,
    index=reduced_train_data.index
)

# Normalizing the labels data:
reduced_train_data['piecewise_rul'] = reduced_train_data['piecewise_rul'].astype(np.float64)
normalized_training_labels = pd.DataFrame(
    labels_scaler.fit_transform(reduced_train_data[['piecewise_rul']]),
    columns=['piecewise_rul'],
    index=reduced_train_data.index
)

# Join the normalized features with the RUL (label) data:
joined_data = normalized_training_labels.join(normalized_data)
normalized_train_data = joined_data.reindex(columns=reduced_train_data.columns)
normalized_train_data['true_rul'] = reduced_train_data['true_rul']

print(normalized_train_data.shape)
normalized_train_data.head()
(20631, 19)
MIT-0
nasa-turbofan-rul-lstm/notebooks/3 - Data preparation.ipynb
michaelhoarau/sagemaker-predictive-maintenance
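MinMax scaling maps each feature's training range onto [0, 1]. A pure-Python sketch of what `MinMaxScaler` does under the hood, for a single feature — the helper names are hypothetical, and the real scaler additionally handles multiple columns and degenerate ranges. It also shows why the scalers must be fit on training data only: a test value outside the training range simply lands outside [0, 1].

```python
def fit_minmax(values):
    # Learn the scaling parameters from the *training* data only:
    return min(values), max(values)

def transform_minmax(values, lo, hi):
    # Apply the learned parameters; unseen values may fall outside [0, 1]:
    return [(v - lo) / (hi - lo) for v in values]

lo, hi = fit_minmax([10.0, 20.0, 30.0])
print(transform_minmax([10.0, 20.0, 30.0], lo, hi))  # [0.0, 0.5, 1.0]
print(transform_minmax([35.0], lo, hi))              # [1.25] -> test value beyond the training range
```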
Normalizing the testing data

Next, we apply these normalizers to the testing data:
normalized_test_data = pd.DataFrame(
    features_scaler.transform(reduced_test_data[normalized_cols]),
    columns=normalized_cols,
    index=reduced_test_data.index
)

reduced_test_data['piecewise_rul'] = reduced_test_data['piecewise_rul'].astype(np.float64)
normalized_test_labels = pd.DataFrame(
    labels_scaler.transform(reduced_test_data[['piecewise_rul']]),
    columns=['piecewise_rul'],
    index=reduced_test_data.index
)

# Join the normalized data with the RUL data:
joined_data = normalized_test_labels.join(normalized_test_data)
normalized_test_data = joined_data.reindex(columns=reduced_test_data.columns)
normalized_test_data['true_rul'] = reduced_test_data['true_rul']

print(normalized_test_data.shape)
normalized_test_data.head()
(13096, 26)
MIT-0
nasa-turbofan-rul-lstm/notebooks/3 - Data preparation.ipynb
michaelhoarau/sagemaker-predictive-maintenance
Sequences generation
---
from lstm_utils import generate_sequences, generate_labels, generate_training_sequences, generate_testing_sequences

# Building features, target and engine unit lists:
features = normalized_train_data.columns.difference(['true_rul', 'piecewise_rul'])
target = ['piecewise_rul']
unit_list = list(set(normalized_train_data.index.get_level_values(level=0).tolist()))

# Generating features and labels for the training sequences:
train_sequences = generate_training_sequences(normalized_train_data, sequence_length, features, unit_list)
train_labels = generate_training_sequences(normalized_train_data, sequence_length, target, unit_list)
test_sequences, test_labels, unit_span = generate_testing_sequences(normalized_test_data, sequence_length, features, target, unit_list)

# Checking sequences shapes:
print('train_sequences:', train_sequences.shape)
print('train_labels:', train_labels.shape)
print('test_sequences:', test_sequences.shape)
print('test_labels:', test_labels.shape)
Unit 1 test sequence ignored, not enough data points.
Unit 22 test sequence ignored, not enough data points.
Unit 39 test sequence ignored, not enough data points.
Unit 85 test sequence ignored, not enough data points.
train_sequences: (18631, 20, 17)
train_labels: (18631, 1)
test_sequences: (11035, 20, 17)
test_labels: (11035, 1)
MIT-0
nasa-turbofan-rul-lstm/notebooks/3 - Data preparation.ipynb
michaelhoarau/sagemaker-predictive-maintenance
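Conceptually, the sequence generation is a sliding window of length `sequence_length` over each engine's time series — which also explains the "not enough data points" messages above: a unit with fewer cycles than the window yields no sequence at all. A minimal sketch (`sliding_windows` is a hypothetical helper, not the actual `lstm_utils` implementation, which may differ in boundary handling):

```python
def sliding_windows(series, seq_len):
    # Overlapping windows of seq_len consecutive timesteps; empty when the unit is too short:
    return [series[i:i + seq_len] for i in range(len(series) - seq_len + 1)]

print(sliding_windows([0, 1, 2, 3, 4], 3))  # [[0, 1, 2], [1, 2, 3], [2, 3, 4]]
print(sliding_windows([0, 1], 3))           # [] -> this unit would be ignored
```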
Visualizing the sequences

Let's visualize the sequences we built for an engine (unit 1 in the example below) to understand what will be fed to the LSTM model:
from lstm_utils import plot_timestep, plot_text

# We use the normalized sequences for the plot but the original data for the label (RUL) for understanding purpose:
current_unit = 1
tmp_sequences = generate_training_sequences(normalized_train_data, sequence_length, features, [current_unit])
tmp_labels = generate_training_sequences(reduced_train_data, sequence_length, target, [current_unit])

# Initialize the graphics:
print('Sequences generated for unit {}:\n'.format(current_unit))
sns.set_style('white')
fig = plt.figure(figsize=(35,6))

# Initialize the loop:
nb_signals = min(12, len(tmp_sequences[0][0]))
plots_per_row = nb_signals + 3
nb_rows = 7
nb_cols = plots_per_row
current_row = 0
previous_index = -1
timesteps = [
    # We will plot the first 3 sequences (first timesteps fed to the LSTM model):
    0, 1, 2,
    # And the last 3 ones:
    len(tmp_sequences) - 3, len(tmp_sequences) - 2, len(tmp_sequences) - 1
]

# Loops through all the timesteps we want to plot:
for i in timesteps:
    # We draw a vertical ellipsis for the hole in the timesteps:
    if (i - previous_index > 1):
        plot_text(fig, nb_rows, nb_cols, nb_signals, current_row, '. . .', 1, no_axis=True, main_title='', plots_per_row=plots_per_row, options={'fontsize': 32, 'rotation': 270})
        current_row += 1

    # Timestep column:
    previous_index = i
    plot_text(fig, nb_rows, nb_cols, nb_signals, current_row, 'T{}'.format(i), 1, no_axis=True, main_title='Timestep', plots_per_row=plots_per_row, options={'fontsize': 16})

    # For a given timestep, we want to loop through all the signals to plot:
    plot_timestep(nb_rows, nb_cols, nb_signals, current_row, 2, tmp_sequences[i].T, features.tolist(), plots_per_row=plots_per_row)

    # Then we draw an ellipsis:
    plot_text(fig, nb_rows, nb_cols, nb_signals, current_row, '. . .', nb_signals + 2, no_axis=True, main_title='', plots_per_row=plots_per_row, options={'fontsize': 32})

    # Finally, we show the remaining useful life at the end of the row for this timestep:
    plot_text(fig, nb_rows, nb_cols, nb_signals, current_row, int(tmp_labels[i][0]), nb_signals + 3, no_axis=False, main_title='RUL', plots_per_row=plots_per_row, options={'fontsize': 16, 'color': '#CC0000'})
    current_row += 1
Sequences generated for unit 1:
MIT-0
nasa-turbofan-rul-lstm/notebooks/3 - Data preparation.ipynb
michaelhoarau/sagemaker-predictive-maintenance
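The `timesteps` list above selects only the edges of the sequence range — the first three and last three windows — and an ellipsis is drawn for the gap in between. The selection logic in isolation (`edge_timesteps` is a hypothetical helper for illustration):

```python
def edge_timesteps(n_sequences, k=3):
    # Indices of the first k and last k sequences, mirroring the plot above:
    return list(range(k)) + [n_sequences - k + j for j in range(k)]

print(edge_timesteps(192))  # [0, 1, 2, 189, 190, 191]
```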
Cleanup
---

Storing data for the next notebook
%store labels_scaler
%store test_sequences
%store test_labels
%store unit_span

#columns = normalized_train_data.columns.tolist()
#%store columns
#%store train_sequences
#%store train_labels
#%store normalized_train_data
#%store normalized_test_data
#%store sequence_length
Stored 'labels_scaler' (MinMaxScaler)
Stored 'test_sequences' (ndarray)
Stored 'test_labels' (ndarray)
Stored 'unit_span' (list)
MIT-0
nasa-turbofan-rul-lstm/notebooks/3 - Data preparation.ipynb
michaelhoarau/sagemaker-predictive-maintenance
Persisting these data to disk

This is useful in case you want to be able to execute each notebook independently (from one session to another) and don't want to re-execute every notebook whenever you want to focus on a particular step. Let's start by persisting train and test sequences and the associated labels:
import h5py as h5

train_data = os.path.join(PROCESSED_DATA, 'train.h5')
with h5.File(train_data, 'w') as ftrain:
    ftrain.create_dataset('train_sequences', data=train_sequences)
    ftrain.create_dataset('train_labels', data=train_labels)
    ftrain.close()

test_data = os.path.join(PROCESSED_DATA, 'test.h5')
with h5.File(test_data, 'w') as ftest:
    ftest.create_dataset('test_sequences', data=test_sequences)
    ftest.create_dataset('test_labels', data=test_labels)
    ftest.close()
_____no_output_____
MIT-0
nasa-turbofan-rul-lstm/notebooks/3 - Data preparation.ipynb
michaelhoarau/sagemaker-predictive-maintenance
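HDF5 is convenient for large numeric arrays, but the underlying idea — a write/read round-trip that survives across sessions — can be checked with a stdlib-only sketch using `pickle`, where plain nested lists stand in for the real ndarrays and the path is a throwaway demo location:

```python
import os
import pickle
import tempfile

train = {'train_sequences': [[0.1, 0.2], [0.3, 0.4]], 'train_labels': [[1.0], [0.0]]}
path = os.path.join(tempfile.gettempdir(), 'train_demo.pkl')  # throwaway path for the demo

# Persist, then restore from disk:
with open(path, 'wb') as f:
    pickle.dump(train, f)
with open(path, 'rb') as f:
    restored = pickle.load(f)

print(restored == train)  # True
```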
Pushing these files to S3:
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'nasa-rul-lstm/data'
train_data_location = 's3://{}/{}'.format(bucket, prefix)

s3_resource = boto3.Session().resource('s3')
s3_resource.Bucket(bucket).Object('{}/train/train.h5'.format(prefix)).upload_file(train_data)
s3_resource.Bucket(bucket).Object('{}/test/test.h5'.format(prefix)).upload_file(test_data)

# Build the data channel and write it to disk:
data_channels = {'train': 's3://{}/{}/train/train.h5'.format(bucket, prefix)}
with open(os.path.join(PROCESSED_DATA, 'data_channels.txt'), 'w') as f:
    f.write(str(data_channels))

%store data_channels
Stored 'data_channels' (dict)
MIT-0
nasa-turbofan-rul-lstm/notebooks/3 - Data preparation.ipynb
michaelhoarau/sagemaker-predictive-maintenance
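The data channel handed to SageMaker is just a dictionary mapping channel names to S3 URIs, so the URI assembly can be isolated and checked without touching AWS. A sketch with a hypothetical helper and a placeholder bucket name (the notebook uses the session's default bucket instead):

```python
def s3_uri(bucket, *key_parts):
    # Assemble an s3:// URI from a bucket name and key components:
    return 's3://' + '/'.join((bucket,) + key_parts)

bucket = 'sagemaker-demo-bucket'  # placeholder; not a real bucket
prefix = 'nasa-rul-lstm/data'
data_channels = {'train': s3_uri(bucket, prefix, 'train', 'train.h5')}
print(data_channels['train'])  # s3://sagemaker-demo-bucket/nasa-rul-lstm/data/train/train.h5
```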