Importing Packages

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
import os
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from imblearn.over_sampling import SMOTE
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay, classification_report, plot_roc_curve, accuracy_score
pd.set_option('display.max_columns',25)
warnings.filterwarnings('ignore')
# Importing Dataset
data = pd.read_csv(r'./credit_cards_dataset.csv')
data.head(10)
data.info()
# info() shows that there are no null values and all the features are numeric
data.describe(include='all') # Descriptive analysis
data.rename(columns={'PAY_0':'PAY_1','default.payment.next.month':'def_pay'},inplace=True)
# rename a few columns

(Source notebook: Model/99_kaggle_credit_card_analysis_and_prediction.ipynb from skawns0724/KOSA-Big-Data_Vision, MIT license.)
Exploratory Data Analysis

plt.figure(figsize=(10, 6))
data.groupby('def_pay')['AGE'].hist(legend=True)
plt.show()
# here we can see that most of the clients fall between ages 20 and 45
sns.distplot(data['AGE'])
plt.title('Age Distribution')
sns.boxplot(x='def_pay', y='LIMIT_BAL', data=data)
data[data['LIMIT_BAL']>700000].sort_values(ascending=False,by='LIMIT_BAL')
data[data['LIMIT_BAL']>700000].value_counts().sum()
plt.figure(figsize=(16,5))
plt.subplot(121)
sns.boxplot(x='SEX', y= 'AGE',data = data)
sns.stripplot(x='SEX', y= 'AGE',data = data,linewidth = 0.9)
plt.title ('Sex vs AGE')
plt.subplot(122)
ax = sns.countplot(x='EDUCATION',data = data, order= data['EDUCATION'].value_counts().index)
plt.title ('EDUCATION')
labels = data['EDUCATION'].value_counts()
for i, v in enumerate(labels):
    ax.text(i, v + 100, v, horizontalalignment='center')
plt.show()
plt.figure(figsize=(20,5))
plt.subplot(121)
sns.boxplot(x='def_pay', y= 'AGE',data = data)
sns.stripplot(x='def_pay', y= 'AGE',data = data,linewidth = 0.9)
plt.title ('Age vs def_pay')
ax2=plt.subplot(1,2,2)
pay_edu = data.groupby('EDUCATION')['def_pay'].value_counts(normalize=True).unstack()
pay_edu = pay_edu.sort_values(ascending=False,by=1)
pay_edu.plot(kind='bar',stacked= True,color=["#3f3e6fd1", "#85c6a9"], ax = ax2)
plt.legend(loc=(1.04,0))
plt.title('Education vs def_pay')
plt.show()
# function for Multivariate analysis
# This method is used to show point estimates and confidence intervals using scatter plot graphs
def plotfig(df1, col11, col22, deft1):
    plt.figure(figsize=(16, 6))
    plt.subplot(121)
    sns.pointplot(df1[col11], df1[deft1], hue=df1[col22])
    plt.subplot(122)
    sns.countplot(df1[col11], hue=df1[col22])
    plt.show()
def varplot(df2, col1, col2, deft, bins=3, unique=10):
    df = df2.copy()
    if len(df[col1].unique()) > unique:
        df[col1 + 'cut'] = pd.qcut(df[col1], bins)
        if len(df[col2].unique()) > unique:
            df[col2 + 'cut'] = pd.qcut(df[col2], bins)
            return plotfig(df, col1 + 'cut', col2 + 'cut', deft)
        else:
            df[col2 + 'cut'] = df[col2]
            return plotfig(df, col1 + 'cut', col2 + 'cut', deft)
    else:
        return plotfig(df, col1, col2, deft)
varplot(data,'AGE','SEX','def_pay',3)
varplot(data,'LIMIT_BAL','AGE','def_pay',3)
# Univariate Analysis
df = data.drop('ID', axis=1)
nuniq = df.nunique()
df = data[[col for col in df if nuniq[col]>1 and nuniq[col]<50]]
row, cols = df.shape
colnames = list(df)
graph_perrow = 5
graph_row = (cols + graph_perrow - 1) // graph_perrow
max_graph = 20
plt.figure(figsize=(graph_perrow * 12, graph_row * 8))
for i in range(min(cols, max_graph)):
    plt.subplot(graph_row, graph_perrow, i + 1)
    coldf = df.iloc[:, i]
    if not np.issubdtype(coldf.dtype, np.number):
        sns.countplot(colnames[i], data=df, order=df[colnames[i]].value_counts().index)
    else:
        coldf.hist()
    plt.title(colnames[i])
plt.show()
cont_var = df.select_dtypes(exclude='object').columns
nrow = (len(cont_var) + 5 - 1) // 5
plt.figure(figsize=(12 * 5, 6 * 2))
for i, j in enumerate(cont_var):
    plt.subplot(nrow, 5, i + 1)
    sns.distplot(data[j])
plt.show()
# From the above we can see that most clients are in the 20-30 age group, followed by 31-40.
# The number of clients who will default next month decreases as the age group increases,
# so AGE is an important feature for predicting next month's default.
plt.subplots(figsize=(26,20))
corr = data.corr()
sns.heatmap(corr,annot=True)
plt.show()
from statsmodels.stats.outliers_influence import variance_inflation_factor
df = data.drop(['def_pay', 'ID'], axis=1)
vif = pd.DataFrame()
vif['Features']= df.columns
vif['vif']= [variance_inflation_factor(df.values,i) for i in range(df.shape[1])]
vif
# From the heatmap and VIF we can see some multicollinearity in the data (VIF values > 10),
# which we can handle with simple feature engineering of a few columns
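The VIF cutoff used above follows from its definition, VIF_i = 1 / (1 - R_i^2), where R_i^2 comes from regressing column i on the other columns. A minimal numpy sketch on synthetic data (not the credit-card columns) shows why a near-duplicate column blows the value up:

```python
import numpy as np

# Synthetic design matrix: column 2 is almost a copy of column 0
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
X[:, 2] = X[:, 0] + 0.01 * rng.normal(size=500)

def vif(X, i):
    """VIF_i = 1 / (1 - R^2) from regressing column i on the others."""
    y = X[:, i]
    others = np.delete(X, i, axis=1)
    A = np.column_stack([others, np.ones(len(y))])  # add an intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r2 = 1 - resid.var() / y.var()
    return 1 / (1 - r2)

vifs = [vif(X, i) for i in range(3)]
# the collinear pair (columns 0 and 2) gets huge VIFs; column 1 stays near 1
```

statsmodels' `variance_inflation_factor` used below computes the same quantity per column.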
bill_cols = ['BILL_AMT%d' % i for i in range(1, 7)]
pay_cols = ['PAY_%d' % i for i in range(1, 7)]
pay_amt_cols = ['PAY_AMT%d' % i for i in range(1, 7)]
bill_tot = data[bill_cols].sum(axis=1).to_frame('bill_tot')
pay_tot = data[pay_cols].sum(axis=1).to_frame('pay_tot')
pay_amt_tot = data[pay_amt_cols].sum(axis=1).to_frame('pay_amt_tot')
frames=[bill_tot,pay_tot,pay_amt_tot,data['def_pay']]
tot = pd.concat(frames,axis=1)
plt.figure(figsize=(20,4))
plt.subplot(131)
sns.boxplot(x='def_pay',y='pay_tot',data = tot)
sns.stripplot(x='def_pay',y='pay_tot',data = tot,linewidth=1)
plt.subplot(132)
sns.boxplot(x='def_pay', y='bill_tot',data=tot)
sns.stripplot(x='def_pay', y='bill_tot',data=tot,linewidth=1)
plt.subplot(133)
sns.boxplot(x='def_pay', y='pay_amt_tot',data=tot)
sns.stripplot(x='def_pay', y='pay_amt_tot',data=tot,linewidth=1)
plt.show()
sns.pairplot(tot[['bill_tot','pay_amt_tot','pay_tot','def_pay']],hue='def_pay')
plt.show()
sns.violinplot(x=tot['def_pay'], y= tot['bill_tot'])
tot.drop('def_pay', axis=1, inplace=True)
data1 = pd.concat([data, tot], axis=1)
data1.groupby('def_pay')['EDUCATION'].hist(legend=True)
plt.show()
data1.groupby('def_pay')['AGE'].hist()
plt.figure(figsize=(12,6))
# The BILL_AMT columns are the most strongly correlated group, so we replace them with their total
df = pd.concat([bill_tot, df], axis=1)
df1 = df.drop(['BILL_AMT1', 'BILL_AMT2', 'BILL_AMT3', 'BILL_AMT4', 'BILL_AMT5', 'BILL_AMT6'], axis=1)
vif = pd.DataFrame()
vif['Features']= df1.columns
vif['vif']= [variance_inflation_factor(df1.values,i) for i in range(df1.shape[1])]
vif
# Above we can see that the data no longer shows multicollinearity (no VIF values > 10)
data2 = df1.copy()
# using the above plot we can create age bins
age_bins = [20, 27, 32, 37, 42, 48, 58, 64, 80]
age_labels = [8, 7, 6, 5, 4, 3, 2, 1]
data2['AGE'] = pd.cut(data2['AGE'], bins=age_bins, labels=age_labels)
data2 = pd.concat([data2, data['def_pay']], axis=1)
data2
data2.groupby('def_pay')['AGE'].hist()
plt.figure(figsize=(12,6))
sns.countplot(data2['AGE'])
data2.groupby('def_pay')['LIMIT_BAL'].hist(legend=True)
plt.show()
data2.columns

Model Creation

We know the dataset's target variable is imbalanced: you get a pretty high accuracy just by predicting the majority class, but you fail to capture the minority class, which is most often the point of creating the model in the first place. Hence we try several models to get the best results.

x = data2.drop(['def_pay'], axis=1)
y = data2['def_pay']
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.30, random_state=1)
sc = StandardScaler()
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)
# Accuracy is not the best metric for evaluating imbalanced datasets, as it can be misleading;
# hence we also use the classification report and confusion matrix
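Why accuracy misleads here can be seen with a tiny synthetic example; the 78/22 split below is illustrative, roughly matching the def_pay imbalance rather than the actual dataset counts:

```python
import numpy as np

rng = np.random.default_rng(1)
y = (rng.random(10_000) < 0.22).astype(int)  # ~22% "defaulters"

y_pred = np.zeros_like(y)        # always predict the majority class
accuracy = (y_pred == y).mean()  # looks strong, around 0.78...
recall = y_pred[y == 1].mean()   # ...but not a single defaulter is caught
```

A do-nothing classifier scores ~78% accuracy with zero recall on the class we actually care about, which is why the reports below focus on per-class precision and recall.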
# function for accuracy and confusion matrix
def res(y_test_valid, y_train_valid):
    cm_log = confusion_matrix(y_test, y_test_valid)
    ConfusionMatrixDisplay(cm_log).plot()
    print(classification_report(y_test, y_test_valid))
    print('train_accuracy:', accuracy_score(y_train, y_train_valid))
    print('test_accuracy:', accuracy_score(y_test, y_test_valid))
Logistic model

log_model = LogisticRegression()
log_model.fit(x_train,y_train)
y_pred_log = log_model.predict(x_test)
y_pred_train = log_model.predict(x_train)
res(y_pred_log,y_pred_train)
plot_roc_curve(log_model,x_test,y_test)
plt.show()
# logistic model using a custom probability threshold
threshold = 0.36
y_log_prob = log_model.predict_proba(x_test)
y_train_log_prob = log_model.predict_proba(x_train)
y_log_prob=y_log_prob[:,1]
y_train_log_prob= y_train_log_prob[:,1]
y_pred_log_prob = np.where(y_log_prob>threshold,1,0)
y_pred_log_prob_train = np.where(y_train_log_prob>threshold,1,0)
res(y_pred_log_prob, y_pred_log_prob_train)

              precision    recall  f1-score   support

           0       0.85      0.93      0.89      4663
           1       0.64      0.41      0.50      1337

    accuracy                           0.82      6000
   macro avg       0.74      0.67      0.69      6000
weighted avg       0.80      0.82      0.80      6000

train_accuracy: 0.8137083333333334
test_accuracy: 0.8158333333333333
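Lowering the threshold from the default 0.5 to 0.36 trades precision for recall. The trade-off can be swept explicitly; a sketch on hypothetical scores (not the fitted model's probabilities):

```python
import numpy as np

rng = np.random.default_rng(2)
y = (rng.random(2_000) < 0.25).astype(int)
u = rng.random(2_000)
# Hypothetical predicted probabilities: positives shifted upward on average
prob = np.where(y == 1, 0.2 + 0.6 * u, 0.6 * u)

recalls = []
for threshold in (0.2, 0.36, 0.5):
    pred = (prob > threshold).astype(int)
    recalls.append(pred[y == 1].mean())
# raising the threshold lowers recall (and typically raises precision)
```

In practice the threshold is chosen from the ROC or precision-recall curve on validation data, which is what the 0.36 above stands in for.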
Using a Decision Tree model

dec_model = DecisionTreeClassifier()
dec_model.fit(x_train,y_train)
y_pred_dec = dec_model.predict(x_test)
y_pred_dec_train = dec_model.predict(x_train)
res(y_pred_dec, y_pred_dec_train)

              precision    recall  f1-score   support

           0       0.82      0.81      0.81      4663
           1       0.37      0.40      0.38      1337

    accuracy                           0.71      6000
   macro avg       0.60      0.60      0.60      6000
weighted avg       0.72      0.71      0.72      6000

train_accuracy: 0.9977083333333333
test_accuracy: 0.7146666666666667
Hyperparameter tuning for the Decision Tree

parameters = {'max_depth': [1, 2, 3, 4, 5, 6], 'min_samples_split': [3, 4, 5, 6, 7], 'min_samples_leaf': [1, 2, 3, 4, 5, 6]}
tree = GridSearchCV(dec_model, parameters,cv=10)
tree.fit(x_train,y_train)
tree.best_params_
# Decision trees have high variance, which makes the model overfit; we can reduce this by "pruning",
# using the best parameters found by GridSearchCV
dec_model1 = DecisionTreeClassifier(max_depth=4,min_samples_split=10,min_samples_leaf=1)
dec_model1.fit(x_train,y_train)
y_pred_dec1 = dec_model1.predict(x_test)
y_pred_dec_train1 = dec_model1.predict(x_train)
res(y_pred_dec1, y_pred_dec_train1)

              precision    recall  f1-score   support

           0       0.84      0.95      0.89      4663
           1       0.68      0.35      0.46      1337

    accuracy                           0.82      6000
   macro avg       0.76      0.65      0.68      6000
weighted avg       0.80      0.82      0.79      6000

train_accuracy: 0.824375
test_accuracy: 0.8183333333333334
Random Forest Model

rf_model = RandomForestClassifier(n_estimators=200, criterion='entropy', max_features='log2', max_depth=15, random_state=42)
rf_model.fit(x_train,y_train)
y_pred_rf = rf_model.predict(x_test)
y_pred_rf_train = rf_model.predict(x_train)
#res(y_pred_rf,y_pred_rf_train)
from sklearn.metrics import confusion_matrix
import itertools
def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')
    print(cm)
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)
    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
cnf_matrix = confusion_matrix(y_test, y_pred_rf)
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['Non_Default','Default'], normalize=False,
title='Non Normalized confusion matrix')
from sklearn.metrics import recall_score
print("Recall score:" + str(recall_score(y_test, y_pred_rf)))

Recall score:0.359
Hyperparameter tuning again, for the Random Forest

parameters = {'n_estimators': [60, 70, 80], 'max_depth': [1, 2, 3, 4, 5, 6], 'min_samples_split': [3, 4, 5, 6, 7],
'min_samples_leaf':[1,2,3,4,5,6]}
clf = GridSearchCV(rf_model, parameters,cv=10)
clf.fit(x_train,y_train)
clf.best_params_
# {'max_depth': 5,
# 'min_samples_leaf': 4,
# 'min_samples_split': 3,
# 'n_estimators': 70}
# Decision trees frequently perform well on imbalanced data, so a Random Forest, which bags n trees, should be a better idea.
rf_model = RandomForestClassifier(n_estimators=80, max_depth=6, min_samples_leaf=2, min_samples_split=5)
rf_model.fit(x_train,y_train)
y_pred_rf = rf_model.predict(x_test)
y_pred_rf_train = rf_model.predict(x_train)
#res(y_pred_rf,y_pred_rf_train)
cnf_matrix = confusion_matrix(y_test, y_pred_rf)
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['Non_Default','Default'], normalize=False,
title='Non Normalized confusion matrix')
print("Recall score:" + str(recall_score(y_test, y_pred_rf)))

Recall score:0.347
KNN model

# finding the best K value
error = []
for i in range(1, 21, 2):
    knn = KNeighborsClassifier(n_neighbors=i)
    knn.fit(x_train, y_train)
    preds = knn.predict(x_test)
    error.append(np.mean(preds != y_test))
plt.plot(range(1, 21, 2), error, linestyle='dashed', marker='o', mfc='red')
# The elbow graph shows that k = 5 performs well, so we use k = 5
knn_model = KNeighborsClassifier(n_neighbors=5)
knn_model.fit(x_train,y_train)
y_pred_knn = knn_model.predict(x_test)
y_pred_knn_train = knn_model.predict(x_train)
res(y_pred_knn, y_pred_knn_train)

              precision    recall  f1-score   support

           0       0.83      0.92      0.87      4663
           1       0.55      0.34      0.42      1337

    accuracy                           0.79      6000
   macro avg       0.69      0.63      0.65      6000
weighted avg       0.77      0.79      0.77      6000

train_accuracy: 0.84325
test_accuracy: 0.792
SVM Model

# use penalized learning algorithms that increase the cost of classification mistakes on the minority class
svm_model = SVC(class_weight='balanced', probability=True)
svm_model.fit(x_train,y_train)
y_pred_svm = svm_model.predict(x_test)
y_pred_svm_train = svm_model.predict(x_train)
res(y_pred_svm,y_pred_svm_train)
# With SVM the recall on the target class is 0.56, the best we have predicted so far.
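`class_weight='balanced'` reweights mistakes by inverse class frequency, n_samples / (n_classes * n_c); a sketch of that computation with illustrative counts (not the dataset's exact numbers):

```python
import numpy as np

y = np.array([0] * 780 + [1] * 220)         # illustrative 78/22 split
classes, counts = np.unique(y, return_counts=True)
weights = len(y) / (len(classes) * counts)  # sklearn's 'balanced' heuristic
# minority-class errors now cost roughly 3.5x more than majority-class errors
```

This is why the penalized SVM recovers recall on the minority class without any resampling.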
Naive Bayes

nb_model = GaussianNB()
nb_model.fit(x_train,y_train)
y_pred_nb = nb_model.predict(x_test)
y_pred_nb_train = nb_model.predict(x_train)
res(y_pred_nb,y_pred_nb_train)
# Naive Bayes outperforms every other model on recall here, while its overall accuracy remains acceptable.
Boosting model: XGB Classifier

from xgboost import XGBClassifier
xgb_model = XGBClassifier()
xgb_model.fit(x_train, y_train)
xgb_y_predict = xgb_model.predict(x_test)
xgb_y_predict_train = xgb_model.predict(x_train)
res(xgb_y_predict,xgb_y_predict_train)
# Even the boosting technique gives low recall for our target class.
# From the above models we can conclude that the data imbalance plays a major part,
# so we try to fix that with resampling techniques.
Random under-sampling

Let's apply some of these resampling techniques using the Python library imbalanced-learn.

from collections import Counter
from imblearn.under_sampling import RandomUnderSampler
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import TomekLinks
x = data2.drop(['def_pay'], axis=1)
y = data2['def_pay']
rus = RandomUnderSampler(random_state=1)
x_rus, y_rus = rus.fit_resample(x,y)
print('original dataset shape:', Counter(y))
print('Resample dataset shape', Counter(y_rus))
x_train,x_test, y_train, y_test = train_test_split(x_rus,y_rus,test_size=0.20, random_state=1)
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)
# again we try to predict using Random Forest
rf_model_rus = RandomForestClassifier(n_estimators=70, max_depth=5, min_samples_leaf=4, min_samples_split=3,random_state=1)
rf_model_rus.fit(x_train,y_train)
y_pred_rf_rus = rf_model_rus.predict(x_test)
y_pred_rf_rus_train = rf_model_rus.predict(x_train)
res(y_pred_rf_rus, y_pred_rf_rus_train)

              precision    recall  f1-score   support

           0       0.70      0.84      0.76      1364
           1       0.78      0.62      0.69      1291

    accuracy                           0.73      2655
   macro avg       0.74      0.73      0.72      2655
weighted avg       0.74      0.73      0.73      2655

train_accuracy: 0.719318074785721
test_accuracy: 0.7295668549905838
Random over-sampling

x = data2.drop(['def_pay'], axis=1)
y = data2['def_pay']
ros = RandomOverSampler(random_state=42)
x_ros, y_ros = ros.fit_resample(x, y)
print('Original dataset shape', Counter(y))
print('Resample dataset shape', Counter(y_ros))
x_train,x_test, y_train, y_test = train_test_split(x_ros,y_ros,test_size=0.20, random_state=1)
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)
rf_model_ros = RandomForestClassifier(n_estimators=70, max_depth=5, min_samples_leaf=4, min_samples_split=3,random_state=1)
rf_model_ros.fit(x_train,y_train)
y_pred_rf_ros = rf_model_ros.predict(x_test)
y_pred_rf_ros_train = rf_model_ros.predict(x_train)
res(y_pred_rf_ros, y_pred_rf_ros_train)

              precision    recall  f1-score   support

           0       0.67      0.82      0.74      4607
           1       0.77      0.62      0.69      4739

    accuracy                           0.71      9346
   macro avg       0.72      0.72      0.71      9346
weighted avg       0.72      0.71      0.71      9346

train_accuracy: 0.7182066235086405
test_accuracy: 0.7138882944575219
Under-sampling: Tomek links

x = data2.drop(['def_pay'], axis=1)
y = data2['def_pay']
tl = TomekLinks(sampling_strategy='majority')
x_tl, y_tl = tl.fit_resample(x,y)
print('Original dataset shape', Counter(y))
print('Resample dataset shape', Counter(y_tl))
x_train,x_test, y_train, y_test = train_test_split(x_tl,y_tl,test_size=0.20, random_state=1)
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)
rf_model_tl = RandomForestClassifier(n_estimators=70, max_depth=5, min_samples_leaf=4, min_samples_split=3,random_state=1)
rf_model_tl.fit(x_train,y_train)
y_pred_rf_tl = rf_model_tl.predict(x_test)
y_pred_rf_tl_train = rf_model_tl.predict(x_train)
res(y_pred_rf_tl, y_pred_rf_tl_train)

              precision    recall  f1-score   support

           0       0.83      0.95      0.89      4286
           1       0.71      0.38      0.49      1337

    accuracy                           0.81      5623
   macro avg       0.77      0.66      0.69      5623
weighted avg       0.80      0.81      0.79      5623

train_accuracy: 0.8188689311755291
test_accuracy: 0.8143339854170372
Synthetic Minority Oversampling Technique (SMOTE)

from imblearn.over_sampling import SMOTE
smote = SMOTE()
x_smote, y_smote = smote.fit_resample(x, y)
print('Original dataset shape', Counter(y))
print('Resample dataset shape', Counter(y_smote))
x_train,x_test, y_train, y_test = train_test_split(x_smote,y_smote,test_size=0.20, random_state=1)
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)
x_train = pd.DataFrame(x_train).fillna(0)
x_test = pd.DataFrame(x_test).fillna(0)
rf_model_smote = RandomForestClassifier(n_estimators=70, max_depth=5, min_samples_leaf=4, min_samples_split=3,random_state=1)
rf_model_smote.fit(x_train,y_train)
y_pred_rf_smote = rf_model_smote.predict(x_test)
y_pred_rf_smote_train = rf_model_smote.predict(x_train)
res(y_pred_rf_smote, y_pred_rf_smote_train)

              precision    recall  f1-score   support

           0       0.80      0.87      0.84      4607
           1       0.87      0.79      0.83      4739

    accuracy                           0.83      9346
   macro avg       0.83      0.83      0.83      9346
weighted avg       0.83      0.83      0.83      9346

train_accuracy: 0.8365256005564176
test_accuracy: 0.831371709822384
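Unlike random oversampling, SMOTE does not copy minority rows: each synthetic row is a linear interpolation between a minority point and one of its minority-class neighbors. A sketch of that core step (not imblearn's full implementation, which also uses k-nearest neighbors and random neighbor choice):

```python
import numpy as np

rng = np.random.default_rng(3)
minority = rng.normal(size=(20, 2))   # toy minority-class points

i = 0
x = minority[i]
# nearest minority neighbor of x (excluding itself)
d = np.linalg.norm(minority - x, axis=1)
d[i] = np.inf
neighbor = minority[np.argmin(d)]

lam = rng.random()                    # interpolation factor in [0, 1)
synthetic = x + lam * (neighbor - x)  # lies on the segment between the two
```

Because the new points sit between existing minority samples, SMOTE densifies the minority region instead of duplicating rows, which is why it generalizes better than plain oversampling here.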
Sequence to Sequence Learning
:label:`sec_seq2seq`

As we have seen in :numref:`sec_machine_translation`, in machine translation both the input and output are variable-length sequences. To address this type of problem, we have designed a general encoder-decoder architecture in :numref:`sec_encoder-decoder`. In this section, we will use two RNNs to design the encoder and the decoder of this architecture and apply it to *sequence to sequence* learning for machine translation :cite:`Sutskever.Vinyals.Le.2014,Cho.Van-Merrienboer.Gulcehre.ea.2014`.

Following the design principle of the encoder-decoder architecture, the RNN encoder can take a variable-length sequence as the input and transform it into a fixed-shape hidden state. In other words, information of the input (source) sequence is *encoded* in the hidden state of the RNN encoder. To generate the output sequence token by token, a separate RNN decoder can predict the next token based on what tokens have been seen (such as in language modeling) or generated, together with the encoded information of the input sequence. :numref:`fig_seq2seq` illustrates how to use two RNNs for sequence to sequence learning in machine translation.

(Figure omitted: sequence to sequence learning with an RNN encoder and an RNN decoder.)
:label:`fig_seq2seq`

In :numref:`fig_seq2seq`, the special "<eos>" token marks the end of the sequence. The model can stop making predictions once this token is generated. At the initial time step of the RNN decoder, there are two special design decisions. First, the special beginning-of-sequence "<bos>" token is an input. Second, the final hidden state of the RNN encoder is used to initiate the hidden state of the decoder. In designs such as :cite:`Sutskever.Vinyals.Le.2014`, this is exactly how the encoded input sequence information is fed into the decoder for generating the output (target) sequence. In some other designs such as :cite:`Cho.Van-Merrienboer.Gulcehre.ea.2014`, the final hidden state of the encoder is also fed into the decoder as part of the inputs at every time step, as shown in :numref:`fig_seq2seq`.

Similar to the training of language models in :numref:`sec_language_model`, we can allow the labels to be the original output sequence, shifted by one token: "<bos>", "Ils", "regardent", "." $\rightarrow$ "Ils", "regardent", ".", "<eos>".

In the following, we will explain the design of :numref:`fig_seq2seq` in greater detail. We will train this model for machine translation on the English-French dataset as introduced in :numref:`sec_machine_translation`.

import collections
import math
from mxnet import autograd, gluon, init, np, npx
from mxnet.gluon import nn, rnn
from d2l import mxnet as d2l
npx.set_np()

(Source notebook: python/d2l-en/mxnet/chapter_recurrent-modern/seq2seq.ipynb from rtp-aws/devpost_aws_disaster_recovery, MIT license.)
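The decoder-input/label shift described in the introduction can be written out directly; a minimal illustration with the example sentence from the text:

```python
# The decoder input is the target sentence prefixed with <bos>; the label is
# the same sentence shifted left by one token and terminated with <eos>.
target = ['Ils', 'regardent', '.']
decoder_input = ['<bos>'] + target
label = target + ['<eos>']
# decoder_input: ['<bos>', 'Ils', 'regardent', '.']
# label:         ['Ils', 'regardent', '.', '<eos>']
```

At training time the decoder thus sees the ground-truth token at each step (teacher forcing) while being asked to predict the next one.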
Encoder

Technically speaking, the encoder transforms an input sequence of variable length into a fixed-shape *context variable* $\mathbf{c}$, and encodes the input sequence information in this context variable. As depicted in :numref:`fig_seq2seq`, we can use an RNN to design the encoder.

Let us consider a sequence example (batch size: 1). Suppose that the input sequence is $x_1, \ldots, x_T$, such that $x_t$ is the $t^{\mathrm{th}}$ token in the input text sequence. At time step $t$, the RNN transforms the input feature vector $\mathbf{x}_t$ for $x_t$ and the hidden state $\mathbf{h}_{t-1}$ from the previous time step into the current hidden state $\mathbf{h}_t$. We can use a function $f$ to express the transformation of the RNN's recurrent layer:

$$\mathbf{h}_t = f(\mathbf{x}_t, \mathbf{h}_{t-1}).$$

In general, the encoder transforms the hidden states at all the time steps into the context variable through a customized function $q$:

$$\mathbf{c} = q(\mathbf{h}_1, \ldots, \mathbf{h}_T).$$

For example, when choosing $q(\mathbf{h}_1, \ldots, \mathbf{h}_T) = \mathbf{h}_T$ such as in :numref:`fig_seq2seq`, the context variable is just the hidden state $\mathbf{h}_T$ of the input sequence at the final time step.

So far we have used a unidirectional RNN to design the encoder, where a hidden state only depends on the input subsequence at and before the time step of the hidden state. We can also construct encoders using bidirectional RNNs. In this case, a hidden state depends on the subsequence before and after the time step (including the input at the current time step), which encodes the information of the entire sequence.

Now let us [**implement the RNN encoder**]. Note that we use an *embedding layer* to obtain the feature vector for each token in the input sequence. The weight of an embedding layer is a matrix whose number of rows equals the size of the input vocabulary (`vocab_size`) and whose number of columns equals the feature vector's dimension (`embed_size`). For any input token index $i$, the embedding layer fetches the $i^{\mathrm{th}}$ row (starting from 0) of the weight matrix to return its feature vector. Besides, here we choose a multilayer GRU to implement the encoder.

#@save
class Seq2SeqEncoder(d2l.Encoder):
    """The RNN encoder for sequence to sequence learning."""
    def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
                 dropout=0, **kwargs):
        super(Seq2SeqEncoder, self).__init__(**kwargs)
        # Embedding layer
        self.embedding = nn.Embedding(vocab_size, embed_size)
        self.rnn = rnn.GRU(num_hiddens, num_layers, dropout=dropout)

    def forward(self, X, *args):
        # The output `X` shape: (`batch_size`, `num_steps`, `embed_size`)
        X = self.embedding(X)
        # In RNN models, the first axis corresponds to time steps
        X = X.swapaxes(0, 1)
        state = self.rnn.begin_state(batch_size=X.shape[1], ctx=X.ctx)
        output, state = self.rnn(X, state)
        # `output` shape: (`num_steps`, `batch_size`, `num_hiddens`)
        # `state[0]` shape: (`num_layers`, `batch_size`, `num_hiddens`)
        return output, state
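The embedding layer used above is just a row lookup into a (`vocab_size`, `embed_size`) weight matrix; a plain numpy sketch of that lookup:

```python
import numpy as np

vocab_size, embed_size = 10, 8
W = np.arange(vocab_size * embed_size, dtype=float).reshape(vocab_size, embed_size)

tokens = np.array([3, 0, 7])  # token indices for one sequence
vectors = W[tokens]           # shape (3, 8): the i-th row of W per index
```

`nn.Embedding` does exactly this lookup, except that `W` is a learned parameter updated by backpropagation.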
The returned variables of recurrent layers have been explained in :numref:`sec_rnn-concise`. Let us still use a concrete example to [**illustrate the above encoder implementation.**] Below we instantiate a two-layer GRU encoder whose number of hidden units is 16. Given a minibatch of sequence inputs `X` (batch size: 4, number of time steps: 7), the hidden states of the last layer at all the time steps (`output` returned by the encoder's recurrent layers) are a tensor of shape (number of time steps, batch size, number of hidden units).

encoder = Seq2SeqEncoder(vocab_size=10, embed_size=8, num_hiddens=16,
num_layers=2)
encoder.initialize()
X = np.zeros((4, 7))
output, state = encoder(X)
output.shape
Since a GRU is employed here, the shape of the multilayer hidden states at the final time step is (number of hidden layers, batch size, number of hidden units). If an LSTM is used, memory cell information will also be contained in `state`.

len(state), state[0].shape
[**Decoder**]
:label:`sec_seq2seq_decoder`

As we just mentioned, the context variable $\mathbf{c}$ of the encoder's output encodes the entire input sequence $x_1, \ldots, x_T$. Given the output sequence $y_1, y_2, \ldots, y_{T'}$ from the training dataset, for each time step $t'$ (the symbol differs from the time step $t$ of input sequences or encoders), the probability of the decoder output $y_{t'}$ is conditional on the previous output subsequence $y_1, \ldots, y_{t'-1}$ and the context variable $\mathbf{c}$, i.e., $P(y_{t'} \mid y_1, \ldots, y_{t'-1}, \mathbf{c})$.

To model this conditional probability on sequences, we can use another RNN as the decoder. At any time step $t^\prime$ on the output sequence, the RNN takes the output $y_{t^\prime-1}$ from the previous time step and the context variable $\mathbf{c}$ as its input, then transforms them and the previous hidden state $\mathbf{s}_{t^\prime-1}$ into the hidden state $\mathbf{s}_{t^\prime}$ at the current time step. As a result, we can use a function $g$ to express the transformation of the decoder's hidden layer:

$$\mathbf{s}_{t^\prime} = g(y_{t^\prime-1}, \mathbf{c}, \mathbf{s}_{t^\prime-1}).$$
:eqlabel:`eq_seq2seq_s_t`

After obtaining the hidden state of the decoder, we can use an output layer and the softmax operation to compute the conditional probability distribution $P(y_{t^\prime} \mid y_1, \ldots, y_{t^\prime-1}, \mathbf{c})$ for the output at time step $t^\prime$.

Following :numref:`fig_seq2seq`, when implementing the decoder as follows, we directly use the hidden state at the final time step of the encoder to initialize the hidden state of the decoder. This requires that the RNN encoder and the RNN decoder have the same number of layers and hidden units. To further incorporate the encoded input sequence information, the context variable is concatenated with the decoder input at all the time steps. To predict the probability distribution of the output token, a fully-connected layer is used to transform the hidden state at the final layer of the RNN decoder.

class Seq2SeqDecoder(d2l.Decoder):
    """The RNN decoder for sequence to sequence learning."""
    def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
                 dropout=0, **kwargs):
        super(Seq2SeqDecoder, self).__init__(**kwargs)
        self.embedding = nn.Embedding(vocab_size, embed_size)
        self.rnn = rnn.GRU(num_hiddens, num_layers, dropout=dropout)
        self.dense = nn.Dense(vocab_size, flatten=False)

    def init_state(self, enc_outputs, *args):
        return enc_outputs[1]

    def forward(self, X, state):
        # The output `X` shape: (`num_steps`, `batch_size`, `embed_size`)
        X = self.embedding(X).swapaxes(0, 1)
        # `context` shape: (`batch_size`, `num_hiddens`)
        context = state[0][-1]
        # Broadcast `context` so it has the same `num_steps` as `X`
        context = np.broadcast_to(context, (
            X.shape[0], context.shape[0], context.shape[1]))
        X_and_context = np.concatenate((X, context), 2)
        output, state = self.rnn(X_and_context, state)
        output = self.dense(output).swapaxes(0, 1)
        # `output` shape: (`batch_size`, `num_steps`, `vocab_size`)
        # `state[0]` shape: (`num_layers`, `batch_size`, `num_hiddens`)
        return output, state
To [**illustrate the implemented decoder**], below we instantiate it with the same hyperparameters from the aforementioned encoder. As we can see, the output shape of the decoder becomes (batch size, number of time steps, vocabulary size), where the last dimension of the tensor stores the predicted token distribution.

decoder = Seq2SeqDecoder(vocab_size=10, embed_size=8, num_hiddens=16,
num_layers=2)
decoder.initialize()
state = decoder.init_state(encoder(X))
output, state = decoder(X, state)
output.shape, len(state), state[0].shape
To summarize, the layers in the above RNN encoder-decoder model are illustrated in :numref:`fig_seq2seq_details`.

(Figure omitted: layers in an RNN encoder-decoder model.)
:label:`fig_seq2seq_details`

Loss Function

At each time step, the decoder predicts a probability distribution for the output tokens. Similar to language modeling, we can apply softmax to obtain the distribution and calculate the cross-entropy loss for optimization. Recall from :numref:`sec_machine_translation` that the special padding tokens are appended to the end of sequences so sequences of varying lengths can be efficiently loaded in minibatches of the same shape. However, prediction of padding tokens should be excluded from loss calculations.

To this end, we can use the following `sequence_mask` function to [**mask irrelevant entries with zero values**] so later multiplication of any irrelevant prediction with zero equals zero. For example, if the valid lengths of two sequences excluding padding tokens are one and two, respectively, the remaining entries after the first one and the first two entries are cleared to zeros.

X = np.array([[1, 2, 3], [4, 5, 6]])
npx.sequence_mask(X, np.array([1, 2]), True, axis=1) | _____no_output_____ | MIT | python/d2l-en/mxnet/chapter_recurrent-modern/seq2seq.ipynb | rtp-aws/devpost_aws_disaster_recovery |
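For readers without MXNet at hand, the masking semantics of `npx.sequence_mask` along axis 1 can be sketched in plain NumPy (a minimal illustration, not the MXNet implementation):

```python
import numpy as np

def sequence_mask_np(X, valid_len, value=0):
    # Fill entries beyond each row's valid length (along axis 1) with `value`
    mask = np.arange(X.shape[1])[None, :] < np.asarray(valid_len)[:, None]
    out = X.copy()
    out[~mask] = value
    return out

X = np.array([[1, 2, 3], [4, 5, 6]])
print(sequence_mask_np(X, [1, 2]))  # [[1 0 0], [4 5 0]]
```

The same boolean mask broadcasts over trailing axes, which is how the 3D example below also works.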
(**We can also mask all the entries across the last few axes.**) If you like, you may even specify to replace such entries with a non-zero value. | X = np.ones((2, 3, 4))
npx.sequence_mask(X, np.array([1, 2]), True, value=-1, axis=1) | _____no_output_____ | MIT | python/d2l-en/mxnet/chapter_recurrent-modern/seq2seq.ipynb | rtp-aws/devpost_aws_disaster_recovery |
Now we can [**extend the softmax cross-entropy loss to allow the masking of irrelevant predictions.**] Initially, masks for all the predicted tokens are set to one. Once the valid length is given, the mask corresponding to any padding token will be cleared to zero. In the end, the loss for all the tokens will be multiplied by the mask to filter out irrelevant predictions of padding tokens. | #@save
class MaskedSoftmaxCELoss(gluon.loss.SoftmaxCELoss):
"""The softmax cross-entropy loss with masks."""
# `pred` shape: (`batch_size`, `num_steps`, `vocab_size`)
# `label` shape: (`batch_size`, `num_steps`)
# `valid_len` shape: (`batch_size`,)
def forward(self, pred, label, valid_len):
# `weights` shape: (`batch_size`, `num_steps`, 1)
weights = np.expand_dims(np.ones_like(label), axis=-1)
weights = npx.sequence_mask(weights, valid_len, True, axis=1)
return super(MaskedSoftmaxCELoss, self).forward(pred, label, weights) | _____no_output_____ | MIT | python/d2l-en/mxnet/chapter_recurrent-modern/seq2seq.ipynb | rtp-aws/devpost_aws_disaster_recovery |
For [**a sanity check**], we can create three identical sequences. Then we can specify that the valid lengths of these sequences are 4, 2, and 0, respectively. As a result, the loss of the first sequence should be twice as large as that of the second sequence, while the third sequence should have a zero loss. | loss = MaskedSoftmaxCELoss()
loss(np.ones((3, 4, 10)), np.ones((3, 4)), np.array([4, 2, 0])) | _____no_output_____ | MIT | python/d2l-en/mxnet/chapter_recurrent-modern/seq2seq.ipynb | rtp-aws/devpost_aws_disaster_recovery |
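A hedged, NumPy-only sketch of what `MaskedSoftmaxCELoss` computes — assuming the loss averages the masked per-token cross-entropy over `num_steps` — reproduces the expected sanity-check values:

```python
import numpy as np

def masked_softmax_ce(pred, label, valid_len):
    # pred: (batch, steps, vocab); label: (batch, steps); valid_len: (batch,)
    batch, steps, vocab = pred.shape
    shifted = pred - pred.max(axis=-1, keepdims=True)          # numerically stable softmax
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    ce = -np.take_along_axis(log_probs, label[..., None].astype(int), axis=-1).squeeze(-1)
    mask = np.arange(steps)[None, :] < valid_len[:, None]      # zero out padded steps
    return (ce * mask).mean(axis=1)

out = masked_softmax_ce(np.ones((3, 4, 10)), np.ones((3, 4)), np.array([4, 2, 0]))
print(np.round(out, 4))  # [2.3026 1.1513 0.    ]
```

With uniform logits every token costs log(10), so valid lengths 4, 2, 0 yield losses in the ratio 2:1:0, matching the sanity check in the text.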
[**Training**]:label:`sec_seq2seq_training` In the following training loop, we concatenate the special beginning-of-sequence token and the original output sequence excluding the final token as the input to the decoder, as shown in :numref:`fig_seq2seq`. This is called *teacher forcing* because the original output sequence (token labels) is fed into the decoder. Alternatively, we could also feed the *predicted* token from the previous time step as the current input to the decoder. | #@save
def train_seq2seq(net, data_iter, lr, num_epochs, tgt_vocab, device):
"""Train a model for sequence to sequence."""
net.initialize(init.Xavier(), force_reinit=True, ctx=device)
trainer = gluon.Trainer(net.collect_params(), 'adam',
{'learning_rate': lr})
loss = MaskedSoftmaxCELoss()
animator = d2l.Animator(xlabel='epoch', ylabel='loss',
xlim=[10, num_epochs])
for epoch in range(num_epochs):
timer = d2l.Timer()
metric = d2l.Accumulator(2) # Sum of training loss, no. of tokens
for batch in data_iter:
X, X_valid_len, Y, Y_valid_len = [
x.as_in_ctx(device) for x in batch]
bos = np.array(
[tgt_vocab['<bos>']] * Y.shape[0], ctx=device).reshape(-1, 1)
dec_input = np.concatenate([bos, Y[:, :-1]], 1) # Teacher forcing
with autograd.record():
Y_hat, _ = net(X, dec_input, X_valid_len)
l = loss(Y_hat, Y, Y_valid_len)
l.backward()
d2l.grad_clipping(net, 1)
num_tokens = Y_valid_len.sum()
trainer.step(num_tokens)
metric.add(l.sum(), num_tokens)
if (epoch + 1) % 10 == 0:
animator.add(epoch + 1, (metric[0] / metric[1],))
print(f'loss {metric[0] / metric[1]:.3f}, {metric[1] / timer.stop():.1f} '
f'tokens/sec on {str(device)}') | _____no_output_____ | MIT | python/d2l-en/mxnet/chapter_recurrent-modern/seq2seq.ipynb | rtp-aws/devpost_aws_disaster_recovery |
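The teacher-forcing input construction used above (`dec_input = np.concatenate([bos, Y[:, :-1]], 1)`) can be seen on a toy batch; the token ids below (1 = `<bos>`, 2 = `<eos>`, 0 = `<pad>`) are assumptions for illustration only:

```python
import numpy as np

Y = np.array([[3, 7, 9, 2],
              [5, 6, 2, 0]])               # target sequences (hypothetical ids)
bos = np.full((Y.shape[0], 1), 1)          # prepend <bos> to every sequence
dec_input = np.concatenate([bos, Y[:, :-1]], axis=1)  # shift right, drop last token
print(dec_input)
# [[1 3 7 9]
#  [1 5 6 2]]
```

At each step the decoder therefore sees the ground-truth token that *precedes* the one it must predict.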
Now we can [**create and train an RNN encoder-decoder model**] for sequence-to-sequence learning on the machine translation dataset. | embed_size, num_hiddens, num_layers, dropout = 32, 32, 2, 0.1
batch_size, num_steps = 64, 10
lr, num_epochs, device = 0.005, 300, d2l.try_gpu()
train_iter, src_vocab, tgt_vocab = d2l.load_data_nmt(batch_size, num_steps)
encoder = Seq2SeqEncoder(
len(src_vocab), embed_size, num_hiddens, num_layers, dropout)
decoder = Seq2SeqDecoder(
len(tgt_vocab), embed_size, num_hiddens, num_layers, dropout)
net = d2l.EncoderDecoder(encoder, decoder)
train_seq2seq(net, train_iter, lr, num_epochs, tgt_vocab, device) | loss 0.023, 5661.3 tokens/sec on gpu(0)
| MIT | python/d2l-en/mxnet/chapter_recurrent-modern/seq2seq.ipynb | rtp-aws/devpost_aws_disaster_recovery |
[**Prediction**] To predict the output sequence token by token, at each decoder time step the predicted token from the previous time step is fed into the decoder as an input. Similar to training, at the initial time step the beginning-of-sequence ("<bos>") token is fed into the decoder. This prediction process is illustrated in :numref:`fig_seq2seq_predict`. When the end-of-sequence ("<eos>") token is predicted, the prediction of the output sequence is complete.:label:`fig_seq2seq_predict` We will introduce different strategies for sequence generation in :numref:`sec_beam-search`. | #@save
def predict_seq2seq(net, src_sentence, src_vocab, tgt_vocab, num_steps,
device, save_attention_weights=False):
"""Predict for sequence to sequence."""
src_tokens = src_vocab[src_sentence.lower().split(' ')] + [
src_vocab['<eos>']]
enc_valid_len = np.array([len(src_tokens)], ctx=device)
src_tokens = d2l.truncate_pad(src_tokens, num_steps, src_vocab['<pad>'])
# Add the batch axis
enc_X = np.expand_dims(np.array(src_tokens, ctx=device), axis=0)
enc_outputs = net.encoder(enc_X, enc_valid_len)
dec_state = net.decoder.init_state(enc_outputs, enc_valid_len)
# Add the batch axis
dec_X = np.expand_dims(np.array([tgt_vocab['<bos>']], ctx=device), axis=0)
output_seq, attention_weight_seq = [], []
for _ in range(num_steps):
Y, dec_state = net.decoder(dec_X, dec_state)
# We use the token with the highest prediction likelihood as the input
# of the decoder at the next time step
dec_X = Y.argmax(axis=2)
pred = dec_X.squeeze(axis=0).astype('int32').item()
# Save attention weights (to be covered later)
if save_attention_weights:
attention_weight_seq.append(net.decoder.attention_weights)
# Once the end-of-sequence token is predicted, the generation of the
# output sequence is complete
if pred == tgt_vocab['<eos>']:
break
output_seq.append(pred)
return ' '.join(tgt_vocab.to_tokens(output_seq)), attention_weight_seq | _____no_output_____ | MIT | python/d2l-en/mxnet/chapter_recurrent-modern/seq2seq.ipynb | rtp-aws/devpost_aws_disaster_recovery |
Evaluation of Predicted Sequences We can evaluate a predicted sequence by comparing it with the label sequence (the ground truth). BLEU (Bilingual Evaluation Understudy), though originally proposed for evaluating machine translation results :cite:`Papineni.Roukos.Ward.ea.2002`, has been extensively used in measuring the quality of output sequences for different applications. In principle, for any $n$-gram in the predicted sequence, BLEU evaluates whether this $n$-gram appears in the label sequence. Denote by $p_n$ the precision of $n$-grams, which is the ratio of the number of matched $n$-grams in the predicted and label sequences to the number of $n$-grams in the predicted sequence. To explain, given a label sequence $A$, $B$, $C$, $D$, $E$, $F$, and a predicted sequence $A$, $B$, $B$, $C$, $D$, we have $p_1 = 4/5$, $p_2 = 3/4$, $p_3 = 1/3$, and $p_4 = 0$. Besides, let $\mathrm{len}_{\text{label}}$ and $\mathrm{len}_{\text{pred}}$ be the numbers of tokens in the label sequence and the predicted sequence, respectively. Then, BLEU is defined as $$ \exp\left(\min\left(0, 1 - \frac{\mathrm{len}_{\text{label}}}{\mathrm{len}_{\text{pred}}}\right)\right) \prod_{n=1}^k p_n^{1/2^n},$$ :eqlabel:`eq_bleu` where $k$ is the longest $n$-gram length used for matching. Based on the definition of BLEU in :eqref:`eq_bleu`, whenever the predicted sequence is the same as the label sequence, BLEU is 1. Moreover, since matching longer $n$-grams is more difficult, BLEU assigns a greater weight to a longer $n$-gram precision. Specifically, when $p_n$ is fixed, $p_n^{1/2^n}$ increases as $n$ grows (the original paper uses $p_n^{1/n}$). Furthermore, since predicting shorter sequences tends to obtain a higher $p_n$ value, the coefficient before the multiplication term in :eqref:`eq_bleu` penalizes shorter predicted sequences. For example, when $k=2$, given the label sequence $A$, $B$, $C$, $D$, $E$, $F$ and the predicted sequence $A$, $B$, although $p_1 = p_2 = 1$, the penalty factor $\exp(1-6/2) \approx 0.14$ lowers the BLEU. We [**implement the BLEU measure**] as follows. | def bleu(pred_seq, label_seq, k): #@save
"""Compute the BLEU."""
pred_tokens, label_tokens = pred_seq.split(' '), label_seq.split(' ')
len_pred, len_label = len(pred_tokens), len(label_tokens)
score = math.exp(min(0, 1 - len_label / len_pred))
for n in range(1, k + 1):
num_matches, label_subs = 0, collections.defaultdict(int)
for i in range(len_label - n + 1):
label_subs[' '.join(label_tokens[i: i + n])] += 1
for i in range(len_pred - n + 1):
if label_subs[' '.join(pred_tokens[i: i + n])] > 0:
num_matches += 1
label_subs[' '.join(pred_tokens[i: i + n])] -= 1
score *= math.pow(num_matches / (len_pred - n + 1), math.pow(0.5, n))
return score | _____no_output_____ | MIT | python/d2l-en/mxnet/chapter_recurrent-modern/seq2seq.ipynb | rtp-aws/devpost_aws_disaster_recovery |
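To connect the implementation with the worked example in the text (label $A,B,C,D,E,F$, prediction $A,B,B,C,D$, $k=2$), the following standalone copy of the function (reproduced here so the check is self-contained) recovers $p_1 = 4/5$, $p_2 = 3/4$ and the brevity penalty $\exp(1 - 6/5)$:

```python
import collections, math

def bleu_np(pred_seq, label_seq, k):
    # Standalone copy of the bleu() function above, for a quick numeric check
    pred, label = pred_seq.split(' '), label_seq.split(' ')
    score = math.exp(min(0, 1 - len(label) / len(pred)))  # brevity penalty
    for n in range(1, k + 1):
        matches, subs = 0, collections.defaultdict(int)
        for i in range(len(label) - n + 1):
            subs[' '.join(label[i:i + n])] += 1
        for i in range(len(pred) - n + 1):
            if subs[' '.join(pred[i:i + n])] > 0:
                matches += 1
                subs[' '.join(pred[i:i + n])] -= 1
        score *= (matches / (len(pred) - n + 1)) ** (0.5 ** n)
    return score

# exp(1 - 6/5) * (4/5)^(1/2) * (3/4)^(1/4)
s = bleu_np('A B B C D', 'A B C D E F', k=2)
print(round(s, 4))  # 0.6815
```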
In the end, we use the trained RNN encoder-decoder to [**translate a few English sentences into French**] and compute the BLEU of the results. | engs = ['go .', "i lost .", 'he\'s calm .', 'i\'m home .']
fras = ['va !', 'j\'ai perdu .', 'il est calme .', 'je suis chez moi .']
for eng, fra in zip(engs, fras):
translation, attention_weight_seq = predict_seq2seq(
net, eng, src_vocab, tgt_vocab, num_steps, device)
print(f'{eng} => {translation}, bleu {bleu(translation, fra, k=2):.3f}') | go . => va !, bleu 1.000
i lost . => j'ai perdu ., bleu 1.000
he's calm . => il est malade ., bleu 0.658
i'm home . => je suis fainéante ?, bleu 0.418
| MIT | python/d2l-en/mxnet/chapter_recurrent-modern/seq2seq.ipynb | rtp-aws/devpost_aws_disaster_recovery |
Seaborn In Action Seaborn is a data visualization library that is based on **Matplotlib**. It is tightly integrated with the Pandas library and provides a high-level interface for making attractive and informative statistical graphics in Python. This notebook introduces the basic and essential functions of the seaborn library. Let's go ahead and import the relevant libraries for this tutorial. | import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set()
?sns.relplot | _____no_output_____ | Apache-2.0 | Day 4/Seaborn In Action.ipynb | Abuton/10-Academy-5days-Challenge |
Loading the Data and Inspection | cs = pd.read_csv('data/c_scores.csv')
cs
cs.sample(5)
cs.info() | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000 entries, 0 to 999
Data columns (total 21 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 checking_status 979 non-null object
1 duration 1000 non-null int64
2 credit_history 999 non-null object
3 purpose 1000 non-null object
4 credit_amount 1000 non-null int64
5 savings_status 967 non-null object
6 employment 968 non-null object
7 installment_commitment 1000 non-null int64
8 personal_status 997 non-null object
9 other_parties 998 non-null object
10 residence_since 1000 non-null int64
11 property_magnitude 999 non-null object
12 age 1000 non-null int64
13 other_payment_plans 996 non-null object
14 housing 998 non-null object
15 existing_credits 999 non-null float64
16 job 999 non-null object
17 num_dependents 1000 non-null int64
18 own_telephone 1000 non-null object
19 foreign_worker 999 non-null object
20 class 999 non-null object
dtypes: float64(1), int64(6), object(14)
memory usage: 164.2+ KB
| Apache-2.0 | Day 4/Seaborn In Action.ipynb | Abuton/10-Academy-5days-Challenge |
Scatter Plots We shall plot the **age** and **credit_amount** columns using the **jointplot** function. | sns.jointplot(x='age', y='credit_amount', data=cs) | _____no_output_____ | Apache-2.0 | Day 4/Seaborn In Action.ipynb | Abuton/10-Academy-5days-Challenge |
Let's plot the **age** and **credit_amount** again but this time let's break that down by **job**. For this, we shall use the **relplot()**. This function provides access to several axes-level functions that show the relationship between two variables and which can also come with semantic mappings. It also comes with the **kind** parameter which can be used to specify whether you want a **lineplot** or **scatterplot**. The default is **scatterplot**. The visualization below shows the relation between credit amount given to people and their ages. In addition, I am comparing it over the kind of job. Seaborn uses coloring to show which of the points represent what kind of job. The **height** and **aspect** parameters are used to adjust the height and width of the FacetGrid. The **hue** parameter helps to group variables that will produce elements with different colors. The **data** parameter represents the dataset of interest. | sns.relplot(x="age", y="credit_amount", height= 8, aspect=1, hue="job", data=cs) | _____no_output_____ | Apache-2.0 | Day 4/Seaborn In Action.ipynb | Abuton/10-Academy-5days-Challenge |
We can also plot the above visualization where we compare what it looks like over two or more categorical elements. For example, in the below visualization, we shall compare the above visualization over **class** using the **col** parameter in the **relplot()** function. | sns.relplot(x="age", y="credit_amount", height= 8, aspect=1, hue="job", col='class',data=cs) | _____no_output_____ | Apache-2.0 | Day 4/Seaborn In Action.ipynb | Abuton/10-Academy-5days-Challenge |
Boxplots A boxplot is used to show the distribution of numerical variables and facilitates comparisons across multiple categorical variables. We would like to visualize the distribution of **age** of the customers with respect to **class**. | sns.boxplot(x='class', y='age', data=cs) | _____no_output_____ | Apache-2.0 | Day 4/Seaborn In Action.ipynb | Abuton/10-Academy-5days-Challenge |
Let's visualize the distribution of **credit_amount** with respect to **purpose** using **class** as the **hue** | fig, ax = plt.subplots(figsize=(18,7))
sns.boxplot(x='purpose', y='credit_amount', hue='class', ax = ax, data=cs) | _____no_output_____ | Apache-2.0 | Day 4/Seaborn In Action.ipynb | Abuton/10-Academy-5days-Challenge |
Histogram A histogram represents the distribution of data by forming bins along the range of the data and drawing lines to represent the number of observations that fall within each bin. Let's plot the histogram of **credit_amount**. | sns.distplot(cs['credit_amount']) | _____no_output_____ | Apache-2.0 | Day 4/Seaborn In Action.ipynb | Abuton/10-Academy-5days-Challenge |
Let's plot the histogram of the **age** | sns.distplot(cs['age']) | _____no_output_____ | Apache-2.0 | Day 4/Seaborn In Action.ipynb | Abuton/10-Academy-5days-Challenge |
Let's get the histogram of the **credit_amount** of the customers, this time across the **class** dimension as a faceted histogram. | facet = sns.FacetGrid(cs, height=6, col='class')
facet = facet.map(sns.distplot, 'credit_amount', color='r') | _____no_output_____ | Apache-2.0 | Day 4/Seaborn In Action.ipynb | Abuton/10-Academy-5days-Challenge |
It will, however, be a fantastic idea to compare the distribution of **credit_amount** across **class** overlaid on the same plot. | facet = sns.FacetGrid(cs, height=6, hue='class')
facet = facet.map(sns.distplot, 'credit_amount') | _____no_output_____ | Apache-2.0 | Day 4/Seaborn In Action.ipynb | Abuton/10-Academy-5days-Challenge |
Line Plots To make meaningful line plots, we are going to generate a dataframe that will help us understand line plots. We will generate 36 year-end dates starting from 1970 (one every 12 months). We will then select the first 36 rows of the **duration** and **age** columns to form our new dataframe. | new_series = pd.DataFrame({'time': pd.date_range('1970-12-31', periods=36, freq='12M'),
'duration': cs['duration'].iloc[0:36],
'age': cs['age'].iloc[0:36]})
new_series.head() | _____no_output_____ | Apache-2.0 | Day 4/Seaborn In Action.ipynb | Abuton/10-Academy-5days-Challenge |
Next, we are going to move the **duration** and the **age** columns to rows so that we can plot both on the graph. We are going to do that using the pandas **melt()** method. The **melt()** method allows us to unpivot a dataframe from a wide to a long format, optionally leaving the identifiers set. It takes in the dataframe you want to unpivot, the **id_vars**, identifier variable (could be a single column or list of columns), the **var_name**, variable name (for the variables that are going to be unpivoted), and the **value_name**, the name of the value column. | series = pd.melt(new_series, id_vars=['time'],
var_name='Variables',
value_name='values')
series.sample(10)
lp = sns.lineplot(x='time', y='values', hue='Variables', data=series)
#Position the legend out the graph
lp.legend(bbox_to_anchor=(1.02, 1),
loc=2,
borderaxespad=0.0);
lp.set(title='Line plot of Duration and Age', xlabel='Year', ylabel='Values') | _____no_output_____ | Apache-2.0 | Day 4/Seaborn In Action.ipynb | Abuton/10-Academy-5days-Challenge |
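The reshaping that `melt()` performs can be checked on a toy frame (hypothetical data, mirroring the call above):

```python
import pandas as pd

df = pd.DataFrame({'time': [1, 2], 'duration': [10, 20], 'age': [30, 40]})
long_df = pd.melt(df, id_vars=['time'], var_name='Variables', value_name='values')
print(long_df)
#    time Variables  values
# 0     1  duration      10
# 1     2  duration      20
# 2     1       age      30
# 3     2       age      40
```

Each non-id column is stacked into the `Variables`/`values` pair, which is exactly the long format `sns.lineplot` expects for the `hue` grouping.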
Regression Plot For regression plotting, we are going to use **lmplot()**. This function combines **regplot()** and FacetGrid. It is intended as a convenient interface to fit regression models across subsets of a dataset. We will use the famous iris flower dataset for the regression plot. It is available in the seaborn module. | iris = sns.load_dataset('iris')
iris.sample(8) | _____no_output_____ | Apache-2.0 | Day 4/Seaborn In Action.ipynb | Abuton/10-Academy-5days-Challenge |
Let's plot **petal_length** vs **petal_width** only | g = sns.lmplot(x='petal_length', y='petal_width', order=1, data=iris)
g.set_axis_labels("Petal Length(mm)", "Petal Width(mm)" ) | _____no_output_____ | Apache-2.0 | Day 4/Seaborn In Action.ipynb | Abuton/10-Academy-5days-Challenge |
Using the species, let's break the regression line with respect to the species and fit a first-order regression to each species' respective data points. | g = sns.lmplot(x='petal_length', y='petal_width', hue='species',height=8, order=1, data=iris)
g.set_axis_labels("Petal Length(mm)", "Petal Width(mm)" ) | _____no_output_____ | Apache-2.0 | Day 4/Seaborn In Action.ipynb | Abuton/10-Academy-5days-Challenge |
Now, let's use **species** as the **col** (column) parameter | g = sns.lmplot(x='petal_length', y='petal_width', col='species',height=10, order=1, data=iris)
g.set_axis_labels("Petal Length(mm)", "Petal Width(mm)" ) | _____no_output_____ | Apache-2.0 | Day 4/Seaborn In Action.ipynb | Abuton/10-Academy-5days-Challenge |
Percentiles | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
vals = np.random.normal(0, 0.25, 10000)
plt.hist(vals, 50)
plt.show()
np.percentile(vals, 50)
np.percentile(vals, 99)
np.percentile(vals, 10) | _____no_output_____ | MIT | mlcourse/Percentiles.ipynb | chauhanr/DLTFpT |
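`np.percentile` interpolates linearly between data points by default, which is easy to verify on a small known array:

```python
import numpy as np

data = np.arange(1, 11)              # 1..10
print(np.percentile(data, 50))       # 5.5 -- the median of an even-length array
print(np.percentile(data, 25))       # 3.25, from linear interpolation between 3 and 4
```

For the 25th percentile, the target position is (n-1) * 0.25 = 2.25, so the result is data[2] + 0.25 * (data[3] - data[2]) = 3.25.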
Watch Me Code 2: String Formatting | name = "Mike"
age = 45
salary = 15.75
# string formatting
print("Hello there %s. How are you? " % (name))
# formatting redux
print("%s makes %f per hour." % (name, salary))
# let's use spacing
print("%s makes %.2f per hour." % (name, salary))
# right alignment
print("-" * 10) # print 10 dashes
print("%10d" %(age))
# left alignment
print("-" * 10)
print("%-10d" % (age)) | ----------
45
| MIT | content/lessons/02/Watch-Me-Code/WMC2-String-Formatting.ipynb | MahopacHS/spring-2020-oubinam0717 |
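A note in passing: the `%`-style formats above have direct f-string equivalents in Python 3.6+ (a sketch, not part of the original lesson):

```python
name, age, salary = "Mike", 45, 15.75

print(f"Hello there {name}. How are you?")
print(f"{name} makes {salary:.2f} per hour.")  # same as "%.2f"
print(f"{age:>10}")   # right-aligned in a field of width 10, like "%10d"
print(f"{age:<10}")   # left-aligned, like "%-10d"
```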
**[Image Tools](2_Image_Tools.ipynb)** Selective Fourier Transform, part of **pyTEMlib**, a **pycroscopy** library. Notebook by Gerd Duscher, Materials Science & Engineering, Joint Institute of Advanced Materials, The University of Tennessee, Knoxville. An introduction to Fourier filtering of images. Install pyTEMlib: If you have not done so in the [Introduction Notebook](_.ipynb), please test and install [pyTEMlib](https://github.com/gduscher/pyTEMlib) and other important packages with the code cell below. | import sys
from pkg_resources import get_distribution, DistributionNotFound
def test_package(package_name):
"""Test if package exists and returns version or -1"""
try:
version = (get_distribution(package_name).version)
except (DistributionNotFound, ImportError) as err:
version = '-1'
return version
# Colab setup ------------------
if 'google.colab' in sys.modules:
!pip install git+https://github.com/pycroscopy/pyTEMlib/ -q
# pyTEMlib setup ------------------
else:
if test_package('sidpy') < '0.0.7':
print('installing sidpy')
!{sys.executable} -m pip install --upgrade sidpy -q
if test_package('pyNSID') < '0.0.3':
print('installing pyNSID')
!{sys.executable} -m pip install --upgrade pyNSID -q
if test_package('pyTEMlib') < '0.2022.10.1':
print('installing pyTEMlib')
!{sys.executable} -m pip install --upgrade pyTEMlib -q
# ------------------------------
print('done') | _____no_output_____ | MIT | notebooks/Imaging/Selective_Fourier_Transform.ipynb | gduscher/pyTEMlib |
Loading of necessary libraries. Please note that we only need to load the pyTEMlib library, which is based on sidpy Datasets. | %pylab notebook
from matplotlib.widgets import RectangleSelector
sys.path.insert(0,'../../')
import pyTEMlib
import pyTEMlib.file_tools as ft
import pyTEMlib.image_tools as it
print('pyTEMlib version: ', pyTEMlib.__version__)
note_book_version = '2021.10.25'
note_book_name='pyTEMib/notebooks/Imaging/Adaptive_Fourier_Filter' | Populating the interactive namespace from numpy and matplotlib
pyTEMlib version: 0.2021.10.2
| MIT | notebooks/Imaging/Selective_Fourier_Transform.ipynb | gduscher/pyTEMlib |
Open File. These datasets are stored in the pyNSID data format (extension: hf5) automatically. All results can be stored in that file. First we select the file. | file_widget = ft.FileWidget() | _____no_output_____ | MIT | notebooks/Imaging/Selective_Fourier_Transform.ipynb | gduscher/pyTEMlib |
Now, we open and plot them. Select an area with the mouse; a rectangle will appear! | try:
dataset.h5_dataset.file.close()
except:
pass
dataset= ft.open_file(file_widget.file_name)
print(file_widget.file_name)
if dataset.data_type.name != 'IMAGE':
print('We really would need an image here')
dataset.plot()
selector = RectangleSelector(dataset.view.axis, None,interactive=True , drawtype='box')
def get_selection(dataset, extents):
if (np.array(extents) <2).all():
return dataset
xmin, xmax, ymin, ymax = selector.extents/(dataset.x[1]-dataset.x[0])
return dataset.like_data(dataset[int(xmin):int(xmax), int(ymin):int(ymax)])
selection = it.get_selection(dataset, selector.extents)
selection.plot() | _____no_output_____ | MIT | notebooks/Imaging/Selective_Fourier_Transform.ipynb | gduscher/pyTEMlib |
Power Spectrum of Image | power_spectrum = it.power_spectrum(selection, smoothing=1)
power_spectrum.view_metadata()
print('source: ', power_spectrum.source)
power_spectrum.plot()
| fft :
smoothing : 1
minimum_intensity : 0.04261075359154022
maximum_intensity : 1.5140884612372518
source: /Measurement_000/Channel_000/2_HAADF/2_HAADF_new
| MIT | notebooks/Imaging/Selective_Fourier_Transform.ipynb | gduscher/pyTEMlib |
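The `it.power_spectrum` call above is pyTEMlib's; conceptually, a power spectrum is the (log-)magnitude-squared of the centered 2D FFT. A rough NumPy sketch, purely for illustration and not pyTEMlib's actual implementation:

```python
import numpy as np

def power_spectrum_sketch(image, eps=1e-12):
    # Log of |FFT|^2 with the zero-frequency component shifted to the center
    fft = np.fft.fftshift(np.fft.fft2(image))
    return np.log(np.abs(fft) ** 2 + eps)

# A horizontal cosine with 8 periods across 64 pixels puts its spectral peaks
# 8 pixels from the center along the x (column) axis
x = np.arange(64)
img = np.tile(np.cos(2 * np.pi * 8 * x / 64), (64, 1))
ps = power_spectrum_sketch(img)
row, col = np.unravel_index(np.argmax(ps), ps.shape)
print(row, abs(col - 32))  # 32 8
```

Periodic lattice fringes in a TEM image show up as such symmetric spot pairs, which is what the spot detection in the next section looks for.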
Spot Detection in Fourier Transform | # ------Input----------
spot_threshold=0.1
# ---------------------
spots = it.diffractogram_spots(power_spectrum, spot_threshold=spot_threshold)
spots = spots[np.linalg.norm(spots[:,:2],axis=1)<8,:]
spots = spots[np.linalg.norm(spots[:,:2],axis=1)>0.5,:]
power_spectrum.plot()
plt.gca().scatter(spots[:,0],spots[:,1], color='red', alpha=0.4);
#print(spots[:,:2])
#print(np.round(np.linalg.norm(spots[:,:2], axis=1),2))
#print(np.round(np.degrees(np.arctan2(spots[:,0], spots[:,1])+np.pi)%180,2))
angles=np.arctan2(spots[:,0], spots[:,1])
radius= np.linalg.norm(spots[:,:2], axis=1)
args = angles>0
radii = radius[angles>0]
angles = angles[angles>0]
print(radii, np.degrees(angles))
#print(np.degrees(angles[1]-angles[0]), np.degrees(angles[2]-angles[0]))
#print(1/radii)
new_angles = np.round(np.degrees(angles+np.pi-angles[0]+0.0000001)%180,2)
print(new_angles)
print(np.degrees(angles[1]-angles[0]), np.degrees(angles[2]-angles[0]))
angles=np.arctan2(spots[:,0], spots[:,1])
radius= np.linalg.norm(spots[:,:2], axis=1)
args = angles>0
radii = radius[angles>0]
angles = angles[angles>0]
print(radii, np.degrees(angles))
# clockwise from up
angles =(-np.degrees(np.arctan2(spots[:,0], spots[:,1]))+180) % 360
spots = spots[np.argsort(angles)]
angles =(-np.degrees(np.arctan2(spots[:,0], spots[:,1]))+180) % 360
plane_distances = 1/np.linalg.norm(spots[:,:2],axis=1)
rolled_angles= np.roll(angles,1) %360
rolled_angles[0] -= 360
relative_angles = angles - rolled_angles
print(np.round(plane_distances,3))
print(np.round(relative_angles,1))
import pyTEMlib.kinematic_scattering as ks
#Initialize the dictionary of the input
tags_simulation = {}
### Define Crystal
tags_simulation = ft.read_poscar('./POSCAR.mp-2418_PdSe2')
### Define experimental parameters:
tags_simulation['acceleration_voltage_V'] = 200.0 *1000.0 #V
tags_simulation['new_figure'] = False
tags_simulation['plot FOV'] = 30
tags_simulation['convergence_angle_mrad'] = 0
tags_simulation['zone_hkl'] = np.array([0,0,1]) # incident neares zone axis: defines Laue Zones!!!!
tags_simulation['mistilt'] = np.array([0,0,0]) # mistilt in degrees
tags_simulation['Sg_max'] = .2 # 1/nm maximum allowed excitation error ; This parameter is related to the thickness
tags_simulation['hkl_max'] = 6 # Highest evaluated Miller indices
######################################
# Diffraction Simulation of Crystal #
######################################
import itertools
hkl_list = [list([0, 0, 0])]
spot_dict = {}
for hkl in itertools.product(range(6), repeat=3):
if list(hkl) not in hkl_list:
#print(hkl, hkl_list)
tags_simulation['zone_hkl'] = hkl
ks.kinematic_scattering(tags_simulation, verbose = False)
if list(tags_simulation['nearest_zone_axes']['0']['hkl']) not in hkl_list:
print('- ', tags_simulation['nearest_zone_axes']['0']['hkl'])
spots = tags_simulation['allowed']['g'][np.linalg.norm(tags_simulation['allowed']['g'][:,:2], axis=1)<4.7,:2]
angles=np.arctan2(spots[:,0], spots[:,1])
radius= np.linalg.norm(spots[:,:2], axis=1)
args = angles>0
radii = radius[angles>0]
angles = angles[angles>0]
spot_dict[hkl] = {"radii": radii, "angles": angles}
print(radii, np.degrees(angles%np.pi))
hkl_list.append(list(hkl))
spot_dict
for hkl, refl in spot_dict.items():
if len(refl['radii'])>4:
print(hkl, 1/refl['radii']) | (0, 1, 0) [0.42925305 0.21462653 0.34491887 0.34491887 0.24014125 0.2897209
0.24014125]
(0, 1, 1) [0.29838947 0.29838947 0.22268168 0.2897209 0.22268168]
(0, 1, 2) [0.24443554 0.22521644 0.37365047 0.37365047 0.22521644 0.24924007
0.2897209 0.24924007]
(0, 2, 5) [0.37434128 0.37434128 0.24944479 0.2897209 0.24944479]
(1, 0, 0) [0.24438842 0.28090735 0.2972711 0.21462653 0.42925305 0.24438842
0.28090735]
(1, 0, 1) [0.2972711 0.22527197 0.29856219 0.3452512 0.29856219 0.22527197]
(1, 0, 2) [0.2972711 0.25283763 0.37382609 0.37382609 0.25283763 0.22284852
0.24037262 0.22284852]
(1, 1, 0) [0.21462653 0.42925305 0.23560622 0.29839823 0.37370861 0.37370861
0.29839823 0.23560622]
(1, 1, 2) [0.24442305 0.25277107 0.29851821 0.24934585 0.2403143 ]
(1, 2, 0) [0.21462653 0.42925305 0.22269858 0.24927592 0.26049888 0.24927592
0.22269858]
(2, 0, 5) [0.2972711 0.25289088 0.37399827 0.37399827 0.25289088]
(2, 1, 0) [0.21462653 0.42925305 0.22521415 0.25281914 0.25281914 0.22521415]
| MIT | notebooks/Imaging/Selective_Fourier_Transform.ipynb | gduscher/pyTEMlib |
Log the result | # results_channel = ft.log_results(dataset.h5_dataset.parent.parent, filtered_dataset)
| _____no_output_____ | MIT | notebooks/Imaging/Selective_Fourier_Transform.ipynb | gduscher/pyTEMlib |
A tree-like plot of the file | ft.h5_tree(dataset.h5_dataset.file) | _____no_output_____ | MIT | notebooks/Imaging/Selective_Fourier_Transform.ipynb | gduscher/pyTEMlib |
Close File. Let's close the file but keep the filename. | dataset.h5_dataset.file.close() | _____no_output_____ | MIT | notebooks/Imaging/Selective_Fourier_Transform.ipynb | gduscher/pyTEMlib |
Stage 1: Correlation for individual enhancers | import pandas as pd
import numpy as np
import time, re, datetime
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from scipy.stats import zscore
import random
from multiprocessing import Pool,cpu_count
num_processors = cpu_count()
print('Starting analysis; %d processors; %s' % (num_processors, datetime.datetime.today()))
t00 =time.time()
# np.random.seed(0)
import sys
sys.path.insert(0, '/cndd/fangming/CEMBA/snmcseq_dev')
from __init__jupyterlab import *
import snmcseq_utils
today=datetime.datetime.today().strftime('%d-%m-%Y')
use_kmers = False
corr_type = 'Pearson' # corr_type = 'Spearman'
features_use = 'mCG+ATAC'
analysis_prefix = 'eran_model_{}'.format(features_use)
output_fig = '/cndd2/fangming/projects/scf_enhancers/results/figures/{}_{{}}_{}.pdf'.format(analysis_prefix, today)
output = '/cndd2/fangming/projects/scf_enhancers/results/{}_{{}}_{}'.format(analysis_prefix, today)
# fn_load_prefix = 'RegressData/Regress_data_6143genes_19cells_'
# fn_load_prefix = 'RegressData/Regress_data_6174genes_20cells_'
fn_load_prefix = 'RegressData/Regress_data_9811genes_24cells_'
# Load datasets
save_vars = ['genes2enhu', 'rnau', 'df_mlevelu', 'df_atacu', 'genes']
# save_vars = ['rnau','genes']
for var in save_vars:
fn = fn_load_prefix+var+'.pkl'
cmd = '%s=pd.read_pickle("%s")' % (var, fn)
exec(cmd)
print('Loaded %s from %s' % (var, fn))
if use_kmers:
with np.load(fn_load_prefix+'kmer_countsu.npz', allow_pickle=True) as x:
kmer_countsu=x['kmer_countsu']
kmer_countsu = kmer_countsu/kmer_countsu.shape[1]/100
# Testing:
kmer_countsu = kmer_countsu[:,:2]
print('Kmers shape: ', kmer_countsu.shape)
Nk=kmer_countsu.shape[1]
print('Loaded kmers')
else:
Nk=0
# Cell type names
df_cellnames = pd.read_csv(
'/cndd/Public_Datasets/CEMBA/BICCN_minibrain_data/data_freeze/supp_info/clusters_final/cluster_annotation_scf_round2.tsv',
sep='\t', index_col='cluster')
genes2enhu = genes2enhu.iloc[[i in genes.index for i in genes2enhu['ensid']],:]
genes2enhu.shape, genes2enhu.index.unique().shape
celltypes = df_mlevelu.columns
assert np.all(celltypes == df_atacu.columns)
if (features_use=='mCG'):
x = df_mlevelu.loc[genes2enhu['enh_pos'],:].to_numpy()
elif (features_use=='ATAC'):
x = df_atacu.loc[genes2enhu['enh_pos'],:].to_numpy()
elif (features_use=='mCG_ATAC'):
x1 = df_mlevelu.loc[genes2enhu['enh_pos'],:].to_numpy()
x2 = df_atacu.loc[genes2enhu['enh_pos'],:].to_numpy()
x = f_mcg(x1) * f_atac(x2)
elif (features_use=='mCG+ATAC'):
x1 = df_mlevelu.loc[genes2enhu['enh_pos'],:].to_numpy()
x2 = df_atacu.loc[genes2enhu['enh_pos'],:].to_numpy()
else:
x = []
y = rnau.loc[genes2enhu['ensid'],:].to_numpy()
print(
rnau.shape, # rna by celltype
df_mlevelu.shape, # enh by cell type
df_atacu.shape, # enh by cell type
genes.shape, # gene annotation
genes2enhu.shape, # gene-enh pair
x1.shape, # enh_mcg by cell type (mcg_enh for each enh-gene pair) how ?
x2.shape, # enh_atac by cell type (mcg_enh for each enh-gene pair) how ?
y.shape, # rna by cell type (rna for each enh-gene pair)
)
def my_cc(x,y,ensid,doshuff=False,jshuff=0,corr_type='Pearson',use_abs=True, doshuffgene=False,verbose=False):
"""Calculate corr for each row of x and y
x, y: enh_mcg/gene_rna (pair) vs celltype
ensid: matched gene ensid for each row
    x and y contain no NaNs, but constant rows of x or y produce NaN after z-scoring
"""
t0=time.time()
seed = int(time.time()*1e7 + jshuff) % 100
np.random.seed(seed)
ngenes, ncells = y.shape
print('Computing correlations for %d gene-enhancer pairs; jshuff=%d; ' % (ngenes, jshuff))
if doshuff:
y = y[:,np.random.permutation(ncells)] # permute cells
if doshuffgene:
y = y[np.random.permutation(ngenes),:] # permute genes (pairs)
if (corr_type=='Spearman'):
y = np.argsort(y,axis=1)
x = np.argsort(x,axis=1)
xz = zscore(x, axis=1, nan_policy='propagate', ddof=0)
yz = zscore(y, axis=1, nan_policy='propagate', ddof=0)
xy_cc = np.nan_to_num(np.nanmean(xz*yz, axis=1)) # turn np.nan into zero
xy_cc_df = pd.DataFrame(data=xy_cc, columns=['cc'])
xy_cc_df['enh_num'] = np.arange(ngenes)
xy_cc_df['ensid'] = ensid.values
xy_cc_df['cc_abs'] = np.abs(xy_cc_df['cc'])
if use_abs: # max abs_corr for each gene
xy_cc_df = xy_cc_df.sort_values(['ensid','cc_abs'],
ascending=[True,False]).drop_duplicates(['ensid'])
else: # max corr for each gene
xy_cc_df = xy_cc_df.sort_values(['ensid','cc'],
ascending=[True,False]).drop_duplicates(['ensid'])
best_cc = xy_cc_df['cc'] # corr (not abs)
best_enh = xy_cc_df['enh_num'] # enh
best_ensid = xy_cc_df['ensid'] # gene
if verbose:
print('t=%3.3f' % (time.time()-t0))
return best_cc,best_enh,best_ensid,xy_cc
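The core of `my_cc` is the identity that the Pearson correlation of two rows equals the mean of their elementwise z-score products, which lets all rows be correlated in one vectorized pass. A quick numpy sanity check of that identity on toy data (not the notebook's matrices):

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.randn(5, 30)   # 5 enhancer rows, 30 cell types
y = rng.randn(5, 30)   # matched expression rows

def rowwise_pearson(a, b):
    # z-score each row (population std, ddof=0), then average the products
    az = (a - a.mean(axis=1, keepdims=True)) / a.std(axis=1, keepdims=True)
    bz = (b - b.mean(axis=1, keepdims=True)) / b.std(axis=1, keepdims=True)
    return (az * bz).mean(axis=1)

cc_fast = rowwise_pearson(x, y)
cc_ref = np.array([np.corrcoef(xi, yi)[0, 1] for xi, yi in zip(x, y)])
assert np.allclose(cc_fast, cc_ref)
```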
def my_cc_shuffgene(x, y, ensid, rnau,
doshuff=False, jshuff=0,
corr_type='Pearson', use_abs=True,
doshuffgene=False,
):
"""
"""
seed = int(time.time()*1e7 + jshuff) % 100
rnau_shuff = rnau.copy()
rnau_shuff.index = rnau.index.values[
np.random.RandomState(seed=seed).permutation(len(rnau))
]
y_shuff = rnau_shuff.loc[ensid,:].to_numpy()
return my_cc(x, y_shuff, ensid,
doshuff, jshuff,
corr_type, use_abs,
doshuffgene,
)
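The gene-shuffle above works by permuting the RNA DataFrame's index labels and then re-indexing with `.loc`, so each enhancer-gene pair gets the expression profile of a random gene while the pair structure is preserved. A toy sketch of the pattern (names are illustrative):

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(1)
rnau_demo = pd.DataFrame(rng.randn(4, 3),
                         index=['g1', 'g2', 'g3', 'g4'],       # gene ensids
                         columns=['cellA', 'cellB', 'cellC'])
ensid_demo = pd.Series(['g1', 'g1', 'g3'])  # one entry per enh-gene pair

# relabel which gene each expression row belongs to
shuff = rnau_demo.copy()
shuff.index = rnau_demo.index.values[rng.permutation(len(rnau_demo))]
y_shuff = shuff.loc[ensid_demo, :].to_numpy()

assert y_shuff.shape == (3, 3)  # still one row per pair
```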
def corr_pipe(x, y, genes2enhu, rnau, corr_type,):
"""
"""
# observed
best_cc, best_enh, best_ensid, all_cc = my_cc(x,y,genes2enhu['ensid'],False,0,corr_type,True,False)
print(best_cc.shape, best_enh.shape, best_ensid.shape, all_cc.shape)
# shuffled
nshuff = np.min((num_processors*16,128))
np.random.seed(0)
with Pool(processes = num_processors) as p:
best_cc_shuff_list = p.starmap(my_cc_shuffgene,
[(x,y,genes2enhu['ensid'],rnau,False,jshuff,corr_type,True,False) for jshuff in range(nshuff)])
# significance
    alpha = 0.01
best_cc_shuff = np.hstack([b[0].values[:,np.newaxis] for b in best_cc_shuff_list]) # gene (best corr) by num_shuff
best_cc_shuff_max = np.percentile(np.abs(best_cc_shuff), 100*(1-alpha), axis=1) # get 99% (robust max) across shuffles
best_cc_shuff_mean = np.abs(best_cc_shuff).mean(axis=1) # get mean across shuffles for each gene
sig = np.abs(best_cc).squeeze()>best_cc_shuff_max # corr greater than 99% of the shuffled
    fdr = (alpha*len(sig))/np.sum(sig) # estimated FDR: expected false positives (alpha*ngenes) / observed positives
print(np.sum(sig), len(sig), alpha, fdr)
return best_cc, best_enh, best_ensid, all_cc, best_cc_shuff, best_cc_shuff_max, best_cc_shuff_mean, sig, fdr
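The significance call in `corr_pipe` compares each gene's observed best |corr| to the `100*(1-alpha)` percentile of its own shuffled values, and the FDR estimate is the expected number of false hits over the observed hits. A numpy-only toy version of that logic (made-up null and observed distributions):

```python
import numpy as np

rng = np.random.RandomState(2)
alpha = 0.01
ngenes, nshuff = 1000, 128

best_cc_obs = rng.rand(ngenes)                 # observed |corr| per gene
best_cc_null = rng.rand(ngenes, nshuff) * 0.8  # shuffled |corr| per gene

# per-gene "robust max" of the null: the 99th percentile across shuffles
thresh = np.percentile(best_cc_null, 100 * (1 - alpha), axis=1)
sig = best_cc_obs > thresh
fdr = alpha * len(sig) / max(sig.sum(), 1)     # expected false hits / observed hits
```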
import warnings
warnings.filterwarnings('ignore')
(best_cc_1, best_enh_1, best_ensid_1, all_cc_1,
best_cc_shuff_1, best_cc_shuff_max_1, best_cc_shuff_mean_1, sig_1, fdr_1,
) = corr_pipe(x1, y, genes2enhu, rnau, corr_type,)
(best_cc_2, best_enh_2, best_ensid_2, all_cc_2,
best_cc_shuff_2, best_cc_shuff_max_2, best_cc_shuff_mean_2, sig_2, fdr_2,
) = corr_pipe(x2, y, genes2enhu, rnau, corr_type,)
def plot_dists(best_cc, best_enh, best_ensid, all_cc,
best_cc_shuff, best_cc_shuff_max, best_cc_shuff_mean,
sig, fdr, alpha,
feature):
ngenes = best_cc.shape[0]
fig, axs = plt.subplots(3,1,figsize=(5,10))
ax = axs[0]
ax.scatter(best_cc, best_cc_shuff_mean,
s=2,c=sig,
cmap=ListedColormap(["gray",'red']),
rasterized=True,
)
ax.plot([-1,0,1],[1,0,1],'k--')
ax.set_xlabel('Max %s correlation' % corr_type)
ax.set_ylabel('Max %s correlation\n(Mean of shuffles)' % corr_type)
ax.set_title('%s\n%d/%d=%3.1f%%\nsig. genes (p<%3.2g, FDR=%3.1f%%)' % (
feature,
sig.sum(),ngenes,
100*sig.sum()/ngenes,
alpha, fdr*100), )
ax = axs[1]
bins = np.arange(-2,2,0.1)
hist_config = {
'histtype': 'bar',
'edgecolor': 'none',
'alpha': 0.5,
'density': False,
}
_vec = best_cc.squeeze()/best_cc_shuff_mean.squeeze()
cond_pos_sig = np.logical_and(sig, best_cc > 0)
cond_neg_sig = np.logical_and(sig, best_cc <= 0)
ax.hist(_vec, bins=bins,
color='gray', label='All genes',
**hist_config,
)
ax.hist(_vec[sig], bins=bins,
color='red', label='Significant',
**hist_config,
)
ax.axvline(-1, linestyle='--', color='k')
ax.axvline(1, linestyle='--', color='k')
ax.set_xlabel(corr_type+' correlation/(Mean abs. corr. of shuffles)')
ax.set_ylabel('Number of genes')
num_sig, num_pos_sig, num_neg_sig = (sig.sum(),
cond_pos_sig.sum(),
cond_neg_sig.sum(),
)
ax.set_title("Num. pos={} ({:.1f}%)\nNum. neg={} ({:.1f}%)".format(
num_pos_sig, num_pos_sig/num_sig*100,
num_neg_sig, num_neg_sig/num_sig*100,
))
ax.legend(bbox_to_anchor=(1,1))
ax = axs[2]
    bins = np.arange(0,1,0.02)
hist_config = {
'histtype': 'bar',
'edgecolor': 'none',
'alpha': 0.5,
'density': True,
}
ax.hist(np.abs(all_cc), bins=bins,
color='C1',
label='All enh-gene pairs',
**hist_config,
)
ax.hist(best_cc_shuff.reshape(-1,1), bins=bins,
color='gray',
label='Best (all shuffles)',
**hist_config,
)
ax.hist(best_cc_shuff_max, bins=bins,
color='C2',
label='Best (max. shuffle)',
**hist_config,
)
ax.hist(best_cc_shuff_mean, bins=bins,
color='C0',
label='Best (mean shuffle)',
**hist_config,
)
ax.hist(best_cc.squeeze(), bins=bins,
color='C3',
label='Best (data)',
**hist_config,
)
ax.legend(bbox_to_anchor=(1,1))
ax.set_xlabel(corr_type+' correlation')
ax.set_ylabel('Density of genes')
fig.subplots_adjust(hspace=0.9)
fn_plot = output.format("genes_corr_"+feature+'_'+corr_type)
snmcseq_utils.savefig(fig, fn_plot)
print('Saved %s' % fn_plot)
alpha = 0.01
feature = 'mCG'
plot_dists(best_cc_1, best_enh_1, best_ensid_1, all_cc_1,
best_cc_shuff_1, best_cc_shuff_max_1, best_cc_shuff_mean_1, sig_1, fdr_1, alpha,
feature)
feature = 'ATAC'
plot_dists(best_cc_2, best_enh_2, best_ensid_2, all_cc_2,
best_cc_shuff_2, best_cc_shuff_max_2, best_cc_shuff_mean_2, sig_2, fdr_2, alpha,
feature)
# np.savez(
# output.format('GenesCorr_%s_%s.npz' % (features_use, today)),
# best_cc=best_cc,best_enh=best_enh,best_ensid=best_ensid,
# sig=sig, best_cc_shuff=best_cc_shuff)
# print('Saved data; t=%3.3f; %s' % (time.time()-t00, datetime.datetime.today()))
# check randomness
# plt.scatter(np.arange(best_cc_shuff.shape[1]), best_cc_shuff[0])
# plt.scatter(np.arange(best_cc_shuff.shape[1]), best_cc_shuff[1])
# plt.scatter(np.arange(best_cc_shuff.shape[1]), best_cc_shuff[2])
# plt.title("num_processors = {}".format(num_processors))
# plt.xlabel('n shuffle')
# plt.ylabel('corr for a gene-enh pair')
genes2enhu.head()
genes2enhu['cc'] = all_cc
best_ensid_inv = pd.Series(best_ensid.index.values, index=best_ensid)
i = best_ensid_inv.loc[genes2enhu.index].values
genes2enhu['best_cc'] = genes2enhu.iloc[i,:]['cc']
i = pd.Series(np.arange(best_ensid.shape[0]), index=best_ensid)
genes2enhu['best_cc_shuff_max'] = best_cc_shuff_max[i.loc[genes2enhu.index]]
isig = sig[best_ensid_inv.loc[genes2enhu.index]].values
genes2enhu['sig'] = (genes2enhu['cc'].abs() >= genes2enhu['best_cc_shuff_max'].abs())
genes2enhu['nonsig'] = (genes2enhu['cc'].abs() < genes2enhu['best_cc_shuff_max'].abs())
# How many significant (vs. non-significant) enhancers does each gene have,
# relative to its per-gene shuffle threshold best_cc_shuff_max?
nsig = genes2enhu.groupby(level=0).sum()[['sig','nonsig']]
nsig['best_cc'] = best_cc.values
plt.semilogy(nsig['best_cc'], nsig['sig'], '.',
markersize=5);
# top significant genes
nsig['gene_name'] = genes2enhu.loc[nsig.index,:]['gene_name'].drop_duplicates()
nsig.sort_values('sig').iloc[-10:,:]
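The per-gene counts above rely on the fact that summing boolean columns inside a groupby counts the `True` values. A minimal check of that behavior:

```python
import pandas as pd

demo = pd.DataFrame({'sig':    [True, True, False, False],
                     'nonsig': [False, False, True, True]},
                    index=['g1', 'g1', 'g1', 'g2'])
# groupby on the index level; booleans sum to integer counts
counts = demo.groupby(level=0).sum()[['sig', 'nonsig']]
assert counts.loc['g1', 'sig'] == 2
assert counts.loc['g2', 'nonsig'] == 1
```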
def my_cdfplot(ax, x, label=''):
ax.semilogx(np.sort(np.abs(x)), np.linspace(0,1,len(x)),
label='%s (%d)\nd=%3.1f±%3.1f kb' %
(label, len(x), x.mean()/1000, x.std()/1000/np.sqrt(len(x))))
return
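`my_cdfplot` uses the standard empirical-CDF construction: sort the values and pair them with evenly spaced quantiles from 0 to 1. Stripped of the plotting, the computation is:

```python
import numpy as np

d = np.array([5_000., 20_000., 1_000., 80_000.])  # e.g. enhancer-TSS distances
xs = np.sort(np.abs(d))          # sorted values on the x-axis
cdf = np.linspace(0, 1, len(d))  # cumulative fraction on the y-axis
assert xs[0] == 1_000. and xs[-1] == 80_000.
assert cdf[0] == 0.0 and cdf[-1] == 1.0
```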
fig, axs = plt.subplots(1, 2, figsize=(8,5))
ax = axs[0]
hist_config = {
'histtype': 'bar',
'edgecolor': 'none',
'alpha': 1,
'density': False,
}
ax.hist(nsig['sig'].values, bins=np.arange(100),
**hist_config
)
ax.set_xlabel('Number of significant enhancers')
ax.set_ylabel('Number of genes')
ax.set_yscale('log')
ax = axs[1]
my_cdfplot(ax, nsig['sig'].values,)
ax.set_xlabel('Number of significant enhancers')
ax.set_ylabel('Cumulative fraction of genes')
fig.tight_layout()
snmcseq_utils.savefig(fig,
output_fig.format('GenesCorr_NumSigEnh_%s_%s_%s.pdf' % (features_use, today, corr_type))
)
| _____no_output_____ | MIT | eran/.ipynb_checkpoints/Regress_June25_mc-checkpoint.ipynb | FangmingXie/scf_enhancer_paper |
Stage 1.5: Compare ATAC and mC

print(all_cc_1.shape, best_cc_1.shape, sig_1.shape, best_cc_1[sig_1].shape, best_ensid_1.shape, best_enh_1.shape)
# best_cc_1[sig_1]
all_cc_2[best_enh_1[sig_1].index.values].shape
fig, ax = plt.subplots()
ax.scatter(all_cc_1, all_cc_2,
color='lightgray', s=1, alpha=0.3,
rasterized=True,)
ax.scatter(
all_cc_1[best_enh_1.index.values], # same as best_cc_1[sig_1]
all_cc_2[best_enh_1.index.values],
color='lightblue', label='best mCG',
s=1, alpha=0.5,
rasterized=True,)
ax.scatter(
all_cc_1[best_enh_2.index.values], # same as best_cc_1[sig_1]
all_cc_2[best_enh_2.index.values],
color='wheat', label='best ATAC',
s=1, alpha=0.5,
rasterized=True,)
ax.scatter(
all_cc_1[best_enh_1[sig_1].index.values], # same as best_cc_1[sig_1]
all_cc_2[best_enh_1[sig_1].index.values],
color='C0', label='sig. mCG',
s=1, alpha=0.5,
rasterized=True,)
ax.scatter(
all_cc_1[best_enh_2[sig_2].index.values], # same as best_cc_1[sig_1]
all_cc_2[best_enh_2[sig_2].index.values],
color='C1', label='sig. ATAC',
s=1, alpha=0.5,
rasterized=True,)
ax.legend(bbox_to_anchor=(1,1))
ax.set_xlabel('mCG-RNA {} corr'.format(corr_type))
ax.set_ylabel('ATAC-RNA {} corr'.format(corr_type))
snmcseq_utils.savefig(fig,
output_fig.format('mCG_ATAC_agreement_%s_%s_%s.pdf' % (features_use, today, corr_type))
)
plt.show()
fig, ax = plt.subplots()
ax.scatter(
all_cc_1[best_enh_1.index.values], # same as best_cc_1[sig_1]
all_cc_2[best_enh_1.index.values],
color='lightgray', label='best',
s=1, alpha=0.3,
rasterized=True,)
ax.scatter(
all_cc_1[best_enh_2.index.values], # same as best_cc_1[sig_1]
all_cc_2[best_enh_2.index.values],
color='lightgray', label='best',
s=1, alpha=0.5,
rasterized=True,)
ax.scatter(
all_cc_1[best_enh_1[sig_1].index.values], # same as best_cc_1[sig_1]
all_cc_2[best_enh_1[sig_1].index.values],
color='C0', label='sig. mCG',
s=1, alpha=0.5,
rasterized=True,)
ax.scatter(
all_cc_1[best_enh_2[sig_2].index.values], # same as best_cc_1[sig_1]
all_cc_2[best_enh_2[sig_2].index.values],
color='C1', label='sig. ATAC',
s=1, alpha=0.5,
rasterized=True,)
ax.legend(bbox_to_anchor=(1,1))
ax.set_xlabel('mCG-RNA {} corr'.format(corr_type))
ax.set_ylabel('ATAC-RNA {} corr'.format(corr_type))
snmcseq_utils.savefig(fig,
output_fig.format('mCG_ATAC_agreement2_%s_%s_%s.pdf' % (features_use, today, corr_type))
)
plt.show()
from matplotlib_venn import venn2
fig, ax = plt.subplots()
venn2([set(best_ensid_1[sig_1].values), set(best_ensid_2[sig_2].values)],
set_labels=('sig. mCG', 'sig. ATAC'),
set_colors=('C0', 'C1'),
ax=ax
)
ax.set_title('Overlap of sig. genes')
snmcseq_utils.savefig(fig,
output_fig.format('mCG_ATAC_agreement3_%s_%s_%s.pdf' % (features_use, today, corr_type))
)
plt.show()
fig, ax = plt.subplots()
venn2([set(sig_1[sig_1].index.values), set(sig_2[sig_2].index.values)],
set_labels=('sig. mCG', 'sig. ATAC'),
set_colors=('C0', 'C1'),
ax=ax
)
ax.set_title('Overlap of sig. gene-enhancer pairs')
snmcseq_utils.savefig(fig,
output_fig.format('mCG_ATAC_agreement4_%s_%s_%s.pdf' % (features_use, today, corr_type))
)
plt.show()
Stage 2: Regression modeling across sig. genes

# Are there any duplicate enhancers?
_x = genes2enhu.iloc[(best_enh_1[sig_1].values),:]
nenh_sig = len(_x)
nenh_sig_unique = len(_x['enh_pos'].unique())
nenh_sig_genes_unique = len(_x['ensid'].unique())
print(nenh_sig, nenh_sig_unique, nenh_sig_genes_unique)
# best_enh_1[sig_1]
# get sig. mC enhancer-gene pairs (1 for each gene) only
mc_u = df_mlevelu.loc[genes2enhu['enh_pos'],:].to_numpy()[best_enh_1[sig_1],:]
atac_u = df_atacu.loc[genes2enhu['enh_pos'],:].to_numpy()[best_enh_1[sig_1],:]
rna_u = rnau.loc[genes2enhu['ensid'],:].to_numpy()[best_enh_1[sig_1],:].copy()
genes2enhu_u = genes2enhu.iloc[best_enh_1[sig_1],:].copy()
genes2enhu_u = genes2enhu_u.drop('ensid',axis=1).reset_index()
# genes2enhu.iloc[(best_enh_1[sig_1].values),:]['enh_pos'].shape
# cc_mc_rna = np.array([np.corrcoef(x1,y1)[0,1] for (x1,y1) in zip(mc_u,rna_u)])
# cc_atac_rna = np.array([np.corrcoef(x1,y1)[0,1] for (x1,y1) in zip(atac_u,rna_u)])
# genes2enhu_u.loc[:,'cc_mc_rna'] = cc_mc_rna
# genes2enhu_u.loc[:,'cc_atac_rna'] = cc_atac_rna
# genes2enhu_u.sort_values('cc_mc_rna')
# # genes2enhu_u['cc_atac_rna'] = cc_atac_rna
# fig, ax = plt.subplots()
# sig_pos = (genes2enhu_u['cc_mc_rna']<0) & (genes2enhu_u['cc_atac_rna']>0)
# sig_neg = (genes2enhu_u['cc_mc_rna']>0) & (genes2enhu_u['cc_atac_rna']<-0)
# ax.plot(cc_mc_rna, cc_atac_rna, '.', color='gray', label='%d significnat pairs' % np.sum(sig))
# ax.plot(cc_mc_rna[sig_pos], cc_atac_rna[sig_pos], 'r.', label='%d corr pairs' % np.sum(sig_pos))
# ax.plot(cc_mc_rna[sig_neg], cc_atac_rna[sig_neg], 'g.', label='%d anti-corr pairs' % np.sum(sig_neg))
# ax.set_xlabel('Correlation mCG vs. RNA')
# ax.set_ylabel('Correlation ATAC vs. RNA')
# ax.legend(bbox_to_anchor=(1,1))
# print('We found %d significant enhancer-gene links, covering %d unique enhancers and %d unique genes' %
# (nenh_sig, nenh_sig_unique, nenh_sig_genes_unique))
# print('%d of these have the expected correlation (negative for mCG, positive for ATAC)' %
# (np.sum(sig_pos)))
# print('%d of these have the opposite correlation (positive for mCG, negative for ATAC)' %
# (np.sum(sig_neg)))
# snmcseq_utils.savefig(fig, output_fig.format(
# 'EnhancerRegression_SigEnhancers_scatter_mCG_ATAC_corr_%dGenes_%dCelltypes_%s' %
# (genes2enhu.ensid.unique().shape[0], len(celltypes), today)
# ))
# fig, ax = plt.subplots(figsize=(7,4))
# my_cdfplot(ax, genes2enhu['dtss'], label='All pairs')
# my_cdfplot(ax, genes2enhu_u['dtss'], label='Best pair for each gene')
# my_cdfplot(ax, genes2enhu_u['dtss'][sig_pos], label='Positive corr')
# my_cdfplot(ax, genes2enhu_u['dtss'][sig_neg], label='Negative corr')
# ax.legend(bbox_to_anchor=(1, 0.8))
# ax.set_xlim([1e3,3e5])
# ax.set_xlabel('Distance of enhancer from TSS')
# ax.set_ylabel('Cumulative fraction')
# ax.set_yticks(ticks=[0,.25,.5,.75,1]);
# snmcseq_utils.savefig(fig, output_fig.format(
# 'EnhancerRegression_SigEnhancers_dTSS_cdf_%dGenes_%dCelltypes_%s' %
# (genes2enhu.ensid.unique().shape[0], len(celltypes), today)
# ))
# Ordinary linear regression with CV
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_validate
from sklearn.metrics import r2_score, make_scorer
from sklearn.preprocessing import PolynomialFeatures
X = np.concatenate((mc_u,atac_u),axis=1).copy()
y = np.log10(rna_u+1).copy()
X = zscore(X, axis=0)
y = zscore(y, axis=0)
y = y - np.mean(y,axis=1,keepdims=True)
# X = X[sig_pos,:]
# y = y[sig_pos,:]
mdl = LinearRegression(fit_intercept=True, normalize=True)
ngenes,ncells = y.shape
print('%d genes, %d celltypes' % (ngenes,ncells))
intxn_order = 3
my_r2 = make_scorer(r2_score)
res_cv = {}
cv = 5
for i,yi in enumerate(y.T):
# Regression using only mCG and ATAC from the same cell type
Xu = X[:,[i,i+ncells]]
Xu = np.concatenate((X[:,[i,i+ncells]],
# np.mean(X[:,:ncells],axis=1,keepdims=True),
# np.mean(X[:,ncells:],axis=1,keepdims=True),
),axis=1)
# Xu = PolynomialFeatures(degree=3, include_bias=False).fit_transform(Xu)
res_cvi = cross_validate(mdl,Xu,yi,cv=cv,
scoring=my_r2,
return_train_score=True,
verbose=0)
if i==0:
print('Simple model: %d parameters' % Xu.shape[1])
dof_simple=Xu.shape[1]
for m in res_cvi:
if (m in res_cv):
res_cv[m] = np.vstack((res_cv[m], res_cvi[m]))
else:
res_cv[m]=res_cvi[m]
# Regression using mCG and ATAC from the same cell type, as well as the mean across all cell types
# res_cvi = cross_validate(mdl,X,yi,cv=cv,
# scoring=my_r2,
# return_train_score=True,
# verbose=0)
Xu = np.concatenate((X[:,[i,i+ncells]],
np.mean(X[:,:ncells],axis=1,keepdims=True),
np.mean(X[:,ncells:],axis=1,keepdims=True),
),axis=1)
Xu = PolynomialFeatures(degree=intxn_order, include_bias=False).fit_transform(Xu)
res_cvi = cross_validate(mdl, Xu, yi,
cv=cv,
scoring=my_r2,
return_train_score=True,
verbose=0)
if i==0:
print('Complex model: %d parameters' % Xu.shape[1])
dof_complex=Xu.shape[1]
for m1 in res_cvi:
m = m1+'_all'
if (m in res_cv):
res_cv[m] = np.vstack((res_cv[m], res_cvi[m1]))
else:
res_cv[m]=res_cvi[m1]
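The parameter counts printed above follow from combinatorics: `PolynomialFeatures(include_bias=False)` emits one column per monomial of total degree 1..d in the input features, i.e. C(n+d, d) - 1 columns. For the complex model (4 base features, 3rd-order interactions) that is C(7,3) - 1 = 34. A pure-Python check of the count:

```python
from math import comb

def n_poly_features(n_features, degree, include_bias=False):
    # number of monomials of total degree <= degree in n_features variables
    total = comb(n_features + degree, degree)
    return total if include_bias else total - 1

assert n_poly_features(2, 1) == 2    # simple model: mCG + ATAC
assert n_poly_features(4, 3) == 34   # complex model: 4 features, 3rd-order interactions
```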
cellnames = df_cellnames.loc[celltypes]['annot']
# Show the OLS results
def myplot(ax, x, label='', fmt=''):
x[x<0] = 0
# xu = np.sqrt(x)
xu = x
ax.errorbar(cellnames, xu.mean(axis=1), xu.std(axis=1)/np.sqrt(cv),
label=label, fmt=fmt)
return
fig, ax = plt.subplots(figsize=(8,6))
myplot(ax, res_cv['train_score'],
fmt='rs-', label='Train simple model:\nRNA~mCG+ATAC\n(%d params)' % dof_simple)
myplot(ax, res_cv['test_score'],
fmt='ro-', label='Test')
myplot(ax, res_cv['train_score_all'],
fmt='bs--', label='Train complex model:\nRNA~mCG+ATAC+mean(mCG)+mean(ATAC)+%dth order intxn\n(%d params)' %
(intxn_order, dof_complex))
myplot(ax, res_cv['test_score_all'],
fmt='bo--', label='Test')
ax.legend(bbox_to_anchor=(1, 1))
ax.set_xlabel('Cell type')
ax.set_ylabel('Score (R^2)')
ax.xaxis.set_tick_params(rotation=90)
ax.grid(axis='y')
ax.set_title('%d genes, separate model for each of %d celltypes' % y.shape)
snmcseq_utils.savefig(fig, output_fig.format(
'EnhancerRegression_SigEnhancers_OLS_%dGenes_%dCelltypes_%s' %
(genes2enhu.ensid.unique().shape[0], len(celltypes), today)
))
# # Multi-task LASSO regression with CV
# from sklearn.linear_model import MultiTaskLassoCV
# t0=time.time()
# mdl = MultiTaskLassoCV(fit_intercept=True, normalize=True, cv=cv,
# selection='random',
# random_state=0)
# X = np.concatenate((mc_u,atac_u),axis=1).copy()
# y = np.log10(rna_u+1).copy()
# X = zscore(X[sig_pos,:], axis=0)
# y = zscore(np.log10(y[sig_pos,:]+1), axis=0)
# reg = mdl.fit(X,y)
# print('Done fitting LASSO, t=%3.3f s' % (time.time()-t0))
# plt.errorbar(reg.alphas_, reg.mse_path_.mean(axis=1), reg.mse_path_.std(axis=1))
# plt.vlines(reg.alpha_, plt.ylim()[0], plt.ylim()[1], 'k')
# plt.xscale('log')
# Single task LASSO with CV, interaction terms
from sklearn.linear_model import LassoCV
Xu_all = []
for i,yi in enumerate(y.T):
Xu = np.concatenate((X[:,[i,i+ncells]],
np.mean(X[:,:ncells],axis=1,keepdims=True),
np.mean(X[:,ncells:],axis=1,keepdims=True),
),axis=1)
Xu_all.append(Xu.T)
Xu_all = np.dstack(Xu_all).reshape(4,-1).T
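The `dstack`/`reshape` step above interleaves the per-cell-type design matrices so that row `g*ncells + c` of `Xu_all` lines up with `y[g, c]` after `y.ravel()` (both use the same row-major ordering). A shape check on toy data, where each cell type's features are filled with its own index so the alignment is visible:

```python
import numpy as np

ngenes, ncells, nfeat = 3, 2, 4
# one (nfeat, ngenes) block per cell type, filled with the cell-type index
per_cell = [np.full((nfeat, ngenes), c, dtype=float) for c in range(ncells)]
stacked = np.dstack(per_cell).reshape(nfeat, -1).T  # (ngenes*ncells, nfeat)

assert stacked.shape == (ngenes * ncells, nfeat)
# row g*ncells + c came from cell type c, matching y.ravel() ordering
assert stacked[0 * ncells + 1, 0] == 1.0
assert stacked[2 * ncells + 0, 0] == 0.0
```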
Xu_fit = PolynomialFeatures(degree=intxn_order, include_bias=False)
Xu_all = Xu_fit.fit_transform(Xu_all)
feature_names = Xu_fit.get_feature_names(input_features=['mC','A','mCm','Am'])
print(Xu_all.shape, y.shape)
yu = y.ravel()
print(Xu_all.shape, yu.shape)
t0=time.time()
mdl = LassoCV(fit_intercept=True, normalize=True, cv=cv,
selection='random',
random_state=0,
n_jobs=8)
reg = mdl.fit(Xu_all,yu)
print('Done fitting LASSO, t=%3.3f s' % (time.time()-t0))
plt.errorbar(reg.alphas_, reg.mse_path_.mean(axis=1), reg.mse_path_.std(axis=1))
plt.vlines(reg.alpha_, plt.ylim()[0], plt.ylim()[1], 'k')
plt.xscale('log')
plt.xlabel('LASSO Regularization (lambda)')
plt.ylabel('MSE')
yhat = reg.predict(Xu_all).reshape(y.shape)
cc = [np.corrcoef(y1,y1hat)[0,1] for (y1,y1hat) in zip(y.T,yhat.T)]
fig, ax = plt.subplots(figsize=(10,5))
ax.plot(cellnames, np.power(cc, 2), 'o-', color='C1', label='LASSO fit, single model for all cell types')
# myplot(ax, res_cv['test_score_all'], label='Test (RNA~mCG+ATAC+mean(mCG)+mean(ATAC)+Intxn)', fmt='o--')
myplot(ax, res_cv['train_score'],
fmt='rs-', label='Train simple model:\nRNA~mCG+ATAC\n(%d params)' % dof_simple)
myplot(ax, res_cv['test_score'],
fmt='ro-', label='Test')
myplot(ax, res_cv['train_score_all'],
fmt='bs--', label='Train complex model:\nRNA~mCG+ATAC+mean(mCG)+mean(ATAC)+%dth order intxn\n(%d params)' %
(intxn_order, dof_complex))
myplot(ax, res_cv['test_score_all'],
fmt='bo--', label='Test')
ax.legend(bbox_to_anchor=(1, 0.8))
ax.set_xlabel('Cell type')
ax.set_ylabel('Score (R^2)')
ax.xaxis.set_tick_params(rotation=90)
ax.grid(axis='y')
ax.set_ylim([0,0.8])
ax.set_title('Model for %d genes across %d celltypes' % y.shape)
snmcseq_utils.savefig(fig, output_fig.format(
'EnhancerRegression_SigEnhancers_CompareLASSO_%dGenes_%dCelltypes_%s' %
(genes2enhu.ensid.unique().shape[0], len(celltypes), today)
))
fig, ax = plt.subplots(figsize=(10,5))
show = np.abs(reg.coef_)>0.01
show = np.argsort(np.abs(reg.coef_))[-30:][::-1]
ax.bar(np.array(feature_names)[show], reg.coef_[show])
ax.xaxis.set_tick_params(rotation=90)
ax.set_ylabel('Regression coefficient')
ax.grid(axis='y')
snmcseq_utils.savefig(fig, output_fig.format(
'EnhancerRegression_SigEnhancers_LASSO_CorrCoef_%dGenes_%dCelltypes_%s' %
(genes2enhu.ensid.unique().shape[0], len(celltypes), today)
))
Apply the nonlinear model to all enhancers

mc_u = df_mlevelu.loc[genes2enhu['enh_pos'],:].to_numpy()
atac_u = df_atacu.loc[genes2enhu['enh_pos'],:].to_numpy()
genes2enhu_u = genes2enhu.copy()
genes2enhu_u = genes2enhu_u.drop('ensid',axis=1).reset_index()
rna_u = rnau.loc[genes2enhu['ensid'],:].to_numpy()
rna_u.shape, mc_u.shape, atac_u.shape
X = np.concatenate((mc_u,atac_u),axis=1).copy()
y = np.log10(rna_u+1).copy()
X = zscore(X, axis=0)
y = zscore(y, axis=0)
y = y - np.mean(y,axis=1,keepdims=True)
X.shape, y.shape
Xu_all = []
for i,yi in enumerate(y.T):
Xu = np.concatenate((X[:,[i,i+ncells]],
np.mean(X[:,:ncells],axis=1,keepdims=True),
np.mean(X[:,ncells:],axis=1,keepdims=True),
),axis=1)
Xu_all.append(Xu.T)
Xu_all = np.dstack(Xu_all).reshape(4,-1).T
Xu_fit = PolynomialFeatures(degree=intxn_order, include_bias=False).fit(Xu_all)
feature_names = Xu_fit.get_feature_names(input_features=['mC','A','mCm','Am'])
Xu_all = Xu_fit.transform(Xu_all)  # reuse the fitted transformer instead of refitting
Xu_all.shape, y.shape
yhat = reg.predict(Xu_all).reshape(y.shape)
x = df_mlevelu.loc[genes2enhu['enh_pos'],:].to_numpy()
best_cc,best_enh,best_ensid,all_cc = my_cc(-x,y,genes2enhu['ensid'],False,0,corr_type)
best_cc2,best_enh2,best_ensid2,all_cc2 = my_cc(yhat,y,genes2enhu['ensid'],False,0,corr_type)
(~np.isfinite(best_cc2)).sum()
plt.figure(figsize=(10,10))
plt.plot(np.abs(all_cc[best_enh]), np.abs(all_cc2[best_enh]), '.', markersize=1, rasterized=True)
plt.plot(np.abs(all_cc[best_enh2]), np.abs(all_cc2[best_enh2]), '.', markersize=1, rasterized=True)
plt.plot([0,1],[0,1],'k')
np.abs(best_cc2)/(np.abs(best_cc)+1e-6)
best_cc2.shape, best_cc.shape
plt.hist(np.abs(best_cc2).values/np.abs(best_cc).values, bins=np.arange(0.7,1.3,0.01));
print((np.abs(best_cc2).values/np.abs(best_cc).values).mean())
# For each gene, find all enhancers with significant cc
df = pd.DataFrame(data=all_cc, columns=['cc'], index=genes2enhu[['ensid','enh_pos']])
df['ensid'] = genes2enhu['ensid'].values
df['enh_pos'] = genes2enhu['enh_pos'].values
df['cc2'] = all_cc2
df['good_pairs'] = df['cc']>0.6
df['good_pairs2'] = df['cc2']>0.6
npairs_df=df.groupby('ensid')[['good_pairs','good_pairs2']].sum()
plt.loglog(npairs_df['good_pairs']+1,npairs_df['good_pairs2']+1,'.')
plt.plot([1,1e3],[1,1e3],'k')
np.mean((npairs_df['good_pairs2']+1)/(npairs_df['good_pairs']+1))
Average over all the enhancers linked to a single gene

def myz(x):
z = zscore(x, axis=1, nan_policy='omit', ddof=0)
return z
def make_df(z):
z_df = pd.DataFrame(data=z, columns=df_mlevelu.columns, index=rnau.index)
return z_df
multiEnh = {}
multiEnh['rna'] = myz(rnau.values);
multiEnh['rna_hat_1Enh'] = myz(yhat[best_enh2,:])
multiEnh['rna_hat_AllEnh'] = myz(yhat[best_enh2,:]) # placeholder shape; overwritten column-by-column below
multiEnh['rna_hat_AllSigEnh'] = np.zeros(yhat[best_enh2,:].shape)+np.nan;
t0=time.time()
for i,c in enumerate(celltypes):
df = pd.DataFrame(data=yhat[:,i], columns=['yhat'])
df['ensid'] = genes2enhu.loc[:,'ensid'].values
multiEnh['rna_hat_AllEnh'][:,i] = df.groupby('ensid')['yhat'].mean()
df = df.loc[genes2enhu.sig.values,:]
multiEnh['rna_hat_AllSigEnh'][sig,i] = df.groupby('ensid')['yhat'].mean()
multiEnh['rna'] = make_df(multiEnh['rna']);
multiEnh['rna_hat_1Enh'] = make_df(multiEnh['rna_hat_1Enh']);
multiEnh['rna_hat_AllEnh'] = make_df(multiEnh['rna_hat_AllEnh'])
multiEnh['rna_hat_AllSigEnh'] = make_df(multiEnh['rna_hat_AllSigEnh'])
print(time.time()-t0)
cc_1Enh = np.diag(np.corrcoef(multiEnh['rna'].values, multiEnh['rna_hat_1Enh'].values, rowvar=False)[:ncells,ncells:])
cc_AllEnh = np.diag(np.corrcoef(multiEnh['rna'].values, multiEnh['rna_hat_AllEnh'].values, rowvar=False)[:ncells,ncells:])
cc_AllSigEnh = np.diag(np.corrcoef(multiEnh['rna'].values[sig,:], multiEnh['rna_hat_AllSigEnh'].values[sig,:], rowvar=False)[:ncells,ncells:])
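The `np.diag(np.corrcoef(...)[:ncells, ncells:])` idiom above works because `np.corrcoef(A, B, rowvar=False)` on two (genes × celltypes) matrices returns a 2n×2n matrix whose `[:n, n:]` block holds the A-vs-B cross-correlations; its diagonal is the per-cell-type (per-column) correlation. A toy check:

```python
import numpy as np

rng = np.random.RandomState(3)
n = 4
A = rng.randn(50, n)
B = A + 0.1 * rng.randn(50, n)  # noisy copy -> high per-column correlation

# diagonal of the cross-correlation block = column-wise correlations
cc = np.diag(np.corrcoef(A, B, rowvar=False)[:n, n:])
cc_ref = np.array([np.corrcoef(A[:, j], B[:, j])[0, 1] for j in range(n)])
assert np.allclose(cc, cc_ref)
```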
plt.plot(cellnames, cc_1Enh, label='1 enhancer')
plt.plot(cellnames, cc_AllEnh, label='All enhancers')
plt.plot(cellnames, cc_AllSigEnh, label='Significant enhancers')
plt.legend(bbox_to_anchor=(1,1))
plt.xticks(rotation=90);
plt.ylabel('Correlation across genes')
def cc_gene(x,y):
c = np.nan_to_num([np.corrcoef(x1,y1)[0,1] for (x1,y1) in zip(x,y)])
return c
cc_1Enh = cc_gene(multiEnh['rna'].values, multiEnh['rna_hat_1Enh'].values)
cc_AllEnh = cc_gene(multiEnh['rna'].values, multiEnh['rna_hat_AllEnh'].values)
cc_AllSigEnh = cc_gene(multiEnh['rna'].values, multiEnh['rna_hat_AllSigEnh'].values)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(2,2,1)
ax.plot(np.abs(cc_1Enh), np.abs(cc_AllEnh), '.', markersize=1, rasterized=True)
ax.set_xlabel('Corr with 1 best enhancer')
ax.set_ylabel('Corr with avg. prediction\nbased on all enhancers')
ax = fig.add_subplot(2,2,2)
ax.plot(np.abs(cc_1Enh), np.abs(cc_AllSigEnh), '.', markersize=1, rasterized=True)
ax.set_xlabel('Corr with 1 best enhancer')
ax.set_ylabel('Corr with avg. prediction\nbased on sig. enhancers')
ax = fig.add_subplot(2,1,2)
bins = np.arange(-1,1,1/100)
hist_config = {
'histtype': 'bar',
'edgecolor': 'none',
'alpha': 0.5,
'density': False,
}
ax.hist(np.abs(cc_AllEnh)-np.abs(cc_1Enh), bins=bins,
label='All enhancers-Best enhancer',
**hist_config,
)
ax.hist(np.abs(cc_AllSigEnh)-np.abs(cc_1Enh), bins=bins,
label='Sig enhancers-Best enhancer',
**hist_config,
)
ax.legend(bbox_to_anchor=(1,1))
ax.set_xlabel('Difference in correlation')
ax.set_ylabel('Number of genes')
fig.subplots_adjust(wspace=0.5, hspace=0.3)
snmcseq_utils.savefig(fig, output_fig.format(
'EnhancerRegression_Correlation_1Enh_vs_AllEnh_%dGenes_%dCelltypes_%s' %
(genes2enhu.ensid.unique().shape[0], len(celltypes), today))
)
| _____no_output_____ | MIT | eran/.ipynb_checkpoints/Regress_June25_mc-checkpoint.ipynb | FangmingXie/scf_enhancer_paper |
Nonlinear model fitting

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
X = np.concatenate((mc_u,atac_u),axis=1).copy()
y = np.log10(rna_u+1).copy()
ngenes,ncells = y.shape
X.shape, y.shape
# Define a class for the NN architecture
Ngenes, Nc = y.shape
Nx = X.shape[1]
N1 = 128
N2 = 32
N3 = 0
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(Nx, N1);
self.fc2 = nn.Linear(N1, N2);
# self.fc3 = nn.Linear(N2, N3);
self.fc4 = nn.Linear(N2, Nc);
def forward(self, x):
x = F.relu(self.fc1(x)) # Out: N x N1
x = F.relu(self.fc2(x)) # Out: N x N2
# x = F.relu(self.fc3(x)) # Out: N x N3
x = self.fc4(x) # Out: N x C
return x
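The `Net` above is a plain MLP: two ReLU hidden layers followed by a linear readout over cell types. A numpy sketch of the same forward pass is handy for checking shapes without a GPU (the dimensions here are illustrative, not the notebook's):

```python
import numpy as np

rng = np.random.RandomState(4)
Nx_d, N1_d, N2_d, Nc_d = 10, 128, 32, 6  # illustrative layer sizes
W1, b1 = rng.randn(Nx_d, N1_d) * 0.01, np.zeros(N1_d)
W2, b2 = rng.randn(N1_d, N2_d) * 0.01, np.zeros(N2_d)
W3, b3 = rng.randn(N2_d, Nc_d) * 0.01, np.zeros(Nc_d)

def forward(x):
    h1 = np.maximum(x @ W1 + b1, 0)   # fc1 + ReLU
    h2 = np.maximum(h1 @ W2 + b2, 0)  # fc2 + ReLU
    return h2 @ W3 + b3               # fc4: linear readout

out = forward(rng.randn(16, Nx_d))    # batch of 16 genes
assert out.shape == (16, Nc_d)
```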
# Initialize
def myinit():
global net, optimizer, criterion, scheduler, loss_test, loss_train, test, train, ensids
net = Net()
net.to(device)
# # Initialize the kmer weights to 0 and turn off learning
# net.fc1_kmers.requires_grad_(False)
# net.fc1_kmers.weight.fill_(0)
# net.fc1_kmers.bias.fill_(0)
criterion = nn.MSELoss(reduction='sum')
optimizer = optim.Adam(net.parameters(), lr=lr)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.25)
loss_test=np.array([])
loss_train = np.array([])
# Train/Test split
test = (np.random.rand(Ngenes,1)<0.2)
train = [not i for i in test]
test = np.random.permutation(np.nonzero(test)[0]).squeeze()
train = np.random.permutation(np.nonzero(train)[0]).squeeze()
ensids = rnau.index.values
return
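The split in `myinit` draws a ~20% Bernoulli test mask over genes and shuffles both index lists. A standalone version with checks that the two sets are disjoint and together cover every gene:

```python
import numpy as np

rng = np.random.RandomState(5)
Ngenes_d = 100
test_mask = rng.rand(Ngenes_d) < 0.2                  # ~20% of genes held out
test_idx = rng.permutation(np.nonzero(test_mask)[0])  # shuffled test indices
train_idx = rng.permutation(np.nonzero(~test_mask)[0])

assert len(test_idx) + len(train_idx) == Ngenes_d
assert set(test_idx).isdisjoint(train_idx)
```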
def train_epoch(epoch):
nsamp = 0
running_loss = 0.0
running_time = 0.0
net.train()
t0train = time.time()
for i in range(0, len(train), batch_size):
tstart = time.time()
indices = train[i:i+batch_size]
# Input should be of size: (batch, channels, samples)
batch_X = torch.tensor(X[indices,:],dtype=torch.float)
batch_y = torch.tensor(y[indices,:],dtype=torch.float)
# Send training data to CUDA
        if device.type != "cpu":
batch_X = batch_X.to(device)
batch_y = batch_y.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(batch_X)
loss = criterion(outputs, batch_y)
loss.backward()
optimizer.step()
running_loss += loss.item()
running_time += time.time()-tstart
nsamp += len(indices)
if (time.time()-t0train)>5:
print('Epoch %d, i=%d/%d, LR=%3.5g, loss=%3.8f, t=%3.3f, %3.5f s/sample' % (epoch, i, len(train),
optimizer.state_dict()['param_groups'][0]['lr'],
running_loss/nsamp, running_time, running_time/nsamp))
t0train=time.time()
return running_loss/nsamp
def test_epoch(epoch):
net.eval()
running_loss_test = 0.0
nsamp = 0
yyhat = {'y':[], 'yhat':[]}
for i in range(0, len(test), batch_size):
indices = test[i:i+batch_size]
# Input should be of size: (batch, channels, samples)
batch_X = torch.tensor(X[indices,:],dtype=torch.float)
batch_y = torch.tensor(y[indices,:],dtype=torch.float)
# Send training data to CUDA
        if device.type != "cpu":
batch_X = batch_X.to(device)
batch_y = batch_y.to(device)
# forward + backward + optimize
outputs = net(batch_X)
loss = criterion(outputs, batch_y)
running_loss_test += loss.item()
nsamp += len(indices)
yyhat['yhat'].append(outputs.detach().cpu().numpy())
yyhat['y'].append(batch_y.detach().cpu().numpy())
return running_loss_test/nsamp
lr = 0.0002
myinit()
train.shape, test.shape
import glob
from IPython import display
def test_net(indices):
net.eval()
yyhat = {'y':[], 'yhat':[]}
    for i in range(0, len(indices), batch_size):
        batch_idx = indices[i:i+batch_size]
        # Input should be of size: (batch, channels, samples)
        batch_X = torch.tensor(X[batch_idx,:],dtype=torch.float)
        batch_y = torch.tensor(y[batch_idx,:],dtype=torch.float)
        # Send data to the GPU if available
        if device.type != "cpu":
            batch_X = batch_X.to(device)
outputs = net(batch_X)
yyhat['yhat'].append(outputs.detach().cpu().numpy())
yyhat['y'].append(batch_y.numpy())
yyhat['yhat'] = np.concatenate(yyhat['yhat'],axis=0)
yyhat['y'] = np.concatenate(yyhat['y'],axis=0)
cc = np.zeros((Nc,1))
for i in range(yyhat['y'].shape[1]):
cc[i,0] = np.corrcoef(yyhat['y'][:,i], yyhat['yhat'][:,i])[0,1]
return yyhat, cc
def make_plot1(save=False):
plt.figure(figsize=(15,4))
plt.clf()
plt.subplot(1,3,1)
plt.semilogx(loss_train[2:],'o-',label='Train')
plt.plot(loss_test[2:],'o-',label='Test')
plt.legend()
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.title(fn_save)
plt.subplot(1,3,2)
plt.plot(yyhat_test['y'].T, yyhat_test['yhat'].T,'.');
plt.plot([0,3],[0,3],'k--')
plt.xlabel('True RNA expression')
plt.ylabel('Estimated RNA expression')
plt.subplot(1,3,3)
plt.plot(np.arange(Nc), cc)
    plt.ylabel('Correlation (r)')
plt.xlabel('Cell type')
plt.legend(['Train','Test'])
if save:
fn_plot = output_fig.format(fn_save.replace('.torch','')+'_corrcoef').replace('pdf', 'png')
plt.savefig(fn_plot)
print('Saved plot: '+fn_plot)
plt.tight_layout()
plt.show();
def make_plot2(save=False):
plt.figure(figsize=(20,20))
for i in range(Nc):
plt.subplot(5,6,i+1)
plt.plot([0,2],[0,2],'k--')
plt.plot(yyhat_train['y'][:,i], yyhat_train['yhat'][:,i],'.');
plt.plot(yyhat_test['y'][:,i], yyhat_test['yhat'][:,i],'.');
# cc = np.corrcoef(yyhat['y'][:,i], yyhat['yhat'][:,i])[0,1]
plt.title('r=%3.3f train/%3.3f test' % (cc[i,0], cc[i,1]))
if save:
fn_plot = output_fig.format(fn_save.replace('.torch','')+'_scatter').replace('pdf', 'png')
plt.savefig(fn_plot)
print('Saved plot: '+fn_plot)
plt.tight_layout()
plt.show();
num_epochs1 = 1000
fn_id = len(glob.glob('./RegressEnh*.pt'))+1 # Generate a unique ID for this run
fn_save = 'RegressEnh%0.4d_%s_N_%d_%d_%d.%s.pt' % (fn_id, ('UseKmers' if use_kmers else 'NoKmers'), N1,N2,N3,today)
t0 = time.time()
batch_size = 16
for epoch in range(num_epochs1): # loop over the dataset multiple times
# while epoch<num_epochs1:
new_loss_train = train_epoch(epoch);
loss_train = np.append(loss_train, new_loss_train)
new_loss_test = test_epoch(epoch);
loss_test = np.append(loss_test,new_loss_test)
scheduler.step(new_loss_test)
print('**** Phase1 epoch %d, LR=%3.5g, loss_train=%3.8f, loss_test=%3.8f, time = %3.5f s/epoch' % (
len(loss_train),
optimizer.param_groups[0]['lr'],
loss_train[-1],
loss_test[-1],
(time.time()-t0))
)
if (time.time()-t0)>60 or (epoch==num_epochs1-1):
if (epoch>0):
cc = np.zeros((Nc,2))
yyhat_train, cc[:,[0]] = test_net(random.sample(train.tolist(), 500))
yyhat_test, cc[:,[1]] = test_net(random.sample(test.tolist(), 500))
display.clear_output(wait=True)
display.display(plt.gcf())
make_plot1(save=True)
# make_plot2(save=True)
display.display(plt.gcf())
t0=time.time()
torch.save({
'epoch': epoch,
'model_state_dict': net.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss_train': loss_train,
'loss_test': loss_test,
}, fn_save)
print('Saved data: %s' % fn_save)
output_fig
# test.max()
# plt.hist2d(df['log_rna'], mdl.predict(), bins=(50,50), cmap=plt.cm.Reds);
# plt.scatter(df['log_rna'], mdl.predict(),s=1)
plt.hist(np.log(rnau.loc[genes2enhu['ensid'][best_enh],:].iloc[:,3]+1), bins=100); | _____no_output_____ | MIT | eran/.ipynb_checkpoints/Regress_June25_mc-checkpoint.ipynb | FangmingXie/scf_enhancer_paper |
If you're opening this Notebook on colab, you will probably need to install 🤗 Transformers and 🤗 Datasets. Right now this requires the current master branch of both. Uncomment the following cell and run it. | #! pip install git+https://github.com/huggingface/transformers.git
#! pip install git+https://github.com/huggingface/datasets.git | _____no_output_____ | Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
If you're opening this notebook locally, make sure your environment has an install of the latest version of those libraries. To be able to share your model with the community, there are a few more steps to follow. First you have to store your authentication token from the Hugging Face website (sign up [here](https://huggingface.co/join) if you haven't already!), then uncomment the following cell and input your username and password (this only works on Colab; in a regular notebook, you need to do this in a terminal): | from huggingface_hub import notebook_login
notebook_login() | Login successful
Your token has been saved to /home/matt/.huggingface/token
| Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
Then you need to install Git-LFS and set up Git if you haven't already. Uncomment the following instructions and adapt them with your name and email: | # !apt install git-lfs
# !git config --global user.email "you@example.com"
# !git config --global user.name "Your Name" | _____no_output_____ | Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
Make sure your version of Transformers is at least 4.8.1 since the functionality was introduced in that version: | import transformers
print(transformers.__version__) | 4.12.0.dev0
| Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
Fine-tuning a model on a multiple choice task In this notebook, we will see how to fine-tune one of the [🤗 Transformers](https://github.com/huggingface/transformers) models on a multiple choice task, which is the task of selecting the most plausible option among a given set of choices. The dataset used here is [SWAG](https://www.aclweb.org/anthology/D18-1009/) but you can adapt the pre-processing to any other multiple choice dataset you like, or your own data. SWAG is a dataset about commonsense reasoning, where each example describes a situation then proposes four options that could go after it. This notebook is built to run with any model checkpoint from the [Model Hub](https://huggingface.co/models) as long as that model has a version with a multiple choice head. Depending on your model and the GPU you are using, you might need to adjust the batch size to avoid out-of-memory errors. Set those two parameters, then the rest of the notebook should run smoothly: | model_checkpoint = "bert-base-uncased"
batch_size = 16 | _____no_output_____ | Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
Loading the dataset We will use the [🤗 Datasets](https://github.com/huggingface/datasets) library to download the data. This can be easily done with the functions `load_dataset`. | from datasets import load_dataset, load_metric | _____no_output_____ | Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
`load_dataset` will cache the dataset to avoid downloading it again the next time you run this cell. | datasets = load_dataset("swag", "regular") | Reusing dataset swag (/home/matt/.cache/huggingface/datasets/swag/regular/0.0.0/9640de08cdba6a1469ed3834fcab4b8ad8e38caf5d1ba5e7436d8b1fd067ad4c)
| Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
The `dataset` object itself is a [`DatasetDict`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasetdict), which contains one key each for the training, validation and test sets. | datasets
To access an actual element, you need to select a split first, then give an index: | datasets["train"][0] | _____no_output_____ | Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
To get a sense of what the data looks like, the following function will show some examples picked randomly in the dataset. | from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_examples=10):
assert num_examples <= len(
dataset
), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset) - 1)
while pick in picks:
pick = random.randint(0, len(dataset) - 1)
picks.append(pick)
df = pd.DataFrame(dataset[picks])
for column, typ in dataset.features.items():
if isinstance(typ, ClassLabel):
df[column] = df[column].transform(lambda i: typ.names[i])
display(HTML(df.to_html()))
show_random_elements(datasets["train"]) | _____no_output_____ | Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
Each example in the dataset has a context composed of a first sentence (in the field `sent1`) and an introduction to the second sentence (in the field `sent2`). Then four possible endings are given (in the fields `ending0`, `ending1`, `ending2` and `ending3`) and the model must pick the right one (indicated in the field `label`). The following function lets us visualize a given example a bit better: | def show_one(example):
print(f"Context: {example['sent1']}")
print(f" A - {example['sent2']} {example['ending0']}")
print(f" B - {example['sent2']} {example['ending1']}")
print(f" C - {example['sent2']} {example['ending2']}")
print(f" D - {example['sent2']} {example['ending3']}")
print(f"\nGround truth: option {['A', 'B', 'C', 'D'][example['label']]}")
show_one(datasets["train"][0])
show_one(datasets["train"][15]) | Context: Now it's someone's turn to rain blades on his opponent.
A - Someone pats his shoulder and spins wildly.
B - Someone lunges forward through the window.
C - Someone falls to the ground.
D - Someone rolls up his fast run from the water and tosses in the sky.
Ground truth: option C
| Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
Preprocessing the data Before we can feed those texts to our model, we need to preprocess them. This is done by a 🤗 Transformers `Tokenizer`, which will (as the name indicates) tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary) and put them in the format the model expects, as well as generate the other inputs that the model requires. To do all of this, we instantiate our tokenizer with the `AutoTokenizer.from_pretrained` method, which will ensure:- we get a tokenizer that corresponds to the model architecture we want to use,- we download the vocabulary used when pretraining this specific checkpoint.That vocabulary will be cached, so it's not downloaded again the next time we run the cell. | from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) | _____no_output_____ | Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
You can directly call this tokenizer on one sentence or a pair of sentences: | tokenizer("Hello, this one sentence!", "And this sentence goes with it.") | _____no_output_____ | Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
Depending on the model you selected, you will see different keys in the dictionary returned by the cell above. They don't matter much for what we're doing here (just know they are required by the model we will instantiate later); you can learn more about them in [this tutorial](https://huggingface.co/transformers/preprocessing.html) if you're interested. To preprocess our dataset, we will thus need the names of the columns containing the sentences: `sent1` and `sent2` for the context, and the four `ending` columns for the choices. We can then write the function that will preprocess our samples. The tricky part is to put all the possible pairs of sentences in two big lists before passing them to the tokenizer, then un-flatten the result so that each example has four input ids, attention masks, etc. When calling the `tokenizer`, we use the argument `truncation=True`. This will ensure that an input longer than what the model can handle will be truncated to the maximum length accepted by the model. | ending_names = ["ending0", "ending1", "ending2", "ending3"]
def preprocess_function(examples):
# Repeat each first sentence four times to go with the four possibilities of second sentences.
first_sentences = [[context] * 4 for context in examples["sent1"]]
# Grab all second sentences possible for each context.
question_headers = examples["sent2"]
second_sentences = [
[f"{header} {examples[end][i]}" for end in ending_names]
for i, header in enumerate(question_headers)
]
# Flatten everything
first_sentences = sum(first_sentences, [])
second_sentences = sum(second_sentences, [])
# Tokenize
tokenized_examples = tokenizer(first_sentences, second_sentences, truncation=True)
# Un-flatten
return {
k: [v[i : i + 4] for i in range(0, len(v), 4)]
for k, v in tokenized_examples.items()
} | _____no_output_____ | Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
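The flatten/un-flatten trick above is easy to get wrong, so here is a minimal sketch of the same pattern on plain Python lists, with no tokenizer involved (the variable names are illustrative only):

```python
# Minimal sketch of the flatten/un-flatten pattern used in preprocess_function.
# Two examples, four candidate endings each.
examples = [["a0", "a1", "a2", "a3"], ["b0", "b1", "b2", "b3"]]

# Flatten: one big list of 8 sentences, as handed to the tokenizer.
flat = sum(examples, [])

# Un-flatten: regroup into chunks of 4, one chunk per original example.
unflat = [flat[i:i + 4] for i in range(0, len(flat), 4)]

print(unflat == examples)  # True
```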
This function works with one or several examples. In the case of several examples, the tokenizer will return a list of lists of lists for each key: a list of all examples (here 5), then a list of all choices (4) and a list of input IDs (length varying here since we did not apply any padding): | examples = datasets["train"][:5]
features = preprocess_function(examples)
print(
len(features["input_ids"]),
len(features["input_ids"][0]),
[len(x) for x in features["input_ids"][0]],
) | 5 4 [30, 25, 30, 28]
| Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
To check we didn't do anything wrong when grouping all possibilities then un-flattening, let's have a look at the decoded inputs for a given example: | idx = 3
[tokenizer.decode(features["input_ids"][idx][i]) for i in range(4)] | _____no_output_____ | Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
We can compare it to the ground truth: | show_one(datasets["train"][3]) | Context: A drum line passes by walking down the street playing their instruments.
A - Members of the procession are playing ping pong and celebrating one left each in quick.
B - Members of the procession wait slowly towards the cadets.
C - Members of the procession makes a square call and ends by jumping down into snowy streets where fans begin to take their positions.
D - Members of the procession play and go back and forth hitting the drums while the audience claps for them.
Ground truth: option D
| Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
This seems alright, so we can apply this function to all the examples in our dataset; we just use the `map` method of the `dataset` object we created earlier. This will apply the function to all the elements of all the splits in `dataset`, so our training, validation and testing data will be preprocessed in one single command. | encoded_datasets = datasets.map(preprocess_function, batched=True) | Loading cached processed dataset at /home/matt/.cache/huggingface/datasets/swag/regular/0.0.0/9640de08cdba6a1469ed3834fcab4b8ad8e38caf5d1ba5e7436d8b1fd067ad4c/cache-ce78ab0ead9f175e.arrow
Loading cached processed dataset at /home/matt/.cache/huggingface/datasets/swag/regular/0.0.0/9640de08cdba6a1469ed3834fcab4b8ad8e38caf5d1ba5e7436d8b1fd067ad4c/cache-0d9c087b50e2d5ce.arrow
Loading cached processed dataset at /home/matt/.cache/huggingface/datasets/swag/regular/0.0.0/9640de08cdba6a1469ed3834fcab4b8ad8e38caf5d1ba5e7436d8b1fd067ad4c/cache-741a913dcfe8f6c0.arrow
| Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
Even better, the results are automatically cached by the 🤗 Datasets library to avoid spending time on this step the next time you run your notebook. The 🤗 Datasets library is normally smart enough to detect when the function you pass to `map` has changed (and thus requires not using the cached data). For instance, it will properly detect if you change the task in the first cell and rerun the notebook. 🤗 Datasets warns you when it uses cached files; you can pass `load_from_cache_file=False` in the call to `map` to ignore the cached files and force the preprocessing to be applied again. Note that we passed `batched=True` to encode the texts by batches together. This is to leverage the full benefit of the fast tokenizer we loaded earlier, which will use multi-threading to treat the texts in a batch concurrently. Fine-tuning the model Now that our data is ready, we can download the pretrained model and fine-tune it. Since our task is multiple choice, we use the `AutoModelForMultipleChoice` class. Like with the tokenizer, the `from_pretrained` method will download and cache the model for us. | from transformers import TFAutoModelForMultipleChoice
model = TFAutoModelForMultipleChoice.from_pretrained(model_checkpoint) | All model checkpoint layers were used when initializing TFBertForMultipleChoice.
Some layers of TFBertForMultipleChoice were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
| Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
The warning is telling us that we are throwing away the weights of the pretraining head and randomly initializing a new head (here the `classifier` layer). This is absolutely normal in this case, because we are removing the head used to pretrain the model on a masked language modeling objective and replacing it with a new head for which we don't have pretrained weights, so the library warns us that we should fine-tune this model before using it for inference, which is exactly what we are going to do. Next, we set some names and hyperparameters for the model. The first two variables are used so we can push the model to the [Hub](https://huggingface.co/models) at the end of training. Remove both of them if you didn't follow the installation steps at the top of the notebook; otherwise, you can change the value of `push_to_hub_model_id` to something you prefer. | model_name = model_checkpoint.split("/")[-1]
push_to_hub_model_id = f"{model_name}-finetuned-swag"
learning_rate = 5e-5
batch_size = batch_size
num_train_epochs = 2
weight_decay = 0.01 | _____no_output_____ | Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
Next we need to tell our `Dataset` how to form batches from the pre-processed inputs. We haven't done any padding yet because we will pad each batch to the maximum length inside the batch (instead of doing so with the maximum length of the whole dataset). This will be the job of the *data collator*. A data collator takes a list of examples and converts them to a batch (by, in our case, applying padding). Since there is no data collator in the library that works on our specific problem, we will write one, adapted from the `DataCollatorWithPadding`: | from dataclasses import dataclass
from transformers.tokenization_utils_base import (
PreTrainedTokenizerBase,
PaddingStrategy,
)
from typing import Optional, Union
import tensorflow as tf
@dataclass
class DataCollatorForMultipleChoice:
"""
Data collator that will dynamically pad the inputs for multiple choice received.
"""
tokenizer: PreTrainedTokenizerBase
padding: Union[bool, str, PaddingStrategy] = True
max_length: Optional[int] = None
pad_to_multiple_of: Optional[int] = None
def __call__(self, features):
label_name = "label" if "label" in features[0].keys() else "labels"
labels = [feature.pop(label_name) for feature in features]
batch_size = len(features)
num_choices = len(features[0]["input_ids"])
flattened_features = [
[{k: v[i] for k, v in feature.items()} for i in range(num_choices)]
for feature in features
]
flattened_features = sum(flattened_features, [])
batch = self.tokenizer.pad(
flattened_features,
padding=self.padding,
max_length=self.max_length,
pad_to_multiple_of=self.pad_to_multiple_of,
return_tensors="tf",
)
# Un-flatten
batch = {
k: tf.reshape(v, (batch_size, num_choices, -1)) for k, v in batch.items()
}
# Add back labels
batch["labels"] = tf.convert_to_tensor(labels, dtype=tf.int64)
return batch | _____no_output_____ | Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
When called on a list of examples, it will flatten all the inputs/attention masks etc. into big lists that it will pass to the `tokenizer.pad` method. This will return a dictionary with big tensors (of shape `(batch_size * 4) x seq_length`) that we then un-flatten. To check this data collator works on a list of features, we just have to make sure to remove all features that are not inputs accepted by our model: | accepted_keys = ["input_ids", "attention_mask", "label"]
features = [
{k: v for k, v in encoded_datasets["train"][i].items() if k in accepted_keys}
for i in range(10)
]
batch = DataCollatorForMultipleChoice(tokenizer)(features)
encoded_datasets["train"].features["attention_mask"].feature.feature | _____no_output_____ | Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
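The dynamic padding the collator performs (padding each batch only to that batch's longest sequence, rather than a global maximum) can be sketched in plain Python; the `pad_batch` helper below is illustrative, not part of the library:

```python
def pad_batch(sequences, pad_id=0):
    """Pad a batch of token-id lists to the batch's own max length."""
    max_len = max(len(s) for s in sequences)
    return [s + [pad_id] * (max_len - len(s)) for s in sequences]

batch = pad_batch([[5, 6, 7], [8, 9], [10]])
print(batch)  # [[5, 6, 7], [8, 9, 0], [10, 0, 0]]
```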
Again, all those flatten/un-flatten steps are potential sources of errors, so let's make another sanity check on our inputs: | [tokenizer.decode(batch["input_ids"][8][i].numpy().tolist()) for i in range(4)]
show_one(datasets["train"][8]) | Context: Someone walks over to the radio.
A - Someone hands her another phone.
B - Someone takes the drink, then holds it.
C - Someone looks off then looks at someone.
D - Someone stares blearily down at the floor.
Ground truth: option D
| Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
All good! Now we can use this collator as a collation function for our dataset. The best way to do this is with the `to_tf_dataset()` method. This converts our dataset to a `tf.data.Dataset` that Keras can take as input. It also applies our collation function to each batch. | data_collator = DataCollatorForMultipleChoice(tokenizer)
train_set = encoded_datasets["train"].to_tf_dataset(
columns=["attention_mask", "input_ids", "labels"],
shuffle=True,
batch_size=batch_size,
collate_fn=data_collator,
)
validation_set = encoded_datasets["validation"].to_tf_dataset(
columns=["attention_mask", "input_ids", "labels"],
shuffle=False,
batch_size=batch_size,
collate_fn=data_collator,
) | _____no_output_____ | Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
Now we can create our model. First, we specify an optimizer. Using the `create_optimizer` function we can get a nice `AdamW` optimizer with weight decay and a learning rate decay schedule set up for free - but to compute that schedule, it needs to know how long training will take. | from transformers import create_optimizer
total_train_steps = (len(encoded_datasets["train"]) // batch_size) * num_train_epochs
optimizer, schedule = create_optimizer(
init_lr=learning_rate, num_warmup_steps=0, num_train_steps=total_train_steps
) | _____no_output_____ | Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
All Transformers models have a `loss` output head, so we can simply leave the loss argument to `compile()` blank to train on it. | import tensorflow as tf
model.compile(optimizer=optimizer) | No loss specified in compile() - the model's internal loss computation will be used as the loss. Don't panic - this is a common way to train TensorFlow models in Transformers! Please ensure your labels are passed as the 'labels' key of the input dict so that they are accessible to the model during the forward pass. To disable this behaviour, please pass a loss argument, or explicitly pass loss=None if you do not want your model to compute a loss.
| Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
Now we can train our model. We can also add a callback to sync up our model with the Hub - this allows us to resume training from other machines and even test the model's inference quality midway through training! Make sure to change `username` to your own if you use the callback. If you don't want to do this, simply remove the callbacks argument in the call to `fit()`. | from transformers.keras_callbacks import PushToHubCallback
username = "Rocketknight1"
callback = PushToHubCallback(
output_dir="./mc_model_save",
tokenizer=tokenizer,
hub_model_id=f"{username}/{push_to_hub_model_id}",
)
model.fit(
train_set,
validation_data=validation_set,
epochs=num_train_epochs,
callbacks=[callback],
) | huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
| Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
One downside of using the internal loss, however, is that we can't use Keras metrics with it. So let's compute accuracy after the fact, to see how our model is performing. First, we need to get our model's predicted answers on the validation set. | predictions = model.predict(validation_set)["logits"]
labels = encoded_datasets["validation"]["label"] | _____no_output_____ | Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
And now we can compute our accuracy with NumPy. | import numpy as np
preds = np.argmax(predictions, axis=1)
print({"accuracy": (preds == labels).astype(np.float32).mean().item()}) | {'accuracy': 0.7951614260673523}
| Apache-2.0 | examples/multiple_choice-tf.ipynb | HAMZA310/notebooks |
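To eyeball individual predictions, each argmax index can be mapped back to its answer letter. A small sketch with made-up logits (nothing below comes from the trained model):

```python
import numpy as np

# Made-up per-choice logits for two examples (shape: examples x 4 choices).
logits = np.array([[0.1, 2.3, -0.5, 0.7],
                   [1.5, 0.2, 3.1, -0.4]])
preds = np.argmax(logits, axis=1)
letters = [["A", "B", "C", "D"][p] for p in preds]
print(letters)  # ['B', 'C']
```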
How to recover a known planet in Kepler data This tutorial demonstrates the basic steps required to recover a transiting planet candidate in the Kepler data. We will show how you can recover the signal of [Kepler-10b](https://en.wikipedia.org/wiki/Kepler-10b), the first rocky planet that was discovered by Kepler! Kepler-10 is a Sun-like (G-type) star approximately 600 light years away in the constellation of Cygnus. In this tutorial, we will download the pixel data of Kepler-10, extract a lightcurve, and recover the planet. Kepler pixel data is distributed in "Target Pixel Files". You can read more about them in our tutorial [here](http://lightkurve.keplerscience.org/tutorials/target-pixel-files.html). The `lightkurve` package provides a `KeplerTargetPixelFile` class, which enables you to load and interact with data in this format. The class can take a path (local or URL), or you can load data straight from the [MAST archive](https://archive.stsci.edu/kepler/), which holds the entire Kepler and K2 data archive. We'll download the Kepler-10 light curve using the `from_archive` function, as shown below. *(Note: we're adding the keyword `quarter=3` to download only the data from the third Kepler quarter. There were 17 quarters during the Kepler mission.)* | from lightkurve import KeplerTargetPixelFile
tpf = KeplerTargetPixelFile.from_archive("Kepler-10", quarter=3) | Downloading URL https://mast.stsci.edu/api/v0/download/file?uri=mast:Kepler/url/missions/kepler/target_pixel_files/0119/011904151/kplr011904151-2009350155506_lpd-targ.fits.gz to ./mastDownload/Kepler/kplr011904151_lc_Q111111110111011101/kplr011904151-2009350155506_lpd-targ.fits.gz ... [Done]
| MIT | docs/source/tutorials/2.02-recover-a-planet.ipynb | ceb8/lightkurve |
Let's use the `plot` method and pass along a few plotting arguments. | tpf.plot(scale='log'); | _____no_output_____ | MIT | docs/source/tutorials/2.02-recover-a-planet.ipynb | ceb8/lightkurve |
The target pixel file contains one bright star with approximately 50,000 counts. Now, we will use the ``to_lightcurve`` method to create a simple aperture photometry lightcurve using the mask defined by the pipeline, which is stored in `tpf.pipeline_mask`. | lc = tpf.to_lightcurve(aperture_mask=tpf.pipeline_mask) | _____no_output_____ | MIT | docs/source/tutorials/2.02-recover-a-planet.ipynb | ceb8/lightkurve |
Let's take a look at the output lightcurve. | lc.plot(); | _____no_output_____ | MIT | docs/source/tutorials/2.02-recover-a-planet.ipynb | ceb8/lightkurve |
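The recovery step itself amounts to phase-folding the light curve at the planet's orbital period so that every transit lines up; lightkurve provides `LightCurve.fold()` for this. Below is a hedged plain-NumPy sketch on synthetic data; only the ~0.84-day period of Kepler-10b is a real published value, everything else is a stand-in:

```python
import numpy as np

# Plain-NumPy sketch of phase-folding (lightkurve's LightCurve.fold() does
# this for you). The flux below is a synthetic stand-in, not real Kepler data.
period = 0.84                        # days, approximate period of Kepler-10b
time = np.linspace(0, 30, 3000)      # 30 days of fake observations
phase = (time / period) % 1.0
# Toy transit: a 0.02%-deep dip whenever the phase is near 0.5.
flux = 1.0 - 0.0002 * (np.abs(phase - 0.5) < 0.01)

# Fold: sort the samples by phase so all transits stack on top of each other.
order = np.argsort(phase)
folded_phase, folded_flux = phase[order], flux[order]

# The dip now sits at phase ~0.5 in the folded curve.
print(round(folded_phase[np.argmin(folded_flux)], 1))  # 0.5
```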