Review

Let's remind ourselves what we've learned.

Exercise. We explore the Census's Business Dynamics Statistics, a huge collection of data about firms. We've extracted a small piece of one of their databases that includes these variables for 2013:

- Size: size category of firms based on number of employees
- Firms: number of firms in each size category
- Emp: number of employees in each size category

Run the code cell below to load the data.
data = {'Size': ['a) 1 to 4', 'b) 5 to 9', 'c) 10 to 19', 'd) 20 to 49', 'e) 50 to 99',
                 'f) 100 to 249', 'g) 250 to 499', 'h) 500 to 999', 'i) 1000 to 2499',
                 'j) 2500 to 4999', 'k) 5000 to 9999', 'l) 10000+'],
        'Firms': [2846416, 1020772, 598153, 373345, 115544, 63845,
                  19389, 9588, 6088, 2287, 1250, 1357],
        'Emp': [5998912, 6714924, 8151891, 11425545, 8055535, 9788341,
                6611734, 6340775, 8321486, 6738218, 6559020, 32556671]}
bds = pd.DataFrame(data)
bds.head(3)
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
NYUDataBootcamp/Materials
mit
Introducing kNN
knn3scores = cross_val_score(knn3, XTrain, yTrain, cv=5)
print(knn3scores)
print("Mean of scores KNN3:", knn3scores.mean())

knn99scores = cross_val_score(knn99, XTrain, yTrain, cv=5)
print(knn99scores)
print("Mean of scores KNN99:", knn99scores.mean())

XTrain, XTest, yTrain, yTest = train_test_split(X, y, random_state=1)  # seed 1

knn = KNeighborsClassifier()
n_neighbors = np.arange(3, 151, 2)
grid = GridSearchCV(knn, [{'n_neighbors': n_neighbors}], cv=10)
grid.fit(XTrain, yTrain)
# grid_scores_ was removed in newer scikit-learn; cv_results_ holds the mean CV scores
cv_scores = grid.cv_results_['mean_test_score']

train_scores = []
test_scores = []
for n in n_neighbors:
    knn.n_neighbors = n
    knn.fit(XTrain, yTrain)
    train_scores.append(metrics.accuracy_score(yTrain, knn.predict(XTrain)))
    test_scores.append(metrics.accuracy_score(yTest, knn.predict(XTest)))

plt.plot(n_neighbors, train_scores, c="blue", label="Training Scores")
plt.plot(n_neighbors, test_scores, c="brown", label="Test Scores")
plt.plot(n_neighbors, cv_scores, c="black", label="CV Scores")
plt.xlabel('Number of K nearest neighbors')
plt.ylabel('Classification Accuracy')
plt.gca().invert_xaxis()
plt.legend(loc="upper left")
plt.show()
model_optimization/messy_modelling.ipynb
nslatysheva/data_science_blogging
gpl-3.0
Next up, let's split the dataset into a training and a test set. The training set will be used to develop and tune our predictive models. The test set will be left completely alone until the very end, at which point we'll run our finished models on it. Having a test set allows us to get a good estimate of how well our models would perform in the wild, on unseen data.
# Note: train_test_split lives in sklearn.model_selection in newer scikit-learn
# (sklearn.cross_validation was removed)
from sklearn.model_selection import train_test_split

# Split into training and test sets
XTrain, XTest, yTrain, yTest = train_test_split(X, y, random_state=1)
We are first going to try to predict spam emails with a random forest classifier. Chapter 8 of the Introduction to Statistical Learning book provides a truly excellent introduction to the theory behind random forests. Briefly, random forests build a collection of classification trees, each of which tries to predict classes by recursively splitting the data on the features (and feature values) that separate the classes best. Each tree is trained on bootstrapped data, and each split is only allowed to consider a random subset of the variables. So, an element of randomness is introduced, a variety of different trees are built, and the 'random forest' ensembles these base learners together. Out of the box, scikit-learn's random forest classifier already performs quite well on the spam dataset:
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics

rf = RandomForestClassifier()
rf.fit(XTrain, yTrain)

rf_predictions = rf.predict(XTest)
print(metrics.classification_report(yTest, rf_predictions))
print("Overall Accuracy:", round(metrics.accuracy_score(yTest, rf_predictions), 2))
An overall accuracy of 0.95 is very good for a start, but keep in mind that this is a heavily idealized dataset. Next up, we are going to learn how to pick the best parameters for the random forest algorithm (as well as for an SVM and a logistic regression classifier) in order to get better models with (hopefully!) improved accuracy.

The perils of overfitting

In order to build the best possible model, one that does a good job of describing the underlying trends in a dataset, we need to pick the right hyperparameter (HP) values. In the following examples we will introduce different strategies for searching for the set of HPs that defines the best model, but first we need a slight detour to explain how to avoid a major pitfall when tuning models: overfitting. The hallmark of overfitting is good training performance and bad testing performance.

As we mentioned above, HPs are not optimised while a learning algorithm is learning, so we need other strategies to optimise them. The most basic approach is simply to test different possible HP values and see how the model performs. In a random forest, some hyperparameters we can optimise are n_estimators and max_features. n_estimators controls the number of trees in the forest: the more the better, but more trees come at the expense of longer training time. max_features controls the size of the random selection of features the algorithm is allowed to consider when splitting a node. Let's try out some HP values.
n_estimators = np.array([5, 100])
max_features = np.array([10, 50])
We can manually write a small loop to test out how well the different combinations of these fare (later, we'll find out better ways to do this):
from itertools import product

# get grid of all possible combinations of HP values
hp_combinations = list(product(n_estimators, max_features))

for hp_combo in hp_combinations:
    print(hp_combo)
    # Train and output accuracies
    rf = RandomForestClassifier(n_estimators=hp_combo[0], max_features=hp_combo[1])
    rf.fit(XTrain, yTrain)
    RF_predictions = rf.predict(XTest)
    print("Overall Accuracy:", round(metrics.accuracy_score(yTest, RF_predictions), 2))
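The "better way" hinted at above is scikit-learn's GridSearchCV, which builds the combination grid and cross-validates each candidate for us. Here is a minimal sketch on synthetic data (the `make_classification` stand-in and the parameter values are assumptions for illustration, not the notebook's spam data):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the spam dataset
X, y = make_classification(n_samples=200, n_features=20, random_state=1)
XTrain, XTest, yTrain, yTest = train_test_split(X, y, random_state=1)

# Candidate HP values; GridSearchCV tries every combination with 3-fold CV
param_grid = {'n_estimators': [5, 100], 'max_features': [5, 10]}
grid = GridSearchCV(RandomForestClassifier(random_state=1), param_grid, cv=3)
grid.fit(XTrain, yTrain)

print(grid.best_params_)  # best HP combination found by cross-validation
print(round(grid.score(XTest, yTest), 2))  # accuracy on the held-out test set
```

Note that the search is fit on the training set only, so the test set still gives an honest estimate of generalization.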
Pandas Series and DataFrame objects

There are two main data structures in pandas:
- Series (1-dimensional data)
- DataFrames (2-dimensional data)

There are other, lesser-used data structures for higher-dimensional data:
- Panel (3-dimensional data), which will be removed from future versions of pandas and replaced with xarray
- xarray (>2 dimensions)

Here, the 1- and 2-dimensional data structures are the focus of this lesson. Pandas DataFrames are analogous to R's data.frame, but aim to provide additional functionality. Both DataFrame and Series data structures have indices, which are shown on the left:
series1 = pd.Series([1,2,3,4]) print(series1)
10 - Pandas Crash Course.ipynb
blakeflei/IntroScientificPythonWithJupyter
bsd-3-clause
DataFrames use the IPython display method to look pretty, but show just fine when printed as well. (There's a way to make all DataFrames print pretty via the IPython.display.display method, but this isn't necessary to view the values):
df1 = pd.DataFrame([[1,2,3,4],[10,20,30,40]])
print(df1)
df1
Indices can be named:
# Rename the columns
df1.columns = ['A','B','C','D']
df1.index = ['zero','one']
df1

# Create the dataframe with the columns
df1 = pd.DataFrame([[1,2,3,4],[10,20,30,40]],
                   columns=['A','B','C','D'],
                   index=['zero','one'])
df1
Data Input and Output
df1 = pd.DataFrame(np.random.randn(5,4),
                   columns=['A','B','C','D'],
                   index=['zero','one','two','three','four'])
print(df1)
CSV Files
df1.to_csv('datafiles/pandas_df1.csv')
!ls datafiles
df2 = pd.read_csv('datafiles/pandas_df1.csv', index_col=0)
print(df2)
HDF5 files
df1.to_hdf('datafiles/pandas_df1.h5', 'df')
!ls datafiles
df2 = pd.read_hdf('datafiles/pandas_df1.h5', 'df')
print(df2)
Data types Show the datatypes of each column:
df2.dtypes
We can create dataframes of multiple datatypes:
col1 = range(6)
col2 = np.random.rand(6)
col3 = ['zero','one','two','three','four','five']
col4 = ['blue', 'cow','blue', 'cow','blue', 'cow']

df_types = pd.DataFrame({'integers': col1,
                         'floats': col2,
                         'words': col3,
                         'cow color': col4})
print(df_types)
df_types.dtypes
We can also set the 'cow color' column to a category:
df_types['cow color'] = df_types['cow color'].astype("category")
df_types.dtypes
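One practical reason to use the category dtype is memory: repeated strings are stored once as category labels plus small integer codes. A quick sketch (the column values here are made up for illustration):

```python
import pandas as pd

# A long column with only two distinct string values
s_obj = pd.Series(['blue', 'cow'] * 3000)   # plain object dtype
s_cat = s_obj.astype('category')            # two categories + compact integer codes

print(s_obj.memory_usage(deep=True))  # bytes for 6000 Python strings
print(s_cat.memory_usage(deep=True))  # far smaller: codes plus 2 category labels
print(s_cat.cat.categories.tolist())
```

The `.cat` accessor exposes the category labels and codes once a column has this dtype.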
Indexing and Setting Data

Pandas supports a lot of different operations; here are the meat and potatoes. The following describes indexing of data, but setting data is as simple as a reassignment.
time_stamps = pd.date_range(start='2000-01-01', end='2000-01-20', freq='D')  # Define index of time stamps
df1 = pd.DataFrame(np.random.randn(20,4),
                   columns=['A','B','C','D'],
                   index=time_stamps)
print(df1)
Head and Tail Print the beginning and ending entries of a pandas data structure
df1.head(3)  # Show the first n rows, default is 5
df1.tail()   # Show the last n rows
We can also separate the metadata (labels, etc) from the data, yielding a numpy-like output.
df1.columns
df1.values
Indexing Data

Pandas provides the means to index data via named columns, or via numpy-like indices. Indexing is [row, column], just as in numpy. Data is accessible by column:
df1['A'].head() # df1.A.head() is equivalent
Note that tab completion is enabled for column names:
df1.A
<div> <img style="float: left;" src="images/10-01_column-tab.png" width=30%> </div> We can specify row ranges:
df1[:2]
Label based indexing (.loc) Slice based on the labels.
df1.loc[:'2000-01-5',"A"] # Note that this includes the upper index
Integer based indexing (.iloc) Slice based on the index number.
df1.iloc[:3,0] # Note that this does not include the upper index like numpy
Fast single element label indexing (.at) - fast .loc Intended for fast, single indexes.
index_timestamp = pd.Timestamp('2000-01-03')  # Create a timestamp object to index
df1.at[index_timestamp, "A"]  # Index using timestamp (vs string)
Fast single element label indexing (.iat) - fast .iloc Intended for fast, single indexes.
df1.iat[3,0]
Logical indexing A condition is used to select the values within a slice or the entire Pandas object. Using a conditional statement, a true/false DataFrame is produced:
df1.head()>0.5
That matrix can then be used to index the DataFrame:
df1[df1>0.5].head() # Note that the values that were 'False' are 'NaN'
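A boolean Series built from a single column can also be used to keep whole rows, rather than producing NaNs. A sketch with a made-up frame:

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
df = pd.DataFrame(rng.randn(20, 4), columns=['A', 'B', 'C', 'D'])

mask = df['A'] > 0.5   # boolean Series, one entry per row
filtered = df[mask]    # keeps only the rows where the condition holds

print(filtered.shape)
print((filtered['A'] > 0.5).all())  # True
```

Indexing with a row-level boolean Series drops rows, while indexing with a whole boolean DataFrame (as above) keeps the shape and masks entries with NaN.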
Logical indexing via isin It's also possible to filter via the index value:
df_types
bool_series = df_types['cow color'].isin(['blue'])
print(bool_series)      # Show the logical indexing
df_types[bool_series]   # Index where the values are true
Sorting by column
df_types.sort_values(by="floats")
Dealing with Missing Data By convention, pandas uses the NaN value to represent missing data. There are a few functions surrounding the handling of NaN values:
df_nan = pd.DataFrame(np.random.rand(6,2), columns=['A','B'])
df_nan
# Keep 'B' only where it exceeds 0.5; entries where ['B'] <= 0.5 become NaN
df_nan['B'] = df_nan['B'].where(df_nan['B'] > 0.5)
print(df_nan)
Print a logical DataFrame where NaN is located:
df_nan.isnull()
Drop all rows with NaN:
df_nan.dropna(how = 'any')
Replace NaN entries:
df_nan.fillna(value = -1)
Concatenating and Merging Data Bringing together DataFrames or Series objects: Concatenate
df1 = pd.DataFrame(np.zeros([3,3], dtype=int))  # np.int is deprecated; use the builtin int
df1
df2 = pd.concat([df1, df1], axis=0)
df2 = df2.reset_index(drop=True)  # Renumber the index
df2
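Concatenation also works along columns with axis=1; a small self-contained sketch:

```python
import numpy as np
import pandas as pd

a = pd.DataFrame(np.zeros([3, 3], dtype=int))

side = pd.concat([a, a], axis=1)      # stack side by side: 3 rows, 6 columns
side.columns = range(side.shape[1])   # renumber the duplicated column labels

print(side.shape)  # (3, 6)
```

Without the renumbering, the result keeps the original duplicated column labels (0, 1, 2, 0, 1, 2), which is legal in pandas but easy to trip over.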
Append Adding an additional group after the first group:
newdf = pd.DataFrame({0: [1], 1: [1], 2: [1]})
print(newdf)
# DataFrame.append was removed in pandas 2.0; pd.concat does the same job
df3 = pd.concat([df2, newdf], ignore_index=True)
df3
SQL-like merging Pandas can do structured query language (SQL) like merges of data:
left = pd.DataFrame({'numbers': ['K0', 'K1', 'K2', 'K3'],
                     'English': ['one', 'two', 'three', 'four'],
                     'Spanish': ['uno', 'dos', 'tres', 'cuatro'],
                     'German': ['erste', 'zweite', 'dritte', 'vierte']})
left

right = pd.DataFrame({'numbers': ['K0', 'K1', 'K2', 'K3'],
                      'French': ['un', 'deux', 'trois', 'quatre'],
                      'Afrikaans': ['een', 'twee', 'drie', 'vier']})
right

result = pd.merge(left, right, on='numbers')
result
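By default pd.merge performs an inner join, keeping only keys present in both frames; the how parameter switches to left, right, or outer joins, where unmatched keys survive and missing values become NaN. A self-contained sketch with small made-up frames:

```python
import pandas as pd

left = pd.DataFrame({'numbers': ['K0', 'K1', 'K2'],
                     'English': ['one', 'two', 'three']})
right = pd.DataFrame({'numbers': ['K0', 'K1', 'K9'],
                      'French': ['un', 'deux', 'neuf']})

inner = pd.merge(left, right, on='numbers')               # only matching keys
outer = pd.merge(left, right, on='numbers', how='outer')  # all keys, NaN where absent

print(inner['numbers'].tolist())  # ['K0', 'K1']
print(len(outer))                 # 4 distinct keys in total
```

This mirrors SQL's INNER JOIN vs FULL OUTER JOIN semantics.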
Grouping Operations Often, there is a need to summarize the data or change the output of the data to make it easier to work with, especially for categorical data types.
dfg = pd.DataFrame({'A': ['clogs','sandals','jellies']*2,
                    'B': ['socks','footies']*3,
                    'C': [1,1,1,3,2,2],
                    'D': np.random.rand(6)})
dfg
Pivot Table

Without changing the data in any way, summarize the output in a different format. Specify the indices, columns, and values:
dfg.pivot_table(index=['A','B'], columns=['C'], values='D')
Stacking Column labels can be brought into the rows.
dfg.stack()
Groupby Groupby groups values, creating a Python object to which functions can be applied:
dfg.groupby(['B']).count()
dfg.groupby(['A']).mean()
Operations on Pandas Data Objects

Whether it's the entire DataFrame or a single Series within it, there are a variety of methods that can be applied. Here are a few helpful ones: simple statistics (mean, stdev, etc.).
dfg['D'].mean()
Shifting

Note that the positions vacated by the shift are filled with NaN:
dfg['D']
dfg_Ds = dfg['D'].shift(2)
dfg_Ds
Add, subtract, multiply, divide: Operations are element-wise:
dfg['D'].div(dfg_Ds)
Histogram
dfg
dfg['C'].value_counts()
Describe Excluding NaN values, print some descriptive statistics about the collection of values.
df_types.describe()
Transpose Exchange the rows and columns (flip about the diagonal):
df_types.T
Applying Any Function to Pandas Data Objects

Pandas objects have methods that allow functions to be applied with greater control, namely the .apply method:
def f(x):  # Define function
    return x + 1

dfg['C'].apply(f)
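.apply also works on an entire DataFrame: by default the function receives each column as a Series, and with axis=1 it receives each row instead. A sketch on a small made-up numeric frame:

```python
import pandas as pd

df = pd.DataFrame({'x': [1, 2, 3], 'y': [10, 20, 30]})

col_ranges = df.apply(lambda s: s.max() - s.min())      # function applied per column
row_sums = df.apply(lambda r: r['x'] + r['y'], axis=1)  # function applied per row

print(col_ranges.tolist())  # [2, 20]
print(row_sums.tolist())    # [11, 22, 33]
```

For simple element-wise arithmetic like this, vectorized expressions (df['x'] + df['y']) are faster; .apply earns its keep when the logic is genuinely per-row or per-column.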
Lambda functions may also be used
dfg['C'].apply(lambda x: x + 1)
String functions: Pandas has access to string methods:
dfg['A'].str.title() # Make the first letter uppercase
Plotting

Pandas wraps the matplotlib library for convenient plotting.
n = 100
X = np.linspace(0, 5, n)
Y1, Y2 = np.log(X**2 + 2), np.sin(X) + 2

dfp = pd.DataFrame({'X': X, 'Y1': Y1, 'Y2': Y2})
dfp.head()

dfp.plot(x='X')
plt.show()
Matplotlib styles are available too:
style_name = 'classic'
plt.style.use(style_name)

dfp.plot(x='X')
plt.title('Log($x^2$) and Sine', fontsize=16)
plt.xlabel('X Label', fontsize=16)
plt.ylabel('Y Label', fontsize=16)
plt.show()

mpl.rcdefaults()  # Reset matplotlib rc defaults
Data simplification

Load the data
# Path for Linux
path = '../Recursos/indian_liver_patient.csv'
# Path for Windows
#path = '..\Recursos\indian_liver_patient.csv'
dataset = pd.read_csv(path, delimiter=',', header=0)

# Drop missing values
dataset = dataset.dropna()

# Encode Gender as a binary value
# (pd.Categorical.from_array was removed; pd.Categorical does the same)
dataset["Gender"] = pd.Categorical(dataset["Gender"]).codes

# Shift the class labels from 1-2 to 0-1
dataset['Dataset'] = dataset['Dataset'] - 1

# Random 70/30 train/test split
train_test = train_test_split(dataset, test_size=0.3)
train = train_test[0]
test = train_test[1]
dataset

# Separate the target variable from the rest
train_Y = train["Dataset"]
train_X = train.drop("Dataset", axis=1)
test_Y = test["Dataset"]
test_X = test.drop("Dataset", axis=1)
Mutual information
mi_regr = FS.mutual_info_regression(train_X, train_Y)
print(mi_regr)
indice_regr = np.argsort(mi_regr)[::-1]
print(indice_regr)
Because the mutual information values vary considerably from run to run, owing to the low scores obtained, we decided to compute it 100 times and work with the mean of the results, which yields more or less similar rankings, with only small changes in position. Some examples of how the index order varies:<br> ex1: 5 0 9 6 3 4 2 8 7 1 <br> ex2: 0 6 3 4 2 5 8 9 1 7<br> ex3: 0 6 2 3 5 1 4 9 8 7<br> ex4: 6 2 3 0 9 5 8 4 7 1<br> ex5: 9 2 3 0 6 5 4 8 7 1<br>
mi_regr = np.zeros(10)
# Average 100 runs (range(1, 100) would only run 99 times while dividing by 100)
for i in range(100):
    mi_regr = mi_regr + FS.mutual_info_regression(train_X, train_Y)
mi_regr = mi_regr / 100
print(mi_regr)

names = train_X.axes[1]
print(names)
indice_regr = np.argsort(mi_regr)[::-1]
print(indice_regr)
names[indice_regr]
When we take the mean, the values do not vary as much from one run to another:<br> ex1: 3 5 0 6 2 4 9 1 8 7<br> ex2: 3 0 5 6 2 4 9 8 1 7<br> ex3: 3 0 5 6 2 4 9 8 1 7<br> ex4: 3 5 6 0 2 4 9 8 1 7<br> ex5: 5 6 3 0 2 4 9 8 1 7<br> From here on we will use the values from runs 2 and 3, which are identical.
indice_regr = [3, 0, 5, 6, 2, 4, 9, 8, 1, 7]
regr_var = names[indice_regr]
regr_var

plt.figure(figsize=(8,6))
plt.subplot(121)
plt.scatter(dataset[dataset.Dataset==0].Direct_Bilirubin, dataset[dataset.Dataset==0].Age, color='red')
plt.scatter(dataset[dataset.Dataset==1].Direct_Bilirubin, dataset[dataset.Dataset==1].Age, color='blue')
plt.title('Good Predictor Variables \n Direct_Bilirubin vs Age')
plt.xlabel('Direct_Bilirubin')
plt.ylabel('Age')
plt.legend(['Sick','Healthy'])
plt.subplot(122)
plt.scatter(dataset[dataset.Dataset==0].Total_Protiens, dataset[dataset.Dataset==0].Albumin, color='red')
plt.scatter(dataset[dataset.Dataset==1].Total_Protiens, dataset[dataset.Dataset==1].Albumin, color='blue')
plt.title('Good Predictor Variables \n Total_Protiens vs Albumin')
plt.xlabel('Total_Protiens')
plt.ylabel('Albumin')
plt.legend(['Sick','Healthy'])
plt.show()
As the plots show, although the first two variables (Direct_Bilirubin and Age) do not separate the classes well, which is to be expected given the low scores obtained, they still separate them better than two of the last three variables (Total_Protiens and Albumin).
def specificity(y_true, y_pred):
    tn, fp, fn, tp = metrics.confusion_matrix(y_true, y_pred).ravel()
    return tn / (tn + fp)

# With all 10 variables
modelo_lr = LogisticRegression()
modelo_lr.fit(X=train_X, y=train_Y)
predicion = modelo_lr.predict(test_X)
print('Logistic regression with all variables\n')
print(f'\tprecision_score={metrics.precision_score(y_true=test_Y, y_pred=predicion)}')
print(f'\trecall_score={metrics.recall_score(y_true=test_Y, y_pred=predicion)}')
print(f'\taccuracy_score={metrics.accuracy_score(y_true=test_Y, y_pred=predicion)}')
print(f'\tspecificity_score={specificity(y_true=test_Y, y_pred=predicion)}')

train_X_copia = train_X.copy()
test_X_copia = test_X.copy()
allvar = ""
nvar = 9
# Removing variables one at a time, from least to most informative
for i in regr_var[:0:-1]:
    allvar = allvar + i + ", "
    train_X_copia = train_X_copia.drop(i, axis=1)
    test_X_copia = test_X_copia.drop(i, axis=1)
    modelo_lr = LogisticRegression()
    modelo_lr.fit(X=train_X_copia, y=train_Y)
    print('With', nvar, 'variables')
    nvar = nvar - 1
    print('Removing\n' + allvar + '\n')
    predicion = modelo_lr.predict(test_X_copia)
    print(f'\tprecision_score={metrics.precision_score(y_true=test_Y, y_pred=predicion)}')
    print(f'\trecall_score={metrics.recall_score(y_true=test_Y, y_pred=predicion)}')
    print(f'\taccuracy_score={metrics.accuracy_score(y_true=test_Y, y_pred=predicion)}')
    print(f'\tspecificity_score={specificity(y_true=test_Y, y_pred=predicion)}')
Chi2
# Chi^2 cannot handle negative values, so the normalized data cannot be used here
chi = FS.chi2(X=train_X, y=train_Y)[0]
print(chi)
indice_chi = np.argsort(chi)[::-1]
print(indice_chi)
print(names[indice_chi])
chi_var = names[indice_chi]

plt.figure()
plt.scatter(dataset[dataset.Dataset==0].Aspartate_Aminotransferase, dataset[dataset.Dataset==0].Alamine_Aminotransferase, color='red')
plt.scatter(dataset[dataset.Dataset==1].Aspartate_Aminotransferase, dataset[dataset.Dataset==1].Alamine_Aminotransferase, color='blue')
plt.title('Good Predictor Variables Chi-Square \n Aspartate_Aminotransferase vs Alamine_Aminotransferase')
plt.xlabel('Aspartate_Aminotransferase')
plt.ylabel('Alamine_Aminotransferase')
plt.legend(['Sick','Healthy'])
plt.show()

train_X_copia = train_X.copy()
test_X_copia = test_X.copy()
allvar = ""
nvar = 9
# Removing variables
for i in chi_var[:0:-1]:
    allvar = allvar + i + ", "
    train_X_copia = train_X_copia.drop(i, axis=1)
    test_X_copia = test_X_copia.drop(i, axis=1)
    modelo_lr = LogisticRegression()
    modelo_lr.fit(X=train_X_copia, y=train_Y)
    print('With', nvar, 'variables')
    nvar = nvar - 1
    print('Removing\n' + allvar + '\n')
    predicion = modelo_lr.predict(test_X_copia)
    print(f'\tprecision_score={metrics.precision_score(y_true=test_Y, y_pred=predicion)}')
    print(f'\trecall_score={metrics.recall_score(y_true=test_Y, y_pred=predicion)}')
    print(f'\taccuracy_score={metrics.accuracy_score(y_true=test_Y, y_pred=predicion)}')
    print(f'\tspecificity_score={specificity(y_true=test_Y, y_pred=predicion)}')
PCA

As we saw in the theory, PCA depends on the scale of the data, so we will carry out the analysis on normalized data, using the preprocessing module of the sklearn library.
# sklearn.decomposition.pca is a private module; import from the public path
from sklearn.decomposition import PCA
from sklearn import preprocessing

X_scaled = preprocessing.scale(train_X)
pca = PCA()
pca.fit(X_scaled)

# Plot the PCA results
plt.plot(pca.explained_variance_)
plt.ylabel("eigenvalues")
plt.xlabel("position")
plt.show()
print("Eigenvalues\n", pca.explained_variance_)

# Percentage of variance explained by each component
print('\nExplained variance ratio:\n %s' % str(pca.explained_variance_ratio_))

pca = pca.explained_variance_ratio_
indice_pca = np.argsort(pca)[::-1]
print(indice_pca)
print(names[indice_pca])
pca_var = names[indice_pca]

train_X_copia = train_X.copy()
test_X_copia = test_X.copy()
allvar = ""
nvar = 9
# Fit the ML algorithm on variable subsets guided by PCA
for i in pca_var[:0:-1]:
    allvar = allvar + i + ", "
    train_X_copia = train_X_copia.drop(i, axis=1)
    test_X_copia = test_X_copia.drop(i, axis=1)
    modelo_lr = LogisticRegression()
    modelo_lr.fit(X=train_X_copia, y=train_Y)
    print('With', nvar, 'variables')
    nvar = nvar - 1
    print('Removing\n' + allvar + '\n')
    predicion = modelo_lr.predict(test_X_copia)
    print(f'\tprecision_score={metrics.precision_score(y_true=test_Y, y_pred=predicion)}')
    print(f'\trecall_score={metrics.recall_score(y_true=test_Y, y_pred=predicion)}')
    print(f'\taccuracy_score={metrics.accuracy_score(y_true=test_Y, y_pred=predicion)}')
    print(f'\tspecificity_score={specificity(y_true=test_Y, y_pred=predicion)}')
As the previous experiments show, we reach the best precision by removing the variables Albumin_and_Globulin_Ratio, Albumin, Total_Protiens, Aspartate_Aminotransferase, Alamine_Aminotransferase, and Alkaline_Phosphotase. So far this is the best result we have obtained across all the methods.

LDA

Unlike PCA, LDA is scale-invariant, so we do not have to work with the normalized data.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

lda = LDA()
lda.fit(train_X, train_Y)
print("Explained percentage:", lda.explained_variance_ratio_)

ldaRes = lda.explained_variance_ratio_
indice_lda = np.argsort(ldaRes)[::-1]
print(indice_lda)
print(names[indice_lda])
pca_var = names[indice_lda]

train_X_copia = train_X.copy()
test_X_copia = test_X.copy()
allvar = ""
nvar = 9
# Removing variables
for i in pca_var[:0:-1]:
    allvar = allvar + i + ", "
    train_X_copia = train_X_copia.drop(i, axis=1)
    test_X_copia = test_X_copia.drop(i, axis=1)
    modelo_lr = LogisticRegression()
    modelo_lr.fit(X=train_X_copia, y=train_Y)
    print('With', nvar, 'variables')
    nvar = nvar - 1
    print('Removing\n' + allvar + '\n')
    predicion = modelo_lr.predict(test_X_copia)
    print(f'\tprecision_score={metrics.precision_score(y_true=test_Y, y_pred=predicion)}')
    print(f'\trecall_score={metrics.recall_score(y_true=test_Y, y_pred=predicion)}')
    print(f'\taccuracy_score={metrics.accuracy_score(y_true=test_Y, y_pred=predicion)}')
    print(f'\tspecificity_score={specificity(y_true=test_Y, y_pred=predicion)}')
Task #3 Given the variables: stock_index = "SP500" price = 300 Use .format() to print the following string: The SP500 is at 300 today.
stock_index = "SP500"
price = 300
print("The {} is at {} today.".format(stock_index, price))
Task #5 Given strings with this form where the last source value is always separated by two dashes -- "PRICE:345.324:SOURCE--QUANDL" Create a function called source_finder() that returns the source. For example, the above string passed into the function would return "QUANDL"
def source_finder(s):
    return s.split('--')[-1]

source_finder("PRICE:345.324:SOURCE--QUANDL")
Task #5 Create a function called price_finder that returns True if the word 'price' is in a string. Your function should work even if 'Price' is capitalized or next to punctuation ('price!')
def price_finder(s):
    return 'price' in s.lower()

price_finder("What is the price?")
price_finder("DUDE, WHAT IS PRICE!!!")
price_finder("The price is 300")
Task #6 Create a function called count_price() that counts the number of times the word "price" occurs in a string. Account for capitalization and if the word price is next to punctuation.
def count_price(s):
    count = 0
    for word in s.lower().split():
        # Need to use in; == would miss words with punctuation attached
        if 'price' in word:
            count += 1  # Note the indentation!
    return count

# Simpler alternative
def count_price(s):
    return s.lower().count('price')

s = 'Wow that is a nice price, very nice Price! I said price 3 times.'
count_price(s)
Task #7 Create a function called avg_price that takes in a list of stock price numbers and calculates the average (Sum of the numbers divided by the number of elements in the list). It should return a float.
def avg_price(stocks):
    return sum(stocks) / len(stocks)  # Python 2 users should multiply the numerator by 1.0

avg_price([3,4,5])
Single-machine simulation

This is currently enabled by default, as follows:
evaluate()
site/ja/federated/tutorials/simulations.ipynb
tensorflow/docs-l10n
apache-2.0
1. Load and examine the FITS file

Here we begin with 2-dimensional data that were stored in FITS format from some simulations. We have Stokes I, Q, and U maps. We'll first load a FITS file and examine the header.
file_i = download_file(
    'http://data.astropy.org/tutorials/synthetic-images/synchrotron_i_lobe_0700_150MHz_sm.fits',
    cache=True)
hdulist = fits.open(file_i)
hdulist.info()
hdu = hdulist['NN_EMISSIVITY_I_LOBE_150.0MHZ']
hdu.header
We can see that this FITS file, which was created in yt, has x and y coordinates in physical units (cm). We want to convert them into sky coordinates. Before we proceed, let's find out the range of the data and plot a histogram.
print(hdu.data.max())
print(hdu.data.min())

np.seterr(divide='ignore')  # suppress the warnings raised by taking log10 of data with zeros
plt.hist(np.log10(hdu.data.flatten()), range=(-3, 2), bins=100);
Once we know the range of the data, we can do a visualization with the proper range (vmin and vmax).
fig = plt.figure(figsize=(6,12))
fig.add_subplot(111)
# We plot it in log-scale and add a small number to avoid nan values.
plt.imshow(np.log10(hdu.data+1E-3), vmin=-1, vmax=1, origin='lower')
notebooks/synthetic-images/synthetic-images.ipynb
adrn/tutorials
cc0-1.0
2. Set up astrometry coordinates

From the header, we know that the x and y axes are in centimeters. However, in an observation we usually have RA and Dec. To convert physical units to sky coordinates, we need to make some assumptions about where the object is located, i.e., the distance to the object and its central RA and Dec.
# distance to the object
dist_obj = 200*u.Mpc

# We have the RA in hh:mm:ss and Dec in dd:mm:ss format.
# We will use SkyCoord to convert them into degrees later.
ra_obj = '19h59m28.3566s'
dec_obj = '+40d44m02.096s'
notebooks/synthetic-images/synthetic-images.ipynb
adrn/tutorials
cc0-1.0
Here we convert the pixel scale from cm to degrees by dividing by the distance to the object.
cdelt1 = ((hdu.header['CDELT1']*u.cm/dist_obj.to('cm'))*u.rad).to('deg')
cdelt2 = ((hdu.header['CDELT2']*u.cm/dist_obj.to('cm'))*u.rad).to('deg')
print(cdelt1, cdelt2)
notebooks/synthetic-images/synthetic-images.ipynb
adrn/tutorials
cc0-1.0
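The division above is just the small-angle approximation $\theta \approx s/d$. A unit-free sketch in plain Python (the pixel size and distance below are illustrative numbers, not values read from the header):

```python
import math

def cm_to_deg(length_cm, distance_cm):
    # Small-angle approximation: theta [rad] = s / d, then radians -> degrees
    return math.degrees(length_cm / distance_cm)

# Illustrative numbers: a 1e21 cm pixel seen from 200 Mpc
MPC_CM = 3.0857e24  # centimeters per megaparsec
print(cm_to_deg(1e21, 200 * MPC_CM))
```

For sanity, a length equal to the distance subtends one radian, i.e. about 57.3 degrees.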
Use astropy.wcs.WCS to prepare a FITS header.
w = WCS(naxis=2)

# reference pixel coordinate
w.wcs.crpix = [hdu.data.shape[0]/2, hdu.data.shape[1]/2]

# sizes of the pixels in degrees
w.wcs.cdelt = [-cdelt1.value, cdelt2.value]

# converting ra and dec into degrees
c = SkyCoord(ra_obj, dec_obj)
w.wcs.crval = [c.ra.deg, c.dec.deg]

# the units of the axes are in degrees
w.wcs.cunit = ['deg', 'deg']
notebooks/synthetic-images/synthetic-images.ipynb
adrn/tutorials
cc0-1.0
Now we can convert the WCS object into a header and update the hdu.
wcs_header = w.to_header()
hdu.header.update(wcs_header)
notebooks/synthetic-images/synthetic-images.ipynb
adrn/tutorials
cc0-1.0
Let's take a look at the header. CDELT1, CDELT2, CUNIT1, CUNIT2, CRVAL1, and CRVAL2 are in sky coordinates now.
hdu.header

wcs = WCS(hdu.header)
fig = plt.figure(figsize=(6, 12))
fig.add_subplot(111, projection=wcs)
plt.imshow(np.log10(hdu.data + 1e-3), vmin=-1, vmax=1, origin='lower')
plt.xlabel('RA')
plt.ylabel('Dec')
notebooks/synthetic-images/synthetic-images.ipynb
adrn/tutorials
cc0-1.0
Now we have the sky coordinates for the image!

3. Prepare a Point Spread Function (PSF)

Simple PSFs are included in astropy.convolution. We'll use astropy.convolution.Gaussian2DKernel here. First we need to set the telescope resolution. For a 2D Gaussian, we can calculate sigma in pixels by using our pixel scale keyword cdelt2 from above.
# assume our telescope has 1 arcsecond resolution
telescope_resolution = 1*u.arcsecond

# calculate the sigma in pixels
# (cdelt2 is in degrees, so we convert the resolution to degrees too)
sigma = (telescope_resolution.to('deg')/cdelt2).value

# By default, the Gaussian kernel extends to 4 sigma
# in each direction
psf = Gaussian2DKernel(sigma)

# let's take a look (psf.array is a plain ndarray):
plt.imshow(psf.array)
notebooks/synthetic-images/synthetic-images.ipynb
adrn/tutorials
cc0-1.0
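If you want to see what the kernel contains without relying on astropy, an equivalent normalized 2D Gaussian can be built in plain NumPy (a sketch; the 4-sigma extent mirrors astropy's default, and `gaussian_kernel_2d` is a hypothetical helper):

```python
import numpy as np

def gaussian_kernel_2d(sigma_px, nsigma=4):
    # Normalized 2D Gaussian kernel extending nsigma*sigma in each direction
    half = int(np.ceil(nsigma * sigma_px))
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    k = np.exp(-(x**2 + y**2) / (2.0 * sigma_px**2))
    return k / k.sum()  # normalize so the kernel conserves flux

k = gaussian_kernel_2d(3.0)
print(k.shape, k.sum())
```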
3.a How to do this without astropy kernels

Maybe your PSF is more complicated. Here's an alternative way to do this, using a 2D Lorentzian:
# set FWHM and the psf grid
telescope_resolution = 1*u.arcsecond
gamma = (telescope_resolution.to('deg')/cdelt2).value

x_grid = np.outer(np.linspace(-gamma*4, gamma*4, int(8*gamma)), np.ones(int(8*gamma)))
r_grid = np.sqrt(x_grid**2 + np.transpose(x_grid**2))
lorentzian = Lorentz1D(fwhm=2*gamma)

# extrude a 2D azimuthally symmetric PSF
lorentzian_psf = lorentzian(r_grid)

# normalization
lorentzian_psf /= np.sum(lorentzian_psf)

# let's take a look again:
plt.imshow(lorentzian_psf, interpolation='none')
notebooks/synthetic-images/synthetic-images.ipynb
adrn/tutorials
cc0-1.0
4. Convolve image with PSF

Here we use astropy.convolution.convolve_fft to convolve the image. This routine uses Fourier transforms for faster calculation, and it is particularly fast here because our data size is a power of two. Using an FFT, however, causes boundary effects, so we need to specify how to handle the boundary. Here we choose to "wrap" the data, which means treating it as periodic.
convolved_image = convolve_fft(hdu.data, psf, boundary='wrap')

# Put a psf at the corner of the image
delta_x_psf = 100  # number of pixels from the edges
xmin, xmax = -psf.shape[1]-delta_x_psf, -delta_x_psf
ymin, ymax = delta_x_psf, delta_x_psf+psf.shape[0]
convolved_image[xmin:xmax, ymin:ymax] = psf.array/psf.array.max()*10
notebooks/synthetic-images/synthetic-images.ipynb
adrn/tutorials
cc0-1.0
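What boundary='wrap' does can be sketched with plain NumPy FFTs: circular convolution treats the image as periodic, so a delta kernel returns the image unchanged and a normalized kernel conserves total flux (`convolve_wrap` is a hypothetical helper, not part of astropy):

```python
import numpy as np

def convolve_wrap(image, kernel):
    # Periodic (circular) convolution via FFT -- the effect of boundary='wrap'
    kpad = np.zeros_like(image, dtype=float)
    kh, kw = kernel.shape
    kpad[:kh, :kw] = kernel
    # center the kernel on the origin so the output is not shifted
    kpad = np.roll(kpad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kpad)))

rng = np.random.default_rng(0)
img = rng.random((8, 8))
delta = np.array([[1.0]])
print(np.allclose(convolve_wrap(img, delta), img))  # True
```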
Now let's take a look at the convolved image.
wcs = WCS(hdu.header)
fig = plt.figure(figsize=(8, 12))
i_plot = fig.add_subplot(111, projection=wcs)
plt.imshow(np.log10(convolved_image + 1e-3), vmin=-1, vmax=1.0, origin='lower')
plt.xlabel('RA')
plt.ylabel('Dec')
plt.colorbar()
notebooks/synthetic-images/synthetic-images.ipynb
adrn/tutorials
cc0-1.0
5. Convolve Stokes Q and U images
hdulist.info()

file_q = download_file(
    'http://data.astropy.org/tutorials/synthetic-images/synchrotron_q_lobe_0700_150MHz_sm.fits',
    cache=True)
hdulist = fits.open(file_q)
hdu_q = hdulist['NN_EMISSIVITY_Q_LOBE_150.0MHZ']

file_u = download_file(
    'http://data.astropy.org/tutorials/synthetic-images/synchrotron_u_lobe_0700_150MHz_sm.fits',
    cache=True)
hdulist = fits.open(file_u)
hdu_u = hdulist['NN_EMISSIVITY_U_LOBE_150.0MHZ']

# Update the headers with the wcs_header we created earlier
hdu_q.header.update(wcs_header)
hdu_u.header.update(wcs_header)

# Convolve the images with the psf
convolved_image_q = convolve_fft(hdu_q.data, psf, boundary='wrap')
convolved_image_u = convolve_fft(hdu_u.data, psf, boundary='wrap')
notebooks/synthetic-images/synthetic-images.ipynb
adrn/tutorials
cc0-1.0
Let's plot the Q and U images.
wcs = WCS(hdu.header)
fig = plt.figure(figsize=(16, 12))

fig.add_subplot(121, projection=wcs)
plt.imshow(convolved_image_q, cmap='seismic', vmin=-0.5, vmax=0.5, origin='lower')
plt.xlabel('RA')
plt.ylabel('Dec')
plt.colorbar()

fig.add_subplot(122, projection=wcs)
plt.imshow(convolved_image_u, cmap='seismic', vmin=-0.5, vmax=0.5, origin='lower')
plt.xlabel('RA')
plt.ylabel('Dec')
plt.colorbar()
notebooks/synthetic-images/synthetic-images.ipynb
adrn/tutorials
cc0-1.0
6. Calculate polarization angle and fraction for quiver plot

Note that rotating the Stokes Q and U maps requires changing the signs of both. Here we assume that the Stokes Q and U maps were calculated defining the y/declination axis as vertical, such that Q is positive for polarization vectors along the x/right-ascension axis.
# First, we plot the background image
fig = plt.figure(figsize=(8, 16))
i_plot = fig.add_subplot(111, projection=wcs)
i_plot.imshow(np.log10(convolved_image + 1e-3), vmin=-1, vmax=1, origin='lower')

# ranges of the axes
xx0, xx1 = i_plot.get_xlim()
yy0, yy1 = i_plot.get_ylim()

# binning factor
factor = [64, 66]

# re-binned number of points in each axis
nx_new = convolved_image.shape[1] // factor[0]
ny_new = convolved_image.shape[0] // factor[1]

# These are the positions of the quivers
X, Y = np.meshgrid(np.linspace(xx0, xx1, nx_new, endpoint=True),
                   np.linspace(yy0, yy1, ny_new, endpoint=True))

# bin the data
I_bin = convolved_image.reshape(nx_new, factor[0], ny_new, factor[1]).sum(3).sum(1)
Q_bin = convolved_image_q.reshape(nx_new, factor[0], ny_new, factor[1]).sum(3).sum(1)
U_bin = convolved_image_u.reshape(nx_new, factor[0], ny_new, factor[1]).sum(3).sum(1)

# polarization angle
psi = 0.5*np.arctan2(U_bin, Q_bin)

# polarization fraction
frac = np.sqrt(Q_bin**2 + U_bin**2)/I_bin

# mask for low-signal areas
mask = I_bin < 0.1
frac[mask] = 0
psi[mask] = 0

pixX = frac*np.cos(psi)  # X-vector
pixY = frac*np.sin(psi)  # Y-vector

# keyword arguments for the quiver plot
quiveropts = dict(headlength=0, headwidth=1, pivot='middle')
i_plot.quiver(X, Y, pixX, pixY, scale=8, **quiveropts)
notebooks/synthetic-images/synthetic-images.ipynb
adrn/tutorials
cc0-1.0
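Binning aside, the core of the cell above is two formulas: the polarization angle $\psi = \frac{1}{2}\arctan2(U, Q)$ and the polarization fraction $p = \sqrt{Q^2+U^2}/I$. A minimal sketch with scalar Stokes parameters:

```python
import numpy as np

def polarization(I, Q, U):
    # Polarization fraction and angle (radians) from the Stokes parameters
    frac = np.sqrt(Q**2 + U**2) / I
    psi = 0.5 * np.arctan2(U, Q)
    return frac, psi

# purely Q-polarized light: fraction Q/I, angle 0
print(polarization(2.0, 1.0, 0.0))
# purely U-polarized light: same fraction, angle pi/4
print(polarization(2.0, 0.0, 1.0))
```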
Status Codes

- 200 -- everything went okay, and the result has been returned (if any)
- 301 -- the server is redirecting you to a different endpoint. This can happen when a company switches domain names, or an endpoint name is changed.
- 401 -- the server thinks you're not authenticated. This happens when you don't send the right credentials to access an API (we'll talk about this in a later mission).
- 400 -- the server thinks you made a bad request. This can happen when you don't send along the right data, among other things.
- 403 -- the resource you're trying to access is forbidden -- you don't have the right permissions to see it.
- 404 -- the resource you tried to access wasn't found on the server.
response = requests.get("http://api.open-notify.org/iss-now.json")
response.status_code
python-intro/Untitled1.ipynb
caromedellin/Python-notes
mit
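A minimal sketch of dispatching on the status codes listed above, without any network call (`describe_status` is a hypothetical helper, and the mapping is not exhaustive):

```python
def describe_status(code):
    # Map an HTTP status code to the category described above
    descriptions = {
        200: "OK -- result returned",
        301: "redirect -- endpoint moved",
        400: "bad request -- check the data you sent",
        401: "not authenticated -- missing or wrong credentials",
        403: "forbidden -- insufficient permissions",
        404: "not found",
    }
    return descriptions.get(code, "unhandled status")

print(describe_status(404))  # not found
```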
Query Parameters

A 400 status code indicates a bad request; in this case it means that we need to add some parameters to the request.
# Set up the parameters we want to pass to the API.
# This is the latitude and longitude of New York City.
parameters = {"lat": 40.71, "lon": -74}

# Make a get request with the parameters.
response = requests.get("http://api.open-notify.org/iss-pass.json", params=parameters)

# Print the content of the response (the data the server returned)
print(response.content)

# This gets the same data as the command above
response = requests.get("http://api.open-notify.org/iss-pass.json?lat=40.71&lon=-74")
print(response.content)
python-intro/Untitled1.ipynb
caromedellin/Python-notes
mit
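The params dict and the hand-written query string give the same result because the dict is URL-encoded before being appended to the URL. The standard library's urllib.parse.urlencode makes that encoding explicit (requests performs an equivalent step internally):

```python
from urllib.parse import urlencode

parameters = {"lat": 40.71, "lon": -74}
# urlencode turns a dict into the query-string form appended after '?'
print(urlencode(parameters))  # lat=40.71&lon=-74
```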
Now we add the type_I_migration effect, and set the appropriate disk parameters. Note that we chose code units of AU for all the distances above. We require:

- The disk scale height in code units (here AU), 1 code unit from the central star ($h_1$)
- The disk surface density 1 code unit from the central star ($\Sigma_1$)
- The disk surface density exponent ($\alpha$), assuming a power law $\Sigma(r) = \Sigma_1 r^{-\alpha}$, where $r$ is the radial distance from the star in code units
- The disk flaring index ($\beta$), assuming a power-law scale height $h(r) = h_1 r^\beta$
rebx = reboundx.Extras(sim)
mig = rebx.load_force("type_I_migration")
rebx.add_force(mig)

mig.params["tIm_scale_height_1"] = 0.03
mig.params["tIm_surface_density_1"] = ((1000*u.g/u.cm**2).to(u.Msun/u.AU**2)).value  # transformed from g/cm^2 to code units
mig.params["tIm_surface_density_exponent"] = 1
mig.params["tIm_flaring_index"] = 0.25
ipython_examples/TypeIMigration.ipynb
dtamayo/reboundx
gpl-3.0
We can also add an inner disk edge (ide) to halt migration. This is an artificial prescription for halting the planet at ide_position (in code units, here AU). We also have to set the 'width' of the inner disk edge in code units. This is the width of the region in which the migration torque flips sign, so the planet will stop within this distance scale of the inner disk edge's location. Here we set the width to the scale height of the disk at the inner disk edge:
mig.params["ide_position"] = 0.1
mig.params["ide_width"] = mig.params["tIm_scale_height_1"]*mig.params["ide_position"]**mig.params["tIm_flaring_index"]

print('Planet will stop within {0:.3f} AU of the inner disk edge at {1} AU'.format(
    mig.params["ide_width"], mig.params["ide_position"]))
ipython_examples/TypeIMigration.ipynb
dtamayo/reboundx
gpl-3.0
We set the timestep to 5% of the dynamical timescale $a^{3/2}$ (in code units with $G = 1$) at the inner disk edge, well under the orbital period there, to make sure we always resolve the orbit.
sim.integrator = 'whfast'
sim.dt = mig.params["ide_position"]**(3/2)/20
ipython_examples/TypeIMigration.ipynb
dtamayo/reboundx
gpl-3.0
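As a sanity check (assuming rebound's default code units with $G = 1$ and a solar-mass star, so $P = 2\pi a^{3/2}$), the chosen timestep is a small fraction of the orbital period at the inner disk edge:

```python
import math

a_edge = 0.1                        # inner disk edge position in AU (code units)
P_edge = 2 * math.pi * a_edge**1.5  # orbital period there, assuming G = M_star = 1
dt_code = a_edge**1.5 / 20          # the timestep chosen above
print(dt_code / P_edge)             # ~0.008, i.e. well under the orbital period
```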
We now integrate the system
times = np.linspace(0, 4e3, 1000)
a_integration = np.zeros(1000)

for i, t in enumerate(times):
    sim.integrate(t)
    a_integration[i] = ps[1].a
ipython_examples/TypeIMigration.ipynb
dtamayo/reboundx
gpl-3.0
and compare to the analytical predictions
h0 = mig.params["tIm_scale_height_1"]
sd0 = mig.params["tIm_surface_density_1"]
alpha = mig.params["tIm_surface_density_exponent"]

# Combining Eqs 3.6 and 3.3 of Pichierri et al. 2018
tau_tilde = h0**2 / ((2.7 + 1.1*alpha)*ps[1].m*sd0*np.sqrt(sim.G))
ipython_examples/TypeIMigration.ipynb
dtamayo/reboundx
gpl-3.0
The analytical solution is obtained by solving the ODE for a circular orbit. With the chosen surface density profile and flaring index, the migration rate is constant: $$\dot{a} = -\frac{1}{\tilde{\tau}}$$ so that $$a(t) = a_0 - \frac{t}{\tilde{\tau}}$$ which equals $a_0\left(1-\frac{t}{\tilde{\tau}}\right)$ for $a_0 = 1$ code unit, as used below.
a_analytical = a0*np.maximum(1 - (times/tau_tilde), mig.params["ide_position"])

plt.plot(times*0.001, a_integration, label='Numerical evolution', c='green', linewidth=4, alpha=0.6)
plt.plot(times*0.001, a_analytical, label='Analytical prediction', c='brown', linestyle='dashed', linewidth=1)
plt.xlim(np.min(times)*0.001, np.max(times)*0.001)
plt.xlabel('time [kyr]')
plt.ylabel('Semi-major axis [AU]')
plt.legend()
plt.ylim(0, 1)
ipython_examples/TypeIMigration.ipynb
dtamayo/reboundx
gpl-3.0
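A quick forward-Euler integration of $\dot a = -1/\tilde\tau$ reproduces the closed form, including the halt at the disk edge (the numbers below are illustrative, not the simulation's):

```python
import numpy as np

def a_closed_form(t, a0, tau, a_edge):
    # a(t) = a0 (1 - t/tau) for a0 = 1 code unit, halted at the inner disk edge
    return np.maximum(a0 * (1.0 - t / tau), a_edge)

a0, tau, a_edge = 1.0, 4.0, 0.1  # illustrative values
dt, a, t = 1e-4, a0, 0.0
while t < 2.0:
    a = max(a - dt / tau, a_edge)  # forward Euler for da/dt = -1/tau
    t += dt
print(a, a_closed_form(2.0, a0, tau, a_edge))  # both ~0.5
```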
Generate data

We generate a very simple dataset: three almost linearly separable Gaussian blobs in 2D.
n_samples = 10000
n_classes = 3
n_features = 2

# centers - number of classes
# n_features - dimension of the data
X, y_int = make_blobs(n_samples=n_samples, centers=n_classes, n_features=n_features,
                      cluster_std=0.5, random_state=0)

# No need to convert the features and targets to the 32-bit format as in plain theano.
# Labels need to be one-hot encoded (binary vector of size N for N classes).
y = np_utils.to_categorical(y_int, n_classes)

# visualize the data for better understanding
def plot_2d_blobs(dataset):
    X, y = dataset
    axis('equal')
    scatter(X[:, 0], X[:, 1], c=y, alpha=0.1, edgecolors='none')

plot_2d_blobs((X, y_int))
snippets/keras/keras_hello_world.ipynb
bzamecnik/ml-playground
mit
Split the data into training and test set

No validation set since we won't tune any hyperparameters today.
# split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
snippets/keras/keras_hello_world.ipynb
bzamecnik/ml-playground
mit
Create the model

Plain neural network with a single input, hidden, and output layer. The input size matches the number of our features; the output size matches the number of classes (due to one-hot encoding). We chose 3 neurons in the hidden layer.
# the model is just a sequence of transformations - layer weights, activations, etc.
model = Sequential()

# weights from input to hidden layer - linear transform
model.add(Dense(3, input_dim=n_features))
# basic non-linearity
model.add(Activation("tanh"))
# weights from hidden to output layer
model.add(Dense(n_classes))
# nonlinearity suitable for a classifier
model.add(Activation("softmax"))

# - loss function suitable for multi-class classification
# - plain stochastic gradient descent with mini-batches
model.compile(loss='categorical_crossentropy', optimizer='sgd')
snippets/keras/keras_hello_world.ipynb
bzamecnik/ml-playground
mit
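To see why softmax suits a classifier, a plain-NumPy sketch shows that it maps arbitrary scores to a probability distribution (positive entries summing to one):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

p = softmax(np.array([2.0, 1.0, 0.1]))
print(p, p.sum())
```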
Train the model

We train the model for 5 epochs with mini-batches of size 32 using plain SGD. The progress is nicely printed on the console. A nice thing over theanets is that the progress bar is overwritten rather than appended row by row, which saves visual space and avoids clutter.
model.fit(X_train, y_train, nb_epoch=5, batch_size=32);
snippets/keras/keras_hello_world.ipynb
bzamecnik/ml-playground
mit
Evaluate the model

Since we have a multi-class classification problem, the basic metric is accuracy. The Keras model can compute it for us; otherwise we can reach for sklearn. The progress is also printed while the model makes predictions.
def evaluate_accuracy(X, y, label):
    _, accuracy = model.evaluate(X, y, show_accuracy=True)
    print(label, 'accuracy:', 100 * accuracy, '%')

evaluate_accuracy(X_train, y_train, 'training')
evaluate_accuracy(X_test, y_test, 'test')

y_test_pred = model.predict_classes(X_test)
plot_2d_blobs((X_test, y_test_pred))
snippets/keras/keras_hello_world.ipynb
bzamecnik/ml-playground
mit
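The accuracy that model.evaluate reports boils down to the fraction of matching labels. A plain-NumPy sketch (`accuracy` is a hypothetical helper mirroring sklearn's accuracy_score):

```python
import numpy as np

def accuracy(y_true, y_pred):
    # Fraction of predictions that match the true labels
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

print(accuracy([0, 1, 2, 1], [0, 1, 1, 1]))  # 0.75
```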
Expected Output :

<table> <tr> <td> **result** </td> <td> [[-2.15657382] [ 2.95891446] [-1.08926781] [-0.84538042]] </td> </tr> </table>

1.2 - Computing the sigmoid

Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like tf.sigmoid and tf.softmax. For this exercise let's compute the sigmoid function of an input.

You will do this exercise using a placeholder variable x. When running the session, you should use the feed dictionary to pass in the input z. In this exercise, you will have to (i) create a placeholder x, (ii) define the operations needed to compute the sigmoid using tf.sigmoid, and then (iii) run the session.

Exercise: Implement the sigmoid function below. You should use the following:

tf.placeholder(tf.float32, name = "...")
tf.sigmoid(...)
sess.run(..., feed_dict = {x: z})

Note that there are two typical ways to create and use sessions in tensorflow:

Method 1:
```python
sess = tf.Session()
# Run the variables initialization (if needed), run the operations
result = sess.run(..., feed_dict = {...})
sess.close() # Close the session
```

Method 2:
```python
with tf.Session() as sess:
    # run the variables initialization (if needed), run the operations
    result = sess.run(..., feed_dict = {...})
    # This takes care of closing the session for you :)
```
# GRADED FUNCTION: sigmoid

def sigmoid(z):
    """
    Computes the sigmoid of z

    Arguments:
    z -- input value, scalar or vector

    Returns:
    results -- the sigmoid of z
    """

    ### START CODE HERE ### (approx. 4 lines of code)
    # Create a placeholder for x. Name it 'x'.
    x = tf.placeholder(tf.float32, name="x")

    # compute sigmoid(x)
    sigmoid = tf.sigmoid(x)

    # Create a session, and run it. Please use the method 2 explained above.
    # You should use a feed_dict to pass z's value to x.
    with tf.Session() as sess:
        # Run session and call the output "result"
        result = sess.run(sigmoid, feed_dict={x: z})

    ### END CODE HERE ###

    return result

print("sigmoid(0) = " + str(sigmoid(0)))
print("sigmoid(12) = " + str(sigmoid(12)))
Improving Deep Neural networks- Hyperparameter Tuning - Regularization and Optimization/Tensorflow Tutorial.ipynb
anukarsh1/deep-learning-coursera
mit
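The same values can be checked without a TensorFlow session: a plain-Python sigmoid computes the function tf.sigmoid applies elementwise:

```python
import math

def sigmoid_np(z):
    # The logistic sigmoid: 1 / (1 + e^(-z))
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid_np(0), sigmoid_np(12))
```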
Expected Output :

<table> <tr> <td> **cost** </td> <td> [ 1.00538719 1.03664088 0.41385433 0.39956614] </td> </tr> </table>

1.4 - Using One Hot encodings

Many times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have the following y vector which you will need to convert as follows:

<img src="images/onehot.png" style="width:600px;height:150px;">

This is called a "one hot" encoding, because in the converted representation exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code:

tf.one_hot(labels, depth, axis)

Exercise: Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use tf.one_hot() to do this.
# GRADED FUNCTION: one_hot_matrix

def one_hot_matrix(labels, C):
    """
    Creates a matrix where the i-th row corresponds to the ith class number and the jth column
    corresponds to the jth training example. So if example j has label i, then entry (i,j) will be 1.

    Arguments:
    labels -- vector containing the labels
    C -- number of classes, the depth of the one hot dimension

    Returns:
    one_hot -- one hot matrix
    """

    ### START CODE HERE ###

    # Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)
    C = tf.constant(C, name='C')

    # Use tf.one_hot, be careful with the axis (approx. 1 line)
    one_hot_matrix = tf.one_hot(indices=labels, depth=C, axis=0)

    # Create the session (approx. 1 line)
    sess = tf.Session()

    # Run the session (approx. 1 line)
    one_hot = sess.run(one_hot_matrix)

    # Close the session (approx. 1 line). See method 1 above.
    sess.close()

    ### END CODE HERE ###

    return one_hot

labels = np.array([1, 2, 3, 0, 2, 1])
one_hot = one_hot_matrix(labels, C=4)
print("one_hot = " + str(one_hot))
Improving Deep Neural networks- Hyperparameter Tuning - Regularization and Optimization/Tensorflow Tutorial.ipynb
anukarsh1/deep-learning-coursera
mit
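The same conversion can be written in a few lines of NumPy, which also makes the axis=0 layout explicit: classes along the rows, examples along the columns (`one_hot_numpy` is a hypothetical helper, not part of the assignment):

```python
import numpy as np

def one_hot_numpy(labels, C):
    # One-hot matrix with classes along axis 0, matching tf.one_hot(..., axis=0)
    labels = np.asarray(labels)
    return (np.arange(C)[:, None] == labels[None, :]).astype(np.float32)

print(one_hot_numpy([1, 2, 3, 0, 2, 1], 4))
```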
Text classification for SMS spam detection
import os

with open(os.path.join("datasets", "smsspam", "SMSSpamCollection")) as f:
    lines = [line.strip().split("\t") for line in f.readlines()]

text = [x[1] for x in lines]
y = [x[0] == "ham" for x in lines]

text[:10]
y[:10]
type(text)
type(y)

from sklearn.cross_validation import train_test_split
text_train, text_test, y_train, y_test = train_test_split(text, y, random_state=42)

from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
vectorizer.fit(text_train)

X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)

print(len(vectorizer.vocabulary_))
X_train.shape
print(vectorizer.get_feature_names()[:20])
print(vectorizer.get_feature_names()[3000:3020])
print(X_train.shape)
print(X_test.shape)
notebooks/03.5 Case Study - SMS Spam Detection.ipynb
mhdella/scipy_2015_sklearn_tutorial
cc0-1.0
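CountVectorizer's fit/transform pair can be mimicked in a few lines of pure Python, which makes the bag-of-words idea concrete (`bag_of_words` is a hypothetical helper; the real CountVectorizer additionally handles tokenization, punctuation, and sparse output):

```python
from collections import Counter

def bag_of_words(docs):
    # Build a sorted vocabulary, then count each word's occurrences per document
    vocab = sorted({w for d in docs for w in d.lower().split()})
    rows = []
    for d in docs:
        counts = Counter(d.lower().split())
        rows.append([counts.get(w, 0) for w in vocab])
    return vocab, rows

vocab, counts = bag_of_words(["free prize now", "call me now"])
print(vocab)
print(counts)
```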
Training a Classifier on Text Features

We can now train a classifier, for instance a logistic regression classifier, which is a fast baseline for text classification tasks:
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression()
clf
clf.fit(X_train, y_train)
notebooks/03.5 Case Study - SMS Spam Detection.ipynb
mhdella/scipy_2015_sklearn_tutorial
cc0-1.0
Perform a simple Diffusion Pseudotime analysis on raw data, as in Haghverdi et al. (2016). No preprocessing, only logarithmize the raw counts.

Note: The following function is also available as sc.datasets.paul15().
adata = sc.datasets.paul15()
sc.pp.log1p(adata)  # logarithmize data
sc.pp.neighbors(adata, n_neighbors=20, use_rep='X', method='gauss')
sc.tl.diffmap(adata)
sc.tl.dpt(adata, n_branchings=1, n_dcs=10)
170502_paul15/paul15.ipynb
theislab/scanpy_usage
bsd-3-clause
Diffusion Pseudotime (DPT) analysis detects the branch of granulocyte/macrophage progenitors (GMP), and the branch of megakaryocyte/erythrocyte progenitors (MEP). There are two small further subgroups (segments 0 and 2).
sc.pl.diffmap(adata, color=['dpt_pseudotime', 'dpt_groups', 'paul15_clusters'])
170502_paul15/paul15.ipynb
theislab/scanpy_usage
bsd-3-clause
With this, we reproduced the analysis of Haghverdi et al. (2016, Suppl. Note 4 and Suppl. Figure N4).
adata.write(results_file)
170502_paul15/paul15.ipynb
theislab/scanpy_usage
bsd-3-clause