| markdown | code | path | repo_name | license |
|---|---|---|---|---|
Review
Let's remind ourselves what we've learned.
Exercise. We explore the Census's Business Dynamics Statistics, a huge collection of data about firms. We've extracted a small piece of one of their databases that includes these variables for 2013:
Size: size category of firms based on number of employees
Firms: num... | data = {'Size': ['a) 1 to 4', 'b) 5 to 9', 'c) 10 to 19', 'd) 20 to 49', 'e) 50 to 99',
'f) 100 to 249', 'g) 250 to 499', 'h) 500 to 999', 'i) 1000 to 2499',
'j) 2500 to 4999', 'k) 5000 to 9999', 'l) 10000+'],
'Firms': [2846416, 1020772, 598153, 373345, 115544, 63845,
... | Code/notebooks/bootcamp_pandas_adv1-clean.ipynb | NYUDataBootcamp/Materials | mit |
Introducing kNN | knn3scores = cross_val_score(knn3, XTrain, yTrain, cv = 5)
print(knn3scores)
print("Mean of scores KNN3:", knn3scores.mean())
knn99scores = cross_val_score(knn99, XTrain, yTrain, cv = 5)
print(knn99scores)
print("Mean of scores KNN99:", knn99scores.mean())
XTrain, XTest, yTrain, yTest = train_test_split(X, y, random_st... | model_optimization/messy_modelling.ipynb | nslatysheva/data_science_blogging | gpl-3.0 |
Next up, let's split the dataset into a training and test set. The training set will be used to develop and tune our predictive models. The test set will be left completely alone until the very end, at which point we'll run the finished models on it. Having a test set will allow us to get a good estimate of how well our... | from sklearn.model_selection import train_test_split
# Split into training and test sets
XTrain, XTest, yTrain, yTest = train_test_split(X, y, random_state=1) | model_optimization/messy_modelling.ipynb | nslatysheva/data_science_blogging | gpl-3.0 |
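As a rough illustration of what `train_test_split` does under the hood, here is a minimal pure-Python sketch (the function name and default fraction here are our own; scikit-learn's version additionally supports stratification, arrays, and multiple inputs):

```python
import random

def simple_train_test_split(X, y, test_fraction=0.25, seed=1):
    """Shuffle indices, then split: a sketch of what train_test_split does."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    n_test = int(len(X) * test_fraction)
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    XTrain = [X[i] for i in train_idx]
    XTest = [X[i] for i in test_idx]
    yTrain = [y[i] for i in train_idx]
    yTest = [y[i] for i in test_idx]
    return XTrain, XTest, yTrain, yTest

X = [[i] for i in range(8)]
y = [0, 1] * 4
XTrain, XTest, yTrain, yTest = simple_train_test_split(X, y)
print(len(XTrain), len(XTest))  # 6 2
```

Fixing the seed makes the split reproducible, which mirrors the role of `random_state` above.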
We are first going to try to predict spam emails with a random forest classifier. Chapter 8 of the Introduction to Statistical Learning book provides a truly excellent introduction to theory behind random forests. Briefly, random forests build a collection of classification trees, which each try to predict classes by r... | from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
rf = RandomForestClassifier()
rf.fit(XTrain, yTrain)
rf_predictions = rf.predict(XTest)
print (metrics.classification_report(yTest, rf_predictions))
print ("Overall Accuracy:", round(metrics.accuracy_score(yTest, rf_predictions),2)) | model_optimization/messy_modelling.ipynb | nslatysheva/data_science_blogging | gpl-3.0 |
An overall accuracy of 0.95 is very good for a start, but keep in mind that this is a heavily idealized dataset. Next up, we are going to learn how to pick the best parameters for the random forest algorithm (as well as for an SVM and logistic regression classifier) in order to get better models with (hopefully!) impro... | n_estimators = np.array([5, 100])
max_features = np.array([10, 50]) | model_optimization/messy_modelling.ipynb | nslatysheva/data_science_blogging | gpl-3.0 |
We can manually write a small loop to test out how well the different combinations of these fare (later, we'll find out better ways to do this): | from itertools import product
# get grid of all possible combinations of hp values
hp_combinations = list(product(n_estimators, max_features))
for hp_combo in range(len(hp_combinations)):
print (hp_combinations[hp_combo])
# Train and output accuracies
rf = RandomForestClassifier(n_esti... | model_optimization/messy_modelling.ipynb | nslatysheva/data_science_blogging | gpl-3.0 |
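The grid of combinations itself needs only the standard library; here is a self-contained sketch of the loop structure, with the model fitting omitted:

```python
from itertools import product

n_estimators = [5, 100]
max_features = [10, 50]

# Cartesian product yields every hyperparameter combination
hp_combinations = list(product(n_estimators, max_features))
print(hp_combinations)  # [(5, 10), (5, 50), (100, 10), (100, 50)]

for n_est, max_feat in hp_combinations:
    # each pair would parameterize one RandomForestClassifier fit
    print(n_est, max_feat)
```

Tuple unpacking in the loop avoids indexing into the list by position.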
Pandas Series and DataFrame objects
There are two main data structures in pandas:
- Series (1 dimensional data)
- DataFrames (2 dimensional data)
- There are other data structures for higher-dimensional data, but they are less frequently used:
- Panel (3 dimensional data) - panel will be removed from... | series1 = pd.Series([1,2,3,4])
print(series1) | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
DataFrames use the IPython display method to look pretty, but also show just fine when printed. (There's a way to make all dataframes print pretty via the IPython.display.display method, but this isn't necessary to view the values): | df1 = pd.DataFrame([[1,2,3,4],[10,20,30,40]])
print(df1)
df1 | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Indices can be named: | # Rename the columns
df1.columns = ['A','B','C','D']
df1.index = ['zero','one']
df1
# Create the dataframe with the columns
df1 = pd.DataFrame([[1,2,3,4],[10,20,30,40]], columns=['A','B','C',"D"], index=['zero','one'])
df1 | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
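Besides reassigning `df.columns` wholesale, individual labels can be changed with `rename`; a small sketch (`rename` returns a new object, leaving the original untouched unless `inplace=True`):

```python
import pandas as pd

df1 = pd.DataFrame([[1, 2], [10, 20]], columns=['A', 'B'], index=['zero', 'one'])

# rename a subset of labels via a mapping; unlisted labels are kept as-is
df2 = df1.rename(columns={'A': 'alpha'}, index={'zero': 0})
print(list(df2.columns))  # ['alpha', 'B']
print(list(df2.index))    # [0, 'one']
```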
Data Input Output | df1 = pd.DataFrame(np.random.randn(5,4), columns = ['A','B','C','D'], index=['zero','one','two','three','four'])
print(df1) | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
CSV Files | df1.to_csv('datafiles/pandas_df1.csv')
!ls datafiles
df2 = pd.read_csv('datafiles/pandas_df1.csv', index_col=0)
print(df2) | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
HDF5 files | df1.to_hdf('datafiles/pandas_df1.h5', 'df')
!ls datafiles
df2 = pd.read_hdf('datafiles/pandas_df1.h5', 'df')
print(df2) | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Data types
Show the datatypes of each column: | df2.dtypes | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
We can create dataframes of multiple datatypes: | col1 = range(6)
col2 = np.random.rand(6)
col3 = ['zero','one','two','three','four','five']
col4 = ['blue', 'cow','blue', 'cow','blue', 'cow']
df_types = pd.DataFrame( {'integers': col1, 'floats': col2, 'words': col3, 'cow color': col4} )
print(df_types)
df_types.dtypes | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
We can also set the 'cow color' column to a category: | df_types['cow color'] = df_types['cow color'].astype("category")
df_types.dtypes | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Indexing and Setting Data
Pandas supports a lot of different operations; here are the meat and potatoes. The following describes indexing data, but setting data is as simple as a reassignment. | time_stamps = pd.date_range(start='2000-01-01', end='2000-01-20', freq='D') # Define index of time stamps
df1 = pd.DataFrame(np.random.randn(20,4), columns = ['A','B','C','D'], index=time_stamps)
print(df1) | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Head and Tail
Print the beginning and ending entries of a pandas data structure | df1.head(3) # Show the first n rows, default is 5
df1.tail() # Show the last n rows | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
We can also separate the metadata (labels, etc) from the data, yielding a numpy-like output. | df1.columns
df1.values | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Indexing Data
Pandas provides the means to index data via named columns, or as numpy like indices. Indexing is [row, column], just as it was in numpy.
Data is visible via column: | df1['A'].head() # df1.A.head() is equivalent | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Note that tab completion is enabled for column names: | df1.A | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
<div>
<img style="float: left;" src="images/10-01_column-tab.png" width=30%>
</div>
We can specify row ranges: | df1[:2] | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Label based indexing (.loc)
Slice based on the labels. | df1.loc[:'2000-01-5',"A"] # Note that this includes the upper index | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Integer based indexing (.iloc)
Slice based on the index number. | df1.iloc[:3,0] # Note that this does not include the upper index like numpy | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Fast single element label indexing (.at) - fast .loc
Intended for fast access to a single element. | index_timestamp = pd.Timestamp('2000-01-03') # Create a timestamp object to index
df1.at[index_timestamp,"A"] # Index using timestamp (vs string) | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Fast single element label indexing (.iat) - fast .iloc
Intended for fast access to a single element. | df1.iat[3,0]
Logical indexing
A condition is used to select the values within a slice or the entire Pandas object. Using a conditional statement, a true/false DataFrame is produced: | df1.head()>0.5 | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
That matrix can then be used to index the DataFrame: | df1[df1>0.5].head() # Note that the values that were 'False' are 'NaN' | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Logical indexing via isin
It's also possible to filter rows by whether a column's values appear in a given set: | df_types
bool_series = df_types['cow color'].isin(['blue'])
print(bool_series) # Show the logical indexing
df_types[bool_series] # Index where the values are true | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Sorting by column | df_types.sort_values(by="floats") | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Dealing with Missing Data
By convention, pandas uses the NaN value to represent missing data. There are a few functions surrounding the handling of NaN values: | df_nan = pd.DataFrame(np.random.rand(6,2), columns = ['A','B'])
df_nan
df_nan['B'] = df_nan['B'].where(df_nan['B'] > 0.5) # NaN where ['B'] <= 0.5
print(df_nan) | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Print a logical DataFrame where NaN is located: | df_nan.isnull() | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Drop all rows with NaN: | df_nan.dropna(how = 'any') | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Replace NaN entries: | df_nan.fillna(value = -1) | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
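The three missing-data operations compose naturally on a small Series; none of them modify the original unless `inplace=True` is passed:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0, np.nan])

print(s.isnull().sum())          # 2 missing values
print(list(s.dropna()))          # [1.0, 3.0]
print(list(s.fillna(value=-1)))  # [1.0, -1.0, 3.0, -1.0]
```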
Concatenating and Merging Data
Bringing together DataFrames or Series objects:
Concatenate | df1 = pd.DataFrame(np.zeros([3,3], dtype=int))
df1
df2 = pd.concat([df1, df1], axis=0)
df2 = df2.reset_index(drop=True) # Renumber indexing
df2 | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Append
Adding an additional group after the first group: | newdf = pd.DataFrame({0: [1], 1:[1], 2:[1]})
print(newdf)
df3 = pd.concat([df2, newdf], ignore_index=True)
df3 | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
SQL-like merging
Pandas can do structured query language (SQL) like merges of data: | left = pd.DataFrame({'numbers': ['K0', 'K1', 'K2', 'K3'],
'English': ['one', 'two', 'three', 'four'],
                     'Spanish': ['uno', 'dos', 'tres', 'cuatro'],
'German': ['erste', 'zweite','dritte','vierte']})
left
right = pd.DataFrame({'numbers': ['K0', 'K1', 'K2', 'K3'],
'French': ['un', 'deux', 'trois', 'quatre'... | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
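A minimal sketch of the join on the shared key column (`pd.merge` defaults to an inner join; `how='left'`, `'right'`, or `'outer'` select the other SQL join types):

```python
import pandas as pd

left = pd.DataFrame({'numbers': ['K0', 'K1'], 'English': ['one', 'two']})
right = pd.DataFrame({'numbers': ['K0', 'K1'], 'French': ['un', 'deux']})

# an inner join on the shared 'numbers' key, like SQL's JOIN ... ON
merged = pd.merge(left, right, on='numbers')
print(list(merged.columns))     # ['numbers', 'English', 'French']
print(merged.loc[0, 'French'])  # un
```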
Grouping Operations
Often, there is a need to summarize the data or change the output of the data to make it easier to work with, especially for categorical data types. | dfg = pd.DataFrame({'A': ['clogs','sandals','jellies']*2,
'B': ['socks','footies']*3,
'C': [1,1,1,3,2,2],
'D': np.random.rand(6)})
dfg | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Pivot Table
Without changing the data in any way, summarize the output in a different format. Specify the indices, columns, and values: | dfg.pivot_table(index=['A','B'], columns=['C'], values='D')
Stacking
Column labels can be brought into the rows. | dfg.stack() | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Groupby
Groupby groups values, creating a Python object to which functions can be applied: | dfg.groupby(['B']).count()
dfg.groupby(['A']).mean() | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
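The split-apply-combine pattern is easiest to see on a tiny frame:

```python
import pandas as pd

dfg = pd.DataFrame({'A': ['clogs', 'sandals', 'clogs', 'sandals'],
                    'C': [1, 3, 5, 7]})

# split rows by the value of A, then apply mean() to each group's C column
means = dfg.groupby('A')['C'].mean()
print(means['clogs'])    # 3.0
print(means['sandals'])  # 5.0
```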
Operations on Pandas Data Objects
Whether it's the entire DataFrame or a single Series within it, there are a variety of methods that can be applied. Here's a list of a few helpful ones:
Simple statistics (mean, stdev, etc). | dfg['D'].mean() | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Shifting
Note that shifted-out values are dropped and the vacated positions are filled with NaN: | dfg['D']
dfg_Ds = dfg['D'].shift(2)
dfg_Ds | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Add, subtract, multiply, divide:
Operations are element-wise: | dfg['D'].div(dfg_Ds )
| 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Histogram | dfg
dfg['C'].value_counts() | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Describe
Excluding NaN values, print some descriptive statistics about the collection of values. | df_types.describe() | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Transpose
Exchange the rows and columns (flip about the diagonal): | df_types.T | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Applying Any Function to Pandas Data Objects
Pandas objects have methods that allow arbitrary functions to be applied with greater control, namely the .apply method: | def f(x): # Define function
return x + 1
dfg['C'].apply(f) | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Lambda functions may also be used | dfg['C'].apply(lambda x: x + 1) | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
String functions:
Pandas has access to string methods: | dfg['A'].str.title() # Make the first letter uppercase | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Plotting
Pandas exposes the matplotlib library for use. | n = 100
X = np.linspace(0, 5, n)
Y1,Y2 = np.log((X)**2+2), np.sin(X)+2
dfp = pd.DataFrame({'X' : X, 'Y1': Y1, 'Y2': Y2})
dfp.head()
dfp.plot(x = 'X')
plt.show() | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Matplotlib styles are available too: | style_name = 'classic'
plt.style.use(style_name)
dfp.plot(x = 'X')
plt.title('Log($x^2$) and Sine', fontsize=16)
plt.xlabel('X Label', fontsize=16)
plt.ylabel('Y Label', fontsize=16)
plt.show()
mpl.rcdefaults() # Reset matplotlib rc defaults | 10 - Pandas Crash Course.ipynb | blakeflei/IntroScientificPythonWithJupyter | bsd-3-clause |
Data Simplification
Load the data | # Path for Linux
path = '../Recursos/indian_liver_patient.csv'
# Path for Windows
#path = '..\Recursos\indian_liver_patient.csv'
dataset = pd.read_csv(path,delimiter=',',header=0)
# Drop missing values
dataset=dataset.dropna()
# Convert values to binary
dataset["Gender"] = pd.Categorical.from_... | LAB2/src/Practica2.ipynb | asharel/ml | gpl-3.0 |
Mutual information | mi_regr = FS.mutual_info_regression(train_X, train_Y)
print(mi_regr)
indice_regr=np.argsort(mi_regr)[::-1]
print(indice_regr)
| LAB2/src/Practica2.ipynb | asharel/ml | gpl-3.0 |
Because the mutual information values vary considerably between runs, owing to the low scores obtained, we decided to repeat the calculation 100 times and work with the mean, which yields broadly similar results from run to run, with only small changes in ordering.
Some examples ... | mi_regr = np.zeros(10)
for i in range(100):
    mi_regr = mi_regr + FS.mutual_info_regression(train_X, train_Y)
mi_regr = mi_regr/100
print(mi_regr)
names=train_X.axes[1]
print (names)
indice_regr=np.argsort(mi_regr)[::-1]
print(indice_regr)
#print(names)
names[indice_regr] | LAB2/src/Practica2.ipynb | asharel/ml | gpl-3.0 |
When we take the mean, the values do not vary much from one run to another:<br>
run 1: 3 5 0 6 2 4 9 1 8 7<br>
run 2: 3 0 5 6 2 4 9 8 1 7<br>
run 3: 3 0 5 6 2 4 9 8 1 7<br>
run 4: 3 5 6 0 2 4 9 8 1 7<br>
run 5: 5 6 3 0 2 4 9 8 1 7<br>
From here on we will use the values from runs 2 and 3, which are the ... | indice_regr = [ 3, 0, 5, 6, 2, 4, 9, 8, 1, 7]
regr_var=names[indice_regr]
regr_var
plt.figure(figsize=(8,6))
plt.subplot(121)
plt.scatter(dataset[dataset.Dataset==0].Direct_Bilirubin,dataset[dataset.Dataset==0].Age, color='red')
plt.scatter(dataset[dataset.Dataset==1].Direct_Bilirubin,dataset[dataset.Dataset==1].Age,... | LAB2/src/Practica2.ipynb | asharel/ml | gpl-3.0 |
As the plots show, although the first two variables (Direct_Bilirubin and Age) do not separate the classes well, which is to be expected given the low scores obtained, they are still better separated than with two of the last three variables (Total_Protiens and Albumin) | def specificity(y_true, y_pred):
tn, fp, fn, tp = metrics.confusion_matrix(y_true, y_pred).ravel()
return tn/(tn+fp)
#With all 10 variables
modelo_lr = LogisticRegression()
modelo_lr.fit(X=train_X,y=train_Y)
predicion = modelo_lr.predict(test_X)
print('Logistic regression with all variables\n')
pr... | LAB2/src/Practica2.ipynb | asharel/ml | gpl-3.0 |
Chi2 | #Chi^2 cannot be used with negative values, so the normalized data cannot be used here
chi = FS.chi2(X = train_X, y = train_Y)[0]
print(chi)
indice_chi=np.argsort(chi)[::-1]
print(indice_chi)
print(names[indice_chi])
chi_var=names[indice_chi]
plt.figure()
plt.scatter(dataset[dataset.Dataset==0].Aspartate_... | LAB2/src/Practica2.ipynb | asharel/ml | gpl-3.0 |
PCA
As we saw in theory, PCA is a method whose results depend on the scale applied, so we will perform the analysis on normalized data. For this we use the preprocessing module of the sklearn library. | from sklearn.decomposition import PCA
from sklearn import preprocessing
X_scaled = preprocessing.scale(train_X)
pca = PCA()
pca.fit(X_scaled)
# Plot the PCA results
plt.plot(pca.explained_variance_)
plt.ylabel("eigenvalues")
plt.xlabel("position")
plt.show()
print ("Eigenvalues\n",pca.explained_var... | LAB2/src/Practica2.ipynb | asharel/ml | gpl-3.0 |
From the tests above, we can see that we reach the best accuracy by removing the variables Albumin_and_Globulin_Ratio, Albumin, Total_Protiens, Aspartate_Aminotransferase, Alamine_Aminotransferase, Alkaline_Phosphotase. So far this is the best result we have obtained among all the m... | from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA()
lda.fit(train_X,train_Y)
print("Explained percentage:", lda.explained_variance_ratio_)
ldaRes = lda.explained_variance_ratio_
indice_lda=np.argsort(ldaRes)[::-1]
print(indice_lda)
print(names[indice_lda])
pca_var=names[indice_lda... | LAB2/src/Practica2.ipynb | asharel/ml | gpl-3.0 |
Task #3
Given the variables:
stock_index = "SP500"
price = 300
Use .format() to print the following string:
The SP500 is at 300 today. | stock_index = "SP500"
price = 300
print("The {} is at {} today.".format(stock_index,price)) | ml-for-finance/python-for-financial-analysis-and-algorithmic-trading/01-Python-Crash-Course/Python Crash Course Exercises - Solutions.ipynb | Almaz-KG/MachineLearning | apache-2.0 |
Task #4
Given strings with this form where the last source value is always separated by two dashes --
"PRICE:345.324:SOURCE--QUANDL"
Create a function called source_finder() that returns the source. For example, the above string passed into the function would return "QUANDL" | def source_finder(s):
return s.split('--')[-1]
source_finder("PRICE:345.324:SOURCE--QUANDL") | ml-for-finance/python-for-financial-analysis-and-algorithmic-trading/01-Python-Crash-Course/Python Crash Course Exercises - Solutions.ipynb | Almaz-KG/MachineLearning | apache-2.0 |
Task #5
Create a function called price_finder that returns True if the word 'price' is in a string. Your function should work even if 'Price' is capitalized or next to punctuation ('price!') | def price_finder(s):
return 'price' in s.lower()
price_finder("What is the price?")
price_finder("DUDE, WHAT IS PRICE!!!")
price_finder("The price is 300") | ml-for-finance/python-for-financial-analysis-and-algorithmic-trading/01-Python-Crash-Course/Python Crash Course Exercises - Solutions.ipynb | Almaz-KG/MachineLearning | apache-2.0 |
Task #6
Create a function called count_price() that counts the number of times the word "price" occurs in a string. Account for capitalization and if the word price is next to punctuation. | def count_price(s):
count = 0
for word in s.lower().split():
# Need to use in, can't use == or will get error with punctuation
if 'price' in word:
count += 1
# Note the indentation!
return count
# Simpler Alternative
def count_price(s):
return s.lower().coun... | ml-for-finance/python-for-financial-analysis-and-algorithmic-trading/01-Python-Crash-Course/Python Crash Course Exercises - Solutions.ipynb | Almaz-KG/MachineLearning | apache-2.0 |
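An equivalent formulation uses the standard-library `re` module (the function name here is ours; like `str.count`, it matches 'price' anywhere, including inside words such as 'prices'):

```python
import re

def count_price_re(s):
    # findall returns every non-overlapping match of 'price',
    # case-insensitively via lower(), mirroring s.lower().count('price')
    return len(re.findall(r'price', s.lower()))

print(count_price_re("Price! The price of prices."))  # 3
```

A word-boundary pattern like `r'\bprice\b'` would instead exclude 'prices'.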
Task #7
Create a function called avg_price that takes in a list of stock price numbers and calculates the average (Sum of the numbers divided by the number of elements in the list). It should return a float. | def avg_price(stocks):
return sum(stocks)/len(stocks) # Python 2 users should multiply numerator by 1.0
avg_price([3,4,5]) | ml-for-finance/python-for-financial-analysis-and-algorithmic-trading/01-Python-Crash-Course/Python Crash Course Exercises - Solutions.ipynb | Almaz-KG/MachineLearning | apache-2.0 |
Single-Machine Simulations
This is currently turned on by default, as follows: | evaluate()
1. Load and examine the FITS file
Here we begin with a 2-dimensional data that were stored in FITS format from some simulations. We have Stokes I, Q, and U maps. We we'll first load a FITS file and examine the header. | file_i = download_file(
'http://data.astropy.org/tutorials/synthetic-images/synchrotron_i_lobe_0700_150MHz_sm.fits',
cache=True)
hdulist = fits.open(file_i)
hdulist.info()
hdu = hdulist['NN_EMISSIVITY_I_LOBE_150.0MHZ']
hdu.header | notebooks/synthetic-images/synthetic-images.ipynb | adrn/tutorials | cc0-1.0 |
We can see this FITS file, which was created in yt, has x and y coordinate in physical units (cm). We want to convert it into sky coordinates. Before we proceed, let's find out the range of the data and plot a histogram. | print(hdu.data.max())
print(hdu.data.min())
np.seterr(divide='ignore') #suppress the warnings raised by taking log10 of data with zeros
plt.hist(np.log10(hdu.data.flatten()), range=(-3, 2), bins=100); | notebooks/synthetic-images/synthetic-images.ipynb | adrn/tutorials | cc0-1.0 |
Once we know the range of the data, we can do a visualization with the proper range (vmin and vmax). | fig = plt.figure(figsize=(6,12))
fig.add_subplot(111)
# We plot it in log-scale and add a small number to avoid nan values.
plt.imshow(np.log10(hdu.data+1E-3), vmin=-1, vmax=1, origin='lower') | notebooks/synthetic-images/synthetic-images.ipynb | adrn/tutorials | cc0-1.0 |
2. Set up astrometry coordinates
From the header, we know that the x and y axes are in centimeter. However, in an observation we usually have RA and Dec. To convert physical units to sky coordinates, we will need to make some assumptions about where the object is located, i.e. the distance to the object and the central... | # distance to the object
dist_obj = 200*u.Mpc
# We have the RA in hh:mm:ss and DEC in dd:mm:ss format.
# We will use Skycoord to convert them into degrees later.
ra_obj = '19h59m28.3566s'
dec_obj = '+40d44m02.096s' | notebooks/synthetic-images/synthetic-images.ipynb | adrn/tutorials | cc0-1.0 |
Here we convert the pixel scale from cm to degree by dividing the distance to the object. | cdelt1 = ((hdu.header['CDELT1']*u.cm/dist_obj.to('cm'))*u.rad).to('deg')
cdelt2 = ((hdu.header['CDELT2']*u.cm/dist_obj.to('cm'))*u.rad).to('deg')
print(cdelt1, cdelt2) | notebooks/synthetic-images/synthetic-images.ipynb | adrn/tutorials | cc0-1.0 |
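The unit bookkeeping that `astropy.units` automates reduces to a small-angle division; here is a plain-`math` sketch with an assumed 1 kpc pixel at the 200 Mpc distance used above (the pixel size is illustrative, not taken from the FITS header):

```python
import math

# small-angle approximation: angular size [rad] = physical size / distance
pixel_cm = 3.0857e21       # assumed pixel size: 1 kpc in cm
dist_cm = 200 * 3.0857e24  # 200 Mpc in cm

cdelt_deg = math.degrees(pixel_cm / dist_cm)
print(cdelt_deg)  # ~2.9e-4 degrees, i.e. about 1 arcsecond
```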
Use astropy.wcs.WCS to prepare a FITS header. | w = WCS(naxis=2)
# reference pixel coordinate
w.wcs.crpix = [hdu.data.shape[0]/2,hdu.data.shape[1]/2]
# sizes of the pixel in degrees
w.wcs.cdelt = [-cdelt1.base, cdelt2.base]
# converting ra and dec into degrees
c = SkyCoord(ra_obj, dec_obj)
w.wcs.crval = [c.ra.deg, c.dec.deg]
# the units of the axes are in degree... | notebooks/synthetic-images/synthetic-images.ipynb | adrn/tutorials | cc0-1.0 |
Now we can convert the WCS coordinate into header and update the hdu. | wcs_header = w.to_header()
hdu.header.update(wcs_header) | notebooks/synthetic-images/synthetic-images.ipynb | adrn/tutorials | cc0-1.0 |
Let's take a look at the header. CDELT1, CDELT2, CUNIT1, CUNIT2, CRVAL1, and CRVAL2 are in sky coordinates now. | hdu.header
wcs = WCS(hdu.header)
fig = plt.figure(figsize=(6,12))
fig.add_subplot(111, projection=wcs)
plt.imshow(np.log10(hdu.data+1e-3), vmin=-1, vmax=1, origin='lower')
plt.xlabel('RA')
plt.ylabel('Dec') | notebooks/synthetic-images/synthetic-images.ipynb | adrn/tutorials | cc0-1.0 |
Now we have the sky coordinate for the image!
3. Prepare a Point Spread Function (PSF)
Simple PSFs are included in astropy.convolution.kernel. We'll use astropy.convolution.Gaussian2DKernel here.
First we need to set the telescope resolution. For a 2D Gaussian, we can calculate sigma in pixels by using our pixel scale ... | # assume our telescope has 1 arcsecond resolution
telescope_resolution = 1*u.arcsecond
# calculate the sigma in pixels.
# since cdelt is in degrees, we use _.to('deg')
sigma = telescope_resolution.to('deg')/cdelt2
# By default, the Gaussian kernel will go to 4 sigma
# in each direction
psf = Gaussian2DKernel(sigma)
... | notebooks/synthetic-images/synthetic-images.ipynb | adrn/tutorials | cc0-1.0 |
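What `Gaussian2DKernel` builds is, along each axis, a sampled and normalized Gaussian; a 1D pure-Python sketch of that idea (astropy's kernel additionally handles 2D extrusion, odd sizing, and discretization modes):

```python
import math

def gaussian_kernel_1d(sigma, n_sigma=4):
    """Discrete 1D Gaussian sampled to n_sigma each side, normalized to sum 1."""
    half = int(math.ceil(n_sigma * sigma))
    raw = [math.exp(-0.5 * (x / sigma) ** 2) for x in range(-half, half + 1)]
    total = sum(raw)
    return [v / total for v in raw]

k = gaussian_kernel_1d(sigma=2.0)
print(len(k))            # 17 taps for sigma=2 with 4-sigma support
print(round(sum(k), 6))  # 1.0
```

Normalizing the kernel to unit sum preserves total flux under convolution.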
3.a How to do this without astropy kernels
Maybe your PSF is more complicated. Here's an alternative way to do this, using a 2D Lorentzian | # set FWHM and psf grid
telescope_resolution = 1*u.arcsecond
gamma = telescope_resolution.to('deg')/cdelt2
x_grid = np.outer(np.linspace(-gamma*4,gamma*4,int(8*gamma)),np.ones(int(8*gamma)))
r_grid = np.sqrt(x_grid**2 + np.transpose(x_grid**2))
lorentzian = Lorentz1D(fwhm=2*gamma)
# extrude a 2D azimuthally symmetric ... | notebooks/synthetic-images/synthetic-images.ipynb | adrn/tutorials | cc0-1.0 |
4. Convolve image with PSF
Here we use astropy.convolution.convolve_fft to convolve the image. This routine uses the Fourier transform for faster calculation, which is especially quick since our data is $2^n$-sized. Using an FFT, however, causes boundary effects. We'll need to specify how we want to handle the... | convolved_image = convolve_fft(hdu.data, psf, boundary='wrap')
# Put a psf at the corner of the image
delta_x_psf=100 # number of pixels from the edges
xmin, xmax = -psf.shape[1]-delta_x_psf, -delta_x_psf
ymin, ymax = delta_x_psf, delta_x_psf+psf.shape[0]
convolved_image[xmin:xmax, ymin:ymax] = psf.array/psf.array.max... | notebooks/synthetic-images/synthetic-images.ipynb | adrn/tutorials | cc0-1.0 |
Now let's take a look at the convolved image. | wcs = WCS(hdu.header)
fig = plt.figure(figsize=(8,12))
i_plot = fig.add_subplot(111, projection=wcs)
plt.imshow(np.log10(convolved_image+1e-3), vmin=-1, vmax=1.0, origin='lower')#, cmap=plt.cm.viridis)
plt.xlabel('RA')
plt.ylabel('Dec')
plt.colorbar() | notebooks/synthetic-images/synthetic-images.ipynb | adrn/tutorials | cc0-1.0 |
5. Convolve Stokes Q and U images | hdulist.info()
file_q = download_file(
'http://data.astropy.org/tutorials/synthetic-images/synchrotron_q_lobe_0700_150MHz_sm.fits',
cache=True)
hdulist = fits.open(file_q)
hdu_q = hdulist['NN_EMISSIVITY_Q_LOBE_150.0MHZ']
file_u = download_file(
'http://data.astropy.org/tutorials/synthetic-images/synchrot... | notebooks/synthetic-images/synthetic-images.ipynb | adrn/tutorials | cc0-1.0 |
Let's plot the Q and U images. | wcs = WCS(hdu.header)
fig = plt.figure(figsize=(16,12))
fig.add_subplot(121, projection=wcs)
plt.imshow(convolved_image_q, cmap='seismic', vmin=-0.5, vmax=0.5, origin='lower')#, cmap=plt.cm.viridis)
plt.xlabel('RA')
plt.ylabel('Dec')
plt.colorbar()
fig.add_subplot(122, projection=wcs)
plt.imshow(convolved_image_u, cma... | notebooks/synthetic-images/synthetic-images.ipynb | adrn/tutorials | cc0-1.0 |
6. Calculate polarization angle and fraction for quiver plot
Note that rotating Stokes Q and U maps requires changing signs of both. Here we assume that the Stokes Q and U maps were calculated defining the y/declination axis as vertical, such that Q is positive for polarization vectors along the x/right-ascension axis. | # First, we plot the background image
fig = plt.figure(figsize=(8,16))
i_plot = fig.add_subplot(111, projection=wcs)
i_plot.imshow(np.log10(convolved_image+1e-3), vmin=-1, vmax=1, origin='lower')
# ranges of the axis
xx0, xx1 = i_plot.get_xlim()
yy0, yy1 = i_plot.get_ylim()
# binning factor
factor = [64, 66]
# re-bi... | notebooks/synthetic-images/synthetic-images.ipynb | adrn/tutorials | cc0-1.0 |
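The quiver quantities follow from the standard Stokes relations: the polarization fraction is $\sqrt{Q^2+U^2}/I$ and the position angle is $\tfrac{1}{2}\arctan(U/Q)$. A scalar sketch:

```python
import math

def polarization(i, q, u):
    """Linear polarization fraction and angle (radians) from Stokes I, Q, U."""
    frac = math.sqrt(q * q + u * u) / i
    angle = 0.5 * math.atan2(u, q)  # E-vector position angle
    return frac, angle

frac, angle = polarization(i=2.0, q=0.0, u=1.0)
print(frac)                 # 0.5
print(math.degrees(angle))  # ~45 degrees
```

Using `atan2` rather than `atan(u/q)` keeps the correct quadrant and avoids division by zero when Q = 0.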
Status Codes
200 -- everything went okay, and the result has been returned (if any)
301 -- the server is redirecting you to a different endpoint. This can happen when a company switches domain names, or an endpoint name is changed.
401 -- the server thinks you're not authenticated. This happens when you don't send the ... | response = requests.get("http://api.open-notify.org/iss-now.json")
response.status_code | python-intro/Untitled1.ipynb | caromedellin/Python-notes | mit |
Query Parameters
A 400 status code indicates a bad request, in this case it means that we need to add some parameters to the request. | # Set up the parameters we want to pass to the API.
# This is the latitude and longitude of New York City.
parameters = {"lat": 40.71, "lon": -74}
# Make a get request with the parameters.
response = requests.get("http://api.open-notify.org/iss-pass.json", params=parameters)
# Print the content of the response (the d... | python-intro/Untitled1.ipynb | caromedellin/Python-notes | mit |
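Under the hood, `requests` serializes the `params` dict into a URL query string; the standard library's `urlencode` shows the same encoding without making any network call:

```python
from urllib.parse import urlencode

parameters = {"lat": 40.71, "lon": -74}

# requests builds the query string from `params` in the same way
url = "http://api.open-notify.org/iss-pass.json?" + urlencode(parameters)
print(url)  # http://api.open-notify.org/iss-pass.json?lat=40.71&lon=-74
```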
Now we add the type_I_migration effect, and set the appropriate disk parameters. Note that we chose code units of AU for all the distances above. We require:
- The disk scale height in code units (here AU), 1 code unit from the central star ($h_1$)
- The disk surface density 1 code unit from the central star ($\Sigma_1$) ...
mig = rebx.load_force("type_I_migration")
rebx.add_force(mig)
mig.params["tIm_scale_height_1"] = 0.03
mig.params["tIm_surface_density_1"] = ((1000* u.g /u.cm**2).to(u.Msun/u.AU**2)).value #transformed from g/cm^2 to code units
mig.params["tIm_surface_density_exponent"] = 1
mig.p... | ipython_examples/TypeIMigration.ipynb | dtamayo/reboundx | gpl-3.0 |
We can also add an inner disk edge (ide) to halt migration. This is an artificial prescription for halting the planet at ide_position (in code units, here AU).
We also have to set the 'width' of the inner disk edge in code units. This is the width of the region in which the migration torque flips sign, so the planet w... | mig.params["ide_position"] = 0.1
mig.params["ide_width"] = mig.params["tIm_scale_height_1"]*mig.params["ide_position"]**mig.params["tIm_flaring_index"]
print('Planet will stop within {0:.3f} AU of the inner disk edge at {1} AU'.format(mig.params["ide_width"], mig.params["ide_position"])) | ipython_examples/TypeIMigration.ipynb | dtamayo/reboundx | gpl-3.0 |
We set the timestep to 5% of the orbital period at the inner disk edge to make sure we always resolve the orbit | sim.integrator = 'whfast'
sim.dt = mig.params["ide_position"]**(3/2)/20 | ipython_examples/TypeIMigration.ipynb | dtamayo/reboundx | gpl-3.0 |
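The 5% choice can be made explicit. This sketch assumes code units of AU, yr and Msun with G = 4π², so the orbital period at semimajor axis a around a one-solar-mass star is a^{3/2} years:

```python
import math

G = 4 * math.pi**2   # assumed code units: AU, yr, Msun
m_star = 1.0
a_edge = 0.1         # inner disk edge position from the cells above

period = 2 * math.pi * math.sqrt(a_edge**3 / (G * m_star))
dt = 0.05 * period
print(dt)            # equals a_edge**1.5 / 20, matching sim.dt above
```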
We now integrate the system | times = np.linspace(0, 4e3, 1000)
a_integration = np.zeros((1000))
for i, t in enumerate(times):
sim.integrate(t)
a_integration[i] = ps[1].a | ipython_examples/TypeIMigration.ipynb | dtamayo/reboundx | gpl-3.0 |
and compare to the analytical predictions | h0 = mig.params["tIm_scale_height_1"]
sd0 = mig.params["tIm_surface_density_1"]
alpha = mig.params["tIm_surface_density_exponent"] = 1
# Combining Eqs 3.6 and 3.3 of Pichierri et al. 2018
tau_tilde = h0**2 / ((2.7+1.1*alpha)*ps[1].m*sd0*(np.sqrt(sim.G))) | ipython_examples/TypeIMigration.ipynb | dtamayo/reboundx | gpl-3.0 |
The analytical solution is obtained by solving the ODE for a circular orbit. With the chosen surface profile and flaring index we have:
$$\dot{a} = -\frac{1}{\tilde{\tau}}$$
and
$$a(t) = a_0\left(1-\frac{t}{\tilde{\tau}}\right)$$ |
a_analytical = a0*np.maximum(1 - (times/tau_tilde), mig.params["ide_position"])
plt.plot(times*0.001, a_integration, label = 'Numerical evolution', c = 'green', linewidth = 4, alpha = 0.6)
plt.plot(times*0.001, a_analytical, label = 'Analytical prediction', c = 'brown', linestyle = "dashed", linewidth = 1)
plt.xlim(... | ipython_examples/TypeIMigration.ipynb | dtamayo/reboundx | gpl-3.0 |
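The clipped linear track can also be evaluated pointwise. This sketch uses illustrative values for a0, the migration timescale and the edge position, not the ones computed above:

```python
def a_analytic(t, a0=1.0, tau=4.0e3, a_edge=0.1):
    # a(t) = a0 * (1 - t/tau), clipped at the inner disk edge
    return max(a0 * (1 - t / tau), a_edge)

print(a_analytic(0.0))      # 1.0
print(a_analytic(2.0e3))    # 0.5
print(a_analytic(4.0e3))    # clipped at the edge: 0.1
```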
Generate data
We generate a very simple dataset: three almost linearly separable gaussian blobs in 2D. | n_samples = 10000
n_classes = 3
n_features = 2
# centers - number of classes
# n_features - dimension of the data
X, y_int = make_blobs(n_samples=n_samples, centers=n_classes, n_features=n_features, \
cluster_std=0.5, random_state=0)
# No need to convert the features and targets to the 32-bit format as in plain t... | snippets/keras/keras_hello_world.ipynb | bzamecnik/ml-playground | mit |
Split the data into training and test set
No validation set since we won't tune any hyperparameters today. | # split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train.shape, X_test.shape, y_train.shape, y_test.shape | snippets/keras/keras_hello_world.ipynb | bzamecnik/ml-playground | mit |
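With `test_size=0.2`, a quick sanity check of the expected split sizes (pure arithmetic, no sklearn needed):

```python
n_samples, test_size = 10000, 0.2
n_test = round(n_samples * test_size)
n_train = n_samples - n_test
print(n_train, n_test)   # 8000 2000
```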
Create the model
Plain neural network with a single input, hidden and output layer.
The input size matches the number of our features. The output size matches the number of classes (due to one-hot encoding).
We choose 3 neurons for the hidden layer. | # the model is just a sequence of transformations - layer weights, activations, etc.
model = Sequential()
# weights from input to hidden layer - linear transform
model.add(Dense(3, input_dim=n_features))
# basic non-linearity
model.add(Activation("tanh"))
# weights from hidden to output layer
model.add(Dense(n_classes)... | snippets/keras/keras_hello_world.ipynb | bzamecnik/ml-playground | mit |
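A quick sketch of how many trainable parameters the two Dense layers described above contain, with the sizes taken from this example:

```python
n_features, n_hidden, n_classes = 2, 3, 3

# Dense layer parameters = inputs * outputs weights + one bias per output
hidden_params = n_features * n_hidden + n_hidden   # 2*3 + 3 = 9
output_params = n_hidden * n_classes + n_classes   # 3*3 + 3 = 12
print(hidden_params + output_params)               # 21
```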
Train the model
We train the model in 5 epochs with mini-batches of size 32 using plain SGD.
The progress is printed nicely to the console. A nice touch compared to theanets is that the progress bar is overwritten in place rather than appended row by row, which saves visual space and avoids clutter. | model.fit(X_train, y_train, nb_epoch=5, batch_size=32); | snippets/keras/keras_hello_world.ipynb | bzamecnik/ml-playground | mit |
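A back-of-the-envelope count of the gradient updates this training run performs, assuming the 8,000-sample training set (80% of 10,000) from the split above:

```python
import math

n_train, batch_size, n_epochs = 8000, 32, 5
updates_per_epoch = math.ceil(n_train / batch_size)
print(updates_per_epoch, updates_per_epoch * n_epochs)   # 250 1250
```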
Evaluate the model
Since we have a multi-class classification problem, the basic metric is accuracy.
The Keras model can compute it for us; otherwise we could reach for sklearn.
Progress is also printed while the model is used for predictions. | def evaluate_accuracy(X, y, label):
    _, accuracy = model.evaluate(X, y, show_accuracy=True)
    print(label, 'accuracy:', 100 * accuracy, '%')
evaluate_accuracy(X_train, y_train, 'training')
evaluate_accuracy(X_test, y_test, 'test')
y_test_pred = model.predict_classes(X_test)
plot_2d_blobs((X_test, ... | snippets/keras/keras_hello_world.ipynb | bzamecnik/ml-playground | mit |
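Accuracy is simple enough to compute by hand from the predicted classes; a minimal reference implementation:

```python
def accuracy(y_true, y_pred):
    # fraction of examples whose predicted class matches the true class
    correct = sum(int(t == p) for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

print(accuracy([0, 1, 2, 2], [0, 1, 1, 2]))   # 0.75
```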
Expected Output :
<table>
<tr>
<td>
**result**
</td>
<td>
[[-2.15657382]
[ 2.95891446]
[-1.08926781]
[-0.84538042]]
</td>
</tr>
</table>
1.2 - Computing the sigmoid
Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like tf.sigmoid and tf.softma... | # GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Computes the sigmoid of z
Arguments:
z -- input value, scalar or vector
Returns:
results -- the sigmoid of z
"""
### START CODE HERE ### ( approx. 4 lines of code)
# Create a placeholder for x. Name it 'x'.
x = tf.pl... | Improving Deep Neural networks- Hyperparameter Tuning - Regularization and Optimization/Tensorflow Tutorial.ipynb | anukarsh1/deep-learning-coursera | mit |
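To sanity-check the TensorFlow version, here is a plain-Python reference sigmoid (the same formula, no placeholder, graph or session involved):

```python
import math

def sigmoid_ref(z):
    # sigmoid(z) = 1 / (1 + exp(-z))
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid_ref(0))    # 0.5
print(sigmoid_ref(12))   # very close to 1
```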
Expected Output :
<table>
<tr>
<td>
**cost**
</td>
<td>
[ 1.00538719 1.03664088 0.41385433 0.39956614]
</td>
</tr>
</table>
1.4 - Using One Hot encodings
Many times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C i... | # GRADED FUNCTION: one_hot_matrix
def one_hot_matrix(labels, C):
"""
Creates a matrix where the i-th row corresponds to the ith class number and the jth column
corresponds to the jth training example. So if example j has label i, then entry (i,j)
will be 1.
... | Improving Deep Neural networks- Hyperparameter Tuning - Regularization and Optimization/Tensorflow Tutorial.ipynb | anukarsh1/deep-learning-coursera | mit |
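The same one-hot matrix can be built without TensorFlow. A toy reference with plain lists, using the layout described above (rows are classes, columns are examples):

```python
def one_hot_ref(labels, C):
    # entry (i, j) is 1 exactly when example j has label i
    return [[1 if label == c else 0 for label in labels] for c in range(C)]

for row in one_hot_ref([1, 2, 0, 1], 3):
    print(row)
```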
Text classification for SMS spam detection | import os
with open(os.path.join("datasets", "smsspam", "SMSSpamCollection")) as f:
lines = [line.strip().split("\t") for line in f.readlines()]
text = [x[1] for x in lines]
y = [x[0] == "ham" for x in lines]
text[:10]
y[:10]
type(text)
type(y)
from sklearn.cross_validation import train_test_split
text_train,... | notebooks/03.5 Case Study - SMS Spam Detection.ipynb | mhdella/scipy_2015_sklearn_tutorial | cc0-1.0 |
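Before a classifier can be trained, the SMS text has to be turned into numeric features. A toy bag-of-words sketch of what sklearn's CountVectorizer does, with deliberately simplified tokenization (lowercase, whitespace split):

```python
def fit_vocab(docs):
    # collect all distinct words and assign each an index
    words = {w for doc in docs for w in doc.lower().split()}
    return {w: i for i, w in enumerate(sorted(words))}

def to_counts(docs, vocab):
    # one row of word counts per document
    rows = []
    for doc in docs:
        row = [0] * len(vocab)
        for w in doc.lower().split():
            if w in vocab:           # ignore out-of-vocabulary words
                row[vocab[w]] += 1
        rows.append(row)
    return rows

vocab = fit_vocab(["free prize now", "call me now"])
print(to_counts(["now now free"], vocab))   # [[0, 1, 0, 2, 0]]
```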
Training a Classifier on Text Features
We can now train a classifier, for instance a logistic regression classifier which is a fast baseline for text classification tasks: | from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
clf
clf.fit(X_train, y_train) | notebooks/03.5 Case Study - SMS Spam Detection.ipynb | mhdella/scipy_2015_sklearn_tutorial | cc0-1.0 |
Perform a simple Diffusion Pseudotime analysis on raw data, as in Haghverdi et al. (2016). No preprocessing, only logarithmize the raw counts.
Note: The following function is also available as sc.datasets.paul15(). | adata = sc.datasets.paul15()
sc.pp.log1p(adata) # logarithmize data
sc.pp.neighbors(adata, n_neighbors=20, use_rep='X', method='gauss')
sc.tl.diffmap(adata)
sc.tl.dpt(adata, n_branchings=1, n_dcs=10) | 170502_paul15/paul15.ipynb | theislab/scanpy_usage | bsd-3-clause |
Diffusion Pseudotime (DPT) analysis detects the branch of granulocyte/macrophage progenitors (GMP), and the branch of megakaryocyte/erythrocyte progenitors (MEP). There are two small further subgroups (segments 0 and 2). | sc.pl.diffmap(adata, color=['dpt_pseudotime', 'dpt_groups', 'paul15_clusters']) | 170502_paul15/paul15.ipynb | theislab/scanpy_usage | bsd-3-clause |
With this, we reproduced the analysis of Haghverdi et al. (2016, Suppl. Note 4 and Suppl. Figure N4). | adata.write(results_file) | 170502_paul15/paul15.ipynb | theislab/scanpy_usage | bsd-3-clause |