markdown | code | path | repo_name | license
|---|---|---|---|---|
The Score_gda column does have some importance for us.
It is the one that tells us how reliable each entry of the dataset is.
People who know some biology will surely understand much more about the relevance of this parameter, but for now let's keep the idea that the higher it is, the greater the reli... | # In general we will access columns as dataframe['column']
# so to get the Score_gda out of disgenet_data we can do:
# disgenet_data['Score_gda']
#
# To extract just the values inside we can do:
score_values = disgenet_data['Score_gda'].values
print(score_values)
print(f"# of entries: {le... | python/Extras/Pandas_DataFrames/braindiseases.ipynb | fifabsas/talleresfifabsas | mit |
Notice that we extracted all the data and stored it in a numpy array!
Here we are right where we want to be, because in class we saw a whole bunch of things that can be done with these arrays.
For example, we can look at its distribution with a histogram: | _ = plt.hist(disgenet_data['Score_gda'].values) | python/Extras/Pandas_DataFrames/braindiseases.ipynb | fifabsas/talleresfifabsas | mit |
We can see that a lot of the data points are really unreliable.
An easy thing we might want to do, then, is filter the data and keep only the entries with a high score, for example greater than 0.7 | # Here we are going to use pandas' loc indexer.
# basically you give it a condition and it returns the rows that satisfy that condition
disgenet_cortado = disgenet_data.loc[disgenet_data['Score_gda'] > 0.7]
disgenet_cortado | python/Extras/Pandas_DataFrames/braindiseases.ipynb | fifabsas/talleresfifabsas | mit |
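Boolean-mask filtering like `disgenet_data.loc[...]` can be sketched without pandas; the rows and gene names below are made up purely for illustration:

```python
# Plain-Python sketch of boolean-mask row filtering, mirroring
# disgenet_data.loc[disgenet_data['Score_gda'] > 0.7].
# The rows below are invented for illustration.
rows = [
    {"Gene": "APP", "Score_gda": 0.9},
    {"Gene": "PSEN1", "Score_gda": 0.3},
    {"Gene": "MAPT", "Score_gda": 0.71},
]

def filter_rows(rows, column, threshold):
    """Keep only the rows whose value in `column` exceeds `threshold`."""
    return [row for row in rows if row[column] > threshold]

kept = filter_rows(rows, "Score_gda", 0.7)
print([row["Gene"] for row in kept])  # → ['APP', 'MAPT']
```

The list comprehension plays the role of the boolean mask: each row is kept or dropped based on the condition.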
Notice how the data got cut down!
We went from 28571 entries to only 1982.
In popular jargon this is known as "axing the data" or "taking a chainsaw to it".
After destroying 90% of the data, a more than reasonable question is: what survived?
For that we can look at, for examp... | # Here we are going to use the 'unique' function, so it only shows the distinct associations
# that appear, instead of repeating each one a hundred times
print( disgenet_cortado['Association_Type'].unique() ) | python/Extras/Pandas_DataFrames/braindiseases.ipynb | fifabsas/talleresfifabsas | mit |
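What `Series.unique()` returns (each distinct value once, in order of first appearance) can be mimicked in plain Python; the association labels below are invented for illustration:

```python
# Plain-Python sketch of pandas' Series.unique(): each distinct value once,
# in order of first appearance. The labels are invented for illustration.
def unique_preserving_order(values):
    seen = set()
    result = []
    for value in values:
        if value not in seen:
            seen.add(value)
            result.append(value)
    return result

labels = ["Biomarker", "GeneticVariation", "Biomarker", "Therapeutic", "Biomarker"]
print(unique_preserving_order(labels))  # → ['Biomarker', 'GeneticVariation', 'Therapeutic']
```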
Your homework is to verify that we did indeed lose some association types in the data cut (while you are at it, you can check which ones we lost and tell us whether losing those makes sense, because we have no idea).
To finish, we can look at how the association types are distributed.
That is, ... | disgenet_cortado.groupby("Association_Type").count()
# If we only want to see one column of the output:
disgenet_cortado.groupby("Association_Type")['Gene'].count()
# If you want to see it as a bar chart, here it is for you:
asoc_types = disgenet_cortado.groupby("Association_Type")['Gene'].count()
ax = aso... | python/Extras/Pandas_DataFrames/braindiseases.ipynb | fifabsas/talleresfifabsas | mit |
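The groupby-then-count pattern is essentially a frequency table; the standard library's `collections.Counter` does the same job for a flat list of labels (the labels below are invented for illustration):

```python
from collections import Counter

# Plain-Python analogue of groupby("Association_Type")['Gene'].count():
# a frequency table of labels. The labels are invented for illustration.
labels = ["Biomarker", "GeneticVariation", "Biomarker", "Biomarker", "Therapeutic"]
counts = Counter(labels)
print(counts["Biomarker"])    # → 3
print(counts.most_common(1))  # → [('Biomarker', 3)]
```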
2. Load data
2.1 Read data and mask arr<=0.0 | geo = gdal.Open(r'data\Hazard_AUS__1000.grd')
arr = geo.ReadAsArray()
arr = ma.masked_less_equal(arr, 0.0, copy=True) | ex22-Visualize GAR Global Flood Hazard Map with Python.ipynb | royalosyin/Python-Practical-Application-on-Climate-Variability-Studies | mit |
2.2 Prepare coordinates | x_coords = np.arange(geo.RasterXSize)
y_coords = np.arange(geo.RasterYSize)
(upper_left_x, x_size, x_rotation, upper_left_y, y_rotation, y_size) = geo.GetGeoTransform()
x_coords = x_coords * x_size + upper_left_x + (x_size / 2) # add half the cell size
y_coords = y_coords * y_size + upper_left_y + (y_size / 2) # to ... | ex22-Visualize GAR Global Flood Hazard Map with Python.ipynb | royalosyin/Python-Practical-Application-on-Climate-Variability-Studies | mit |
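The cell-center computation above follows the GDAL geotransform convention: the origin sits at the upper-left corner, and each center is offset by half a cell. A standard-library sketch of the same mapping, with a made-up geotransform:

```python
# Sketch of mapping pixel indices to cell-center coordinates from a GDAL-style
# geotransform (upper_left_x, x_size, ...). The numbers below are invented.
def cell_centers(n, origin, size):
    """Coordinates of the centers of n cells along one axis."""
    return [origin + (i + 0.5) * size for i in range(n)]

upper_left_x, x_size = 110.0, 0.5  # hypothetical values
x_coords = cell_centers(4, upper_left_x, x_size)
print(x_coords)  # → [110.25, 110.75, 111.25, 111.75]
```

This is the same arithmetic as `x_coords * x_size + upper_left_x + (x_size / 2)`, just written per index.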
3. Visualize | fig = plt.figure(figsize=(9, 15))
ax = fig.add_subplot(1, 1, 1)
m = Basemap(projection='cyl', resolution='i',
llcrnrlon=min(x_coords), llcrnrlat=min(y_coords),
urcrnrlon=max(x_coords), urcrnrlat=max(y_coords))
x, y = m(*np.meshgrid(x_coords, y_coords))
#m.arcgisimage(service='World_Terrain_Base', xpixe... | ex22-Visualize GAR Global Flood Hazard Map with Python.ipynb | royalosyin/Python-Practical-Application-on-Climate-Variability-Studies | mit |
Distribution of Vehicles by Registration Year | # Create a plot of the distribution of vehicles by registration year
# Saving the plot
fig.savefig("plots/Analise1/vehicle-distribution.png") | Cap09/Mini-Projeto2/Mini-Projeto2 - Analise1.ipynb | dsacademybr/PythonFundamentos | gpl-3.0 |
Variation of the price range by vehicle type | # Create a boxplot to assess the outliers
# Saving the plot
fig.savefig("plots/Analise1/price-vehicleType-boxplot.png") | Cap09/Mini-Projeto2/Mini-Projeto2 - Analise1.ipynb | dsacademybr/PythonFundamentos | gpl-3.0 |
Total count of vehicles for sale by vehicle type | # Create a count plot showing the number of vehicles belonging to each category
# Saving the plot
g.savefig("plots/Analise1/count-vehicleType.png") | Cap09/Mini-Projeto2/Mini-Projeto2 - Analise1.ipynb | dsacademybr/PythonFundamentos | gpl-3.0 |
Python can also be used like a calculator. | 2 + 3
a = 2 + 3
a + 1
42 - 15.3
100 * 11
7 / 2 | previous/notes2017/W01/GongSu03_Python_DataTypes_1.ipynb | liganega/Gongsu-DataSci | gpl-3.0 |
Note:
In Python 2, the division operator (/) computes the integer quotient when both operands are integers, whereas it returns a float when a float is involved.
In Python 3, the division operator (/) always returns a float.
In [22]: 7 / 2
Out[22]: 3.5 | 7.0 / 2 | previous/notes2017/W01/GongSu03_Python_DataTypes_1.ipynb | liganega/Gongsu-DataSci | gpl-3.0 |
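The Python 3 behavior described above is quick to verify: `/` always yields a float, while `//` recovers the integer quotient that Python 2's `/` produced for integers.

```python
# Python 3 division: / always returns a float, // gives the floored quotient.
print(7 / 2)   # → 3.5
print(7 // 2)  # → 3
print(type(8 / 2).__name__)  # float even when the result is a whole number
```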
The operator that computes the remainder is %. | 7%5 | previous/notes2017/W01/GongSu03_Python_DataTypes_1.ipynb | liganega/Gongsu-DataSci | gpl-3.0 |
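A quick check of the remainder operator; `divmod` returns the quotient and the remainder at once:

```python
a, b = 7, 5
print(a % b)         # → 2
print(divmod(a, b))  # → (1, 2)
# The invariant linking quotient and remainder:
print(a == (a // b) * b + a % b)  # → True
```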
Basic data types
Python comes with 8 built-in data types. Four of them are simple types, and the remaining four are collection types (container types).
Simple types
They are called simple types in the sense that each targets a single value: one integer, one float, one boolean value, one string, and so on.
Integer (int)
Float (float)
Boolean value (bool)
String (str)
Collection types
They are called collection types in the sense that several values are bundled into one and handled together.
List (list)
Tuple (tuple)
Set (set)
Dictionary (dictionary)
Here we introduce the simple types, and the coll... | new_float = 4.0
print(new_float) | previous/notes2017/W01/GongSu03_Python_DataTypes_1.ipynb | liganega/Gongsu-DataSci | gpl-3.0 |
Exercise
Define a function seconds2days(n) that takes a number in units of seconds and converts it into a value in units of days. The input can be an int or a float, and the return value must be of type float.
Usage example:
In [ ]: seconds2days(43200)
Out[ ]: 0.5
Sample answer: | # One day consists of the number of seconds below.
# One day = 24 hours * 60 minutes * 60 seconds.
daysec = 60 * 60 * 24
# Now we can convert seconds into days.
def seconds2days(sec):
    """ Function converting sec into units of days.
    Mind the type conversion."""
    return (float(sec) / daysec)
seconds2days(43200) | previous/notes2017/W01/GongSu03_Python_DataTypes_1.ipynb | liganega/Gongsu-DataSci | gpl-3.0 |
In Python 3, the function may also be defined as follows.
def seconds2days(sec):
    return (sec / daysec)
Exercise
Define a function box_surface(a, b, c) that computes the surface area of a rectangular box whose side lengths are a, b, and c respectively.
For example, this is the problem of computing the amount of paint needed when you want to paint a box.
Usage example:
In [ ]: box_surface(1, 1, 1)
Out[ ]: 6
In [ ]: box_surface(2, 2, 3)
Out[ ]: 32
Sample answer: | def box_surface(a, b, c):
    """ Function returning the surface area of a box whose side lengths are a, b, and c respectively.
    Hint: summing the areas of the 6 faces does the job."""
    s1, s2, s3 = a * b, b * c, c * a
    return 2 * (s1 + s2 + s3)
box_surface(1, 1, 1)
box_surface(2, 2, 3) | previous/notes2017/W01/GongSu03_Python_DataTypes_1.ipynb | liganega/Gongsu-DataSci | gpl-3.0 |
Define the problem
First construct the model. | M = 50
m = np.zeros((M, 1))
m[10:15,:] = 1.0
m[15:27,:] = -0.3
m[27:35,:] = 2.1 | NumPy.ipynb | kwinkunks/axb | apache-2.0 |
Now we make the kernel matrix G, which represents convolution. | N = 20
L = 100
alpha = 0.8
x = np.arange(0, M, 1) * L/(M-1)
dx = L/(M-1)
r = np.arange(0, N, 1) * L/(N-1)
G = np.zeros((N, M))
for j in range(M):
for k in range(N):
G[k,j] = dx * np.exp(-alpha * np.abs(r[k] - x[j])**2)
plt.imshow(G, cmap='viridis', interpolation='none')
# Compute data
d = G.dot(m)
# Or... | NumPy.ipynb | kwinkunks/axb | apache-2.0 |
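The double loop above simply evaluates a Gaussian kernel on a coordinate grid; here is a standard-library sketch of the same construction on a tiny grid (the grid sizes and alpha below are made up for illustration):

```python
import math

# Stdlib sketch of the kernel construction G[k][j] = dx * exp(-alpha * |r[k] - x[j]|**2).
# Grid values and alpha are invented for illustration.
def gaussian_kernel_matrix(r, x, dx, alpha):
    return [[dx * math.exp(-alpha * abs(rk - xj) ** 2) for xj in x] for rk in r]

r = [0.0, 1.0]
x = [0.0, 0.5, 1.0]
G = gaussian_kernel_matrix(r, x, dx=0.5, alpha=0.8)
print(G[0][0])            # → 0.5 (zero distance, so dx * exp(0))
print(len(G), len(G[0]))  # → 2 3
```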
Noise-free: minimum norm | # Minimum norm solution in Python < 3.5
m_est = G.T.dot(la.inv(G.dot(G.T)).dot(d))
d_pred = G.dot(m_est)
# Or, in Python 3.5
m_est = G.T @ la.inv(G @ G.T) @ d
d_pred = G @ m_est
plot_all(m, d, m_est, d_pred) | NumPy.ipynb | kwinkunks/axb | apache-2.0 |
Solve with LAPACK | m_est = la.lstsq(G, d)[0]
d_pred = G.dot(m_est)
# Or, in Python 3.5
d_pred = G @ m_est
plot_all(m, d, m_est, d_pred) | NumPy.ipynb | kwinkunks/axb | apache-2.0 |
With noise: damped least squares | # Add noise.
dc = G.dot(m) # Python < 3.5
dc = G @ m # Python 3.5
# Add to the data.
s = 1
d = dc + s * np.random.random(dc.shape)
# Use the second form.
I = np.eye(N)
µ = 2.5 # We can use Unicode symbols in Python 3, just be careful
m_est = G.T.dot(la.inv(G.dot(G.T) + µ * I)).dot(d)
d_pred = G.dot(m_est)
# Or... | NumPy.ipynb | kwinkunks/axb | apache-2.0 |
With noise: damped least squares with first derivative regularization | convmtx([1, -1], 5)
W = convmtx([1,-1], M)[:,:-1] # Skip last column | NumPy.ipynb | kwinkunks/axb | apache-2.0 |
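Up to sign and boundary handling, the regularization matrix W built here is a first-difference operator: each row takes the difference of two adjacent model samples. A plain-Python sketch of that operator applied to a model vector:

```python
# First-difference operator: entry k is the difference of adjacent model samples.
# Up to sign and boundary handling, this is what convmtx([1, -1], M) encodes.
def first_difference(m_vec):
    return [m_vec[k + 1] - m_vec[k] for k in range(len(m_vec) - 1)]

print(first_difference([0.0, 1.0, 1.0, 3.0]))  # → [1.0, 0.0, 2.0]
# A flat model has zero first difference, so it incurs no roughness penalty:
print(first_difference([2.0, 2.0, 2.0]))       # → [0.0, 0.0]
```

Penalizing the norm of this vector is what pulls the damped solution toward smooth models.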
Now we solve:
$$ \hat{\mathbf{m}} = (\mathbf{G}^\mathrm{T} \mathbf{G} + \mu \mathbf{W}^\mathrm{T} \mathbf{W})^{-1} \mathbf{G}^\mathrm{T} \mathbf{d} $$ | m_est = la.inv(G.T.dot(G) + µ * W.T.dot(W)).dot(G.T.dot(d))
d_pred = G.dot(m_est)
# Or, in Python 3.5
m_est = la.inv(G.T @ G + µ * W.T @ W) @ G.T @ d
d_pred = G @ m_est
plot_all(m, d, m_est, d_pred) | NumPy.ipynb | kwinkunks/axb | apache-2.0 |
Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dict... | import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
words = list(set(text))
... | tv-script-generation/dlnd_tv_script_generation.ipynb | Bismarrck/deep-learning | mit |
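One plausible complete implementation of the lookup tables (this is a sketch, not necessarily the notebook's own solution; sorting the vocabulary just makes the id assignment deterministic):

```python
# A plausible implementation of the word <-> id lookup tables.
def create_lookup_tables(text):
    """text: list of words. Returns (vocab_to_int, int_to_vocab)."""
    vocab = sorted(set(text))  # deterministic id assignment
    vocab_to_int = {word: i for i, word in enumerate(vocab)}
    int_to_vocab = {i: word for word, i in vocab_to_int.items()}
    return vocab_to_int, int_to_vocab

words = "to be or not to be".split()
v2i, i2v = create_lookup_tables(words)
print(len(v2i))        # → 4 distinct words
print(i2v[v2i["be"]])  # → be (the two dicts invert each other)
```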
Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to token... | def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
puncs = {".": "Period", ",": "Comma", "\"": "QuotationMark",
";": "Semicolon", "!": "Exc... | tv-script-generation/dlnd_tv_script_generation.ipynb | Bismarrck/deep-learning | mit |
Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Inpu... | def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None], name="input")
targets = tf.placeholder(tf.int32, [None, None], name="targets")
... | tv-script-generation/dlnd_tv_script_generation.ipynb | Bismarrck/deep-learning | mit |
Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the follo... | lstm_layers = 2
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
# Stack up multiple... | tv-script-generation/dlnd_tv_script_generation.ipynb | Bismarrck/deep-learning | mit |
Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence. | def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Fun... | tv-script-generation/dlnd_tv_script_generation.ipynb | Bismarrck/deep-learning | mit |
Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number... | def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logi... | tv-script-generation/dlnd_tv_script_generation.ipynb | Bismarrck/deep-learning | mit |
Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- Th... | def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
# TODO: Implement Func... | tv-script-generation/dlnd_tv_script_generation.ipynb | Bismarrck/deep-learning | mit |
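The batching logic can be worked out without TensorFlow. Below is a simplified standard-library sketch that pairs each input sequence with a target shifted one word ahead; the notebook's actual solution may arrange sequences across batches differently:

```python
# Simplified sketch of get_batches: pairs of (input, target) sequences,
# where each target is the input shifted one word ahead.
def get_batches(int_text, batch_size, seq_length):
    words_per_batch = batch_size * seq_length
    n_batches = (len(int_text) - 1) // words_per_batch
    batches = []
    for b in range(n_batches):
        inputs, targets = [], []
        for s in range(batch_size):
            start = b * words_per_batch + s * seq_length
            inputs.append(int_text[start:start + seq_length])
            targets.append(int_text[start + 1:start + seq_length + 1])
        batches.append([inputs, targets])
    return batches

batches = get_batches(list(range(13)), batch_size=2, seq_length=3)
print(len(batches))      # → 2 full batches fit in 13 ids
print(batches[0][0][0])  # → [0, 1, 2] (input)
print(batches[0][1][0])  # → [1, 2, 3] (target, shifted by one)
```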
Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_e... | # Number of Epochs
num_epochs = 200
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 2000
# Sequence Length
seq_length = 50
# Learning Rate
learning_rate = 0.0002
# Show stats for every n number of batches
show_every_n_batches = 5
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = '.... | tv-script-generation/dlnd_tv_script_generation.ipynb | Bismarrck/deep-learning | mit |
Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTen... | def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
input_ ... | tv-script-generation/dlnd_tv_script_generation.ipynb | Bismarrck/deep-learning | mit |
Choose Word
Implement the pick_word() function to select the next word using probabilities. | def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
... | tv-script-generation/dlnd_tv_script_generation.ipynb | Bismarrck/deep-learning | mit |
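Sampling an index in proportion to a probability vector can be done with the standard library alone; a sketch using cumulative sums and `bisect` (a solution based on `np.random.choice` would be equally valid):

```python
import bisect
import itertools
import random

# Sketch of pick_word: sample an index proportionally to `probabilities`,
# then map it back to a word through int_to_vocab.
def pick_word(probabilities, int_to_vocab, rng=random):
    cumulative = list(itertools.accumulate(probabilities))
    r = rng.random() * cumulative[-1]
    return int_to_vocab[bisect.bisect(cumulative, r)]

int_to_vocab = {0: "homer", 1: "bart", 2: "moe"}
rng = random.Random(0)
words = [pick_word([0.1, 0.8, 0.1], int_to_vocab, rng) for _ in range(1000)]
print(words.count("bart") > 600)  # → True (bart dominates, as expected)
```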
0.1 Directory Set up & Display Image | datadir = ''
objname = '2016HO3'
def plotfits(imno):
img = fits.open(datadir+objname+'_{0:02d}.fits'.format(imno))[0].data
f = plt.figure(figsize=(10,12))
im = plt.imshow(img, cmap='hot')
im = plt.imshow(img[480:560, 460:540], cmap='hot')
plt.clim(1800, 2800)
plt.colorbar(im, fraction=0.046, ... | Sessions/Session05/Day3/Introduction to Astrometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
1. Centroiding on Images
Write a text file with image centers.
Write code to open each image and extract centroid position from previous exercise.
Save results in a text file. | centers = np.array([[502,501], [502,501]])
np.savetxt('centers.txt', centers, fmt='%i')
centers = np.loadtxt('centers.txt', dtype='int')
searchr = 5 | Sessions/Session05/Day3/Introduction to Astrometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
1.1 Center of Mass | def cent_weight(n):
"""
Assigns centroid weights
"""
wghts = np.zeros((n), float)
for i in range(n):
wghts[i] = float(i - n // 2) + 0.5
return wghts
def calc_CoM(psf, weights):
"""
Finds Center of Mass of image
"""
cent = np.zeros((2), float)
temp=sum(sum(psf) - min(sum(psf) ... | Sessions/Session05/Day3/Introduction to Astrometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
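The center of mass computed above is an intensity-weighted average of pixel positions; a standard-library sketch on a tiny 1-D cut (the real code works on 2-D cutouts and subtracts a background estimate first):

```python
# Stdlib sketch of an intensity-weighted centroid (center of mass) in 1-D.
# The notebook's code works on 2-D cutouts and removes a background first.
def centroid_1d(intensities):
    total = sum(intensities)
    return sum(i * v for i, v in enumerate(intensities)) / total

# A symmetric blob centered on pixel 2:
print(centroid_1d([0.0, 1.0, 4.0, 1.0, 0.0]))  # → 2.0
# Skewing flux to the right moves the centroid right:
print(centroid_1d([0.0, 1.0, 4.0, 3.0, 0.0]) > 2.0)  # → True
```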
2. Identifying Stars in the Field
Ex 1. Write code to identify stars in the field.
One way to do it would be:
Create a new image using an arcsinh mapping that captures the full dynamic range effectively.
Locate lower and upper bounds that should include only stars.
Refine the parameters to optimize the extraction of s... | no = 1
image = fits.open(datadir+objname+'_{0:02d}.fits'.format(no))[0].data | Sessions/Session05/Day3/Introduction to Astrometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
1.a. Create a new image using an arcsinh mapping that captures the full dynamic range effectively. Consider Gaussian smoothing to get rid of inhomogeneities in the image. | ## Some functions you may want to use
import skimage.exposure as skie
from scipy.ndimage import gaussian_filter
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ### | Sessions/Session05/Day3/Introduction to Astrometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
1.b. Create a new image that is scaled between the lower and upper limits for displaying the star map.
Search the arcsinh-stretched original image for local maxima and catalog those brighter than a threshold that is adjusted based on the image. | ## Consider using
import skimage.morphology as morph
### code here ###
### code here ###
### code here ###
### code here ###
### code here ### | Sessions/Session05/Day3/Introduction to Astrometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Plot image with identified stars and target | f = plt.figure(figsize=(10,12))
plt.imshow(opt_img, cmap='hot')
plt.colorbar(fraction=0.046, pad=0.04)
plt.scatter(x2, y2, s=80, facecolors='none', edgecolors='r')
plt.scatter(502.01468185, 501.00082137, s=80, facecolors='none', edgecolors='y' )
plt.show() | Sessions/Session05/Day3/Introduction to Astrometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
3. Converting pixel coordinates to WCS | def load_wcs_from_file(filename, xx, yy):
# Load the FITS hdulist using astropy.io.fits
hdulist = fits.open(filename)
# Parse the WCS keywords in the primary HDU
w = wcs.WCS(hdulist[0].header)
# Print out the "name" of the WCS, as defined in the FITS header
print(w.wcs.name)
# Print out a... | Sessions/Session05/Day3/Introduction to Astrometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Find position of Asteroid in WCS | wparams, scoords = load_wcs_from_file(datadir+objname+'_{0:02d}.fits'.format(1), x2, y2)
print(scoords)
wparams, tcoords = load_wcs_from_file(datadir+objname+'_{0:02d}.fits'.format(1), np.array([centlist[0][0]]), np.array([centlist[0][1]]))
print(tcoords) | Sessions/Session05/Day3/Introduction to Astrometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
3. Matching
3.1 Get astrometric catalog | job = Gaia.launch_job_async("SELECT * \
FROM gaiadr1.gaia_source \
WHERE CONTAINS(POINT('ICRS',gaiadr1.gaia_source.ra,gaiadr1.gaia_source.dec),CIRCLE('ICRS', 193.34, 33.86, 0.08))=1;" \
, dump_to_file=True)
print (job)
r = job.get_results()
print (r['source_id'], r['ra'], r['dec'])
print(type(r['ra'])) | Sessions/Session05/Day3/Introduction to Astrometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
3.2 Perform Match
Convert Gaia WCS coordinates to pixels | ra = np.array(r['ra'])
dec = np.array(r['dec'])
xpix, ypix = ### fill in one line here ### | Sessions/Session05/Day3/Introduction to Astrometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Plot Gaia stars over identified stars in image | f = plt.figure(figsize=(20,22))
plt.imshow(opt_img, cmap='hot')
plt.colorbar(fraction=0.046, pad=0.04)
plt.scatter(x2, y2, s=80, facecolors='none', edgecolors='r')
plt.scatter(xpix, ypix, s=80, facecolors='none', edgecolors='g')
#plt.scatter(xpix[17], ypix[17], s=80, facecolors='none', edgecolors='y')
plt.imshow(opt_im... | Sessions/Session05/Day3/Introduction to Astrometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Ex. 2 Find the amount of shift needed.
Match the catalogue stars to the identified stars and measure the shift needed to overlay the FoV stars on the catalogue.
E.g. find the closest detected star to one of the Gaia stars near the center of the image, find the magnitude of the shift, then shift all the other Gaia stars and see whether the resulting difference is sma... | ### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ### | Sessions/Session05/Day3/Introduction to Astrometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
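One way to carry out the matching step described above, sketched with the standard library: estimate the shift from a single pair, apply it to every catalogue star, and score the match by the summed nearest-neighbor distance. The coordinates below are toy values, not real catalogue data:

```python
import math

# Toy sketch of estimating a constant (x, y) shift between a catalogue and
# a set of detections by nearest-neighbor distance. Coordinates are invented.
detected = [(10.0, 10.0), (30.0, 12.0), (18.0, 40.0)]
catalogue = [(8.0, 7.0), (28.0, 9.0), (16.0, 37.0)]  # detected minus (2, 3)

def nearest_distance(p, points):
    return min(math.dist(p, q) for q in points)

def match_cost(shift, catalogue, detected):
    sx, sy = shift
    return sum(nearest_distance((x + sx, y + sy), detected) for x, y in catalogue)

# Estimate the shift from one pair, then verify it fits all the stars:
shift = (detected[0][0] - catalogue[0][0], detected[0][1] - catalogue[0][1])
print(shift)                                  # → (2.0, 3.0)
print(match_cost(shift, catalogue, detected)) # → 0.0
print(match_cost((0.0, 0.0), catalogue, detected) > 0)  # → True
```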
Shift | targshifted = centlist[0] + np.array([xshift, yshift]) | Sessions/Session05/Day3/Introduction to Astrometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Convert shifted coordinate into WCS | wparams, tscoords = load_wcs_from_file(datadir+objname+'_{0:02d}.fits'.format(1), np.array([targshifted[0][0][0]]), np.array([targshifted[0][0][1]])) | Sessions/Session05/Day3/Introduction to Astrometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Runtime comparison
Let's see how the runtimes of these two algorithms compare.
We expect variable elimination to outperform enumeration by a large margin as we reduce the number of repetitive calculations significantly. | %%timeit
enumeration_ask('Burglary', dict(JohnCalls=True, MaryCalls=True), burglary).show_approx()
%%timeit
elimination_ask('Burglary', dict(JohnCalls=True, MaryCalls=True), burglary).show_approx() | probability.ipynb | jo-tez/aima-python | mit |
We observe that variable elimination was faster than enumeration, as we had expected, but the gain in speed is modest: it is only about 30% faster.
<br>
This happened because the Bayesian network in question is quite small, with just 5 nodes, some of which aren't even required in the inference process.
For mo... | psource(BayesNode.sample) | probability.ipynb | jo-tez/aima-python | mit |
Image IO
Reading and writing images is easy. ANTsPy has some included data which we will use. | fname1 = ants.get_ants_data('r16')
fname2 = ants.get_ants_data('r64')
print(fname1)
img1 = ants.image_read(fname1)
img2 = ants.image_read(fname2)
print(img1) | tutorials/10minTutorial.ipynb | ANTsX/ANTsPy | apache-2.0 |
You can also convert numpy arrays to ANTsImage types. Here's an example of an fMRI image (an image with "components") | arr_4d = np.random.randn(70,70,70,10).astype('float32')
img_fmri = ants.from_numpy(arr_4d, has_components=True)
print(img_fmri) | tutorials/10minTutorial.ipynb | ANTsX/ANTsPy | apache-2.0 |
Once you have an ANTsImage type, it basically acts as a numpy array: | # clone
img = ants.image_read(fname1)
img2 = img.clone()
# convert to numpy
img_arr = img.numpy()
# create another image with same properties but different data
img2 = img.new_image_like(img_arr*2)
# save to file
# img.to_file(...)
# many useful things:
img.median()
img.std()
img.argmin()
img.argmax()
img.flatten()... | tutorials/10minTutorial.ipynb | ANTsX/ANTsPy | apache-2.0 |
Segmentation
This module includes Atropos segmentation, Joint Label Fusion, cortical thickness estimation, and prior-based segmentation.
Atropos segmentation: | img = ants.image_read(ants.get_ants_data('r16'))
img = ants.resample_image(img, (64,64), 1, 0)
mask = ants.get_mask(img)
img_seg = ants.atropos(a=img, m='[0.2,1x1]', c='[2,0]',
i='kmeans[3]', x=mask)
print(img_seg.keys())
ants.plot(img_seg['segmentation']) | tutorials/10minTutorial.ipynb | ANTsX/ANTsPy | apache-2.0 |
Cortical thickness: | img = ants.image_read( ants.get_ants_data('r16') ,2)
mask = ants.get_mask( img ).threshold_image( 1, 2 )
segs=ants.atropos( a = img, m = '[0.2,1x1]', c = '[2,0]', i = 'kmeans[3]', x = mask )
thickimg = ants.kelly_kapowski(s=segs['segmentation'], g=segs['probabilityimages'][1],
w=segs['proba... | tutorials/10minTutorial.ipynb | ANTsX/ANTsPy | apache-2.0 |
Registration
This module includes the main ANTs registration interface, from which all registration algorithms can be run - along with various functions for evaluating registration algorithms or resampling/reorienting images or applying specific transformations to images.
SyN registration: | fixed = ants.image_read( ants.get_ants_data('r16') ).resample_image((64,64),1,0)
moving = ants.image_read( ants.get_ants_data('r64') ).resample_image((64,64),1,0)
fixed.plot(overlay=moving, title='Before Registration')
mytx = ants.registration(fixed=fixed , moving=moving, type_of_transform='SyN' )
print(mytx)
warped_mo... | tutorials/10minTutorial.ipynb | ANTsX/ANTsPy | apache-2.0 |
You can also use the transforms output from registration and apply them directly to the image: | mywarpedimage = ants.apply_transforms(fixed=fixed, moving=moving,
transformlist=mytx['fwdtransforms'])
mywarpedimage.plot() | tutorials/10minTutorial.ipynb | ANTsX/ANTsPy | apache-2.0 |
Other utilities
N3 and N4 bias correction: | image = ants.image_read( ants.get_ants_data('r16') )
image_n4 = ants.n4_bias_field_correction(image)
ants.plot( image_n4 ) | tutorials/10minTutorial.ipynb | ANTsX/ANTsPy | apache-2.0 |
<h4>Classification Overview</h4>
<ul>
<li>Predict a binary class as output based on given features.
</li>
<li>Examples: Do we need to follow up on a customer review? Is this transaction fraudulent or valid one? Are there signs of onset of a medical condition or disease? Is this considered junk food or not?</li>
<li>L... | # Sigmoid or logistic function
# For any x, the output is bounded between 0 and 1.
def sigmoid_func(x):
return 1.0/(1 + math.exp(-x))
sigmoid_func(10)
sigmoid_func(-100)
sigmoid_func(0)
# Sigmoid function example
x = pd.Series(np.arange(-8, 8, 0.5))
y = x.map(sigmoid_func)
x.head()
fig = plt.figure(figsize = (12, 8))
pl... | 17-09-27-AWS Machine Learning A Complete Guide With Python/07 - Binary Classification/01 - ml_logistic_cost_example.ipynb | arcyfelix/Courses | apache-2.0 |
Example Dataset - Hours spent and Exam Results:
https://en.wikipedia.org/wiki/Logistic_regression
The sigmoid function produces an output between 0 and 1. Input close to 0 produces an output of 0.5 probability. Negative input produces values less than 0.5, while positive input produces values greater than 0.5 | data_path = r'..\Data\ClassExamples\HoursExam\HoursExamResult.csv'
df = pd.read_csv(data_path) | 17-09-27-AWS Machine Learning A Complete Guide With Python/07 - Binary Classification/01 - ml_logistic_cost_example.ipynb | arcyfelix/Courses | apache-2.0 |
Input Feature: Hours<br>
Output: Pass (1 = pass, 0 = fail) | df.head()
# optimal weights given in the wiki dataset
def straight_line(x):
return 1.5046 * x - 4.0777
# How does weight affect outcome
def straight_line_weight(weight1, x):
return weight1 * x - 4.0777
# Generate probability by running feature thru the linear model and then thru sigmoid function
y_vals = d... | 17-09-27-AWS Machine Learning A Complete Guide With Python/07 - Binary Classification/01 - ml_logistic_cost_example.ipynb | arcyfelix/Courses | apache-2.0 |
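Chaining the fitted line through the sigmoid gives the pass probability directly; a standard-library check using the wiki coefficients quoted above (1.5046 and -4.0777):

```python
import math

# Pass probability = sigmoid(1.5046 * hours - 4.0777), the model quoted above.
def sigmoid_func(x):
    return 1.0 / (1 + math.exp(-x))

def pass_probability(hours):
    return sigmoid_func(1.5046 * hours - 4.0777)

print(round(pass_probability(1), 3))  # low probability after 1 hour of study
print(round(pass_probability(5), 3))  # high probability after 5 hours
print(pass_probability(2.7) > 0.49)   # → True (near the 0.5 crossover point)
```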
At 2.7 hours of study time, we hit 0.5 probability. So, any student who spent 2.7 hours or more would have a higher probability of passing the exam.
In the above example,<br>
1. Top right quadrant = true positive. Pass got classified correctly as pass
2. Bottom left quadrant = true negative. Fail got classified correc... | weights = [0, 1, 2]
y_at_weight = {}
for w in weights:
y_calculated = []
y_at_weight[w] = y_calculated
for x in df.Hours:
y_calculated.append(sigmoid_func(straight_line_weight(w, x)))
y_sig_vals = y_vals.map(sigmoid_func)
fig = plt.figure(figsize = (12, 8))
plt.scatter(x = df.Hours,
... | 17-09-27-AWS Machine Learning A Complete Guide With Python/07 - Binary Classification/01 - ml_logistic_cost_example.ipynb | arcyfelix/Courses | apache-2.0 |
Logistic Regression Cost/Loss Function<br> | # Cost Function
z = pd.Series(np.linspace(0.000001, 0.999999, 100))
ypositive = -z.map(math.log)
ynegative = -z.map(lambda x: math.log(1-x))
fig = plt.figure(figsize = (12, 8))
plt.plot(z,
ypositive,
label = 'Loss curve for positive example')
plt.plot(z,
ynegative,
label = 'Loss c... | 17-09-27-AWS Machine Learning A Complete Guide With Python/07 - Binary Classification/01 - ml_logistic_cost_example.ipynb | arcyfelix/Courses | apache-2.0 |
Cost function is a log curve<br>
1. positive example correctly classified as positive is given a lower loss/cost
2. positive example incorrectly classified as negative is given a higher loss/cost
3. Negative example correctly classified as negative is given a lower loss/cost
4. Negative example incorrectly classified as... | def compute_logisitic_cost(y_actual, y_predicted):
y_pos_cost = y_predicted[y_actual == 1]
y_neg_cost = y_predicted[y_actual == 0]
positive_cost = (-y_pos_cost.map(math.log)).sum()
negative_cost = -y_neg_cost.map(lambda x: math.log(1 - x)).sum()
return positive_cost + negative_cost
# Example o... | 17-09-27-AWS Machine Learning A Complete Guide With Python/07 - Binary Classification/01 - ml_logistic_cost_example.ipynb | arcyfelix/Courses | apache-2.0 |
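The two-branch cost can be checked numerically with plain floats; a standard-library sketch of the same log-loss sum over a tiny labeled set:

```python
import math

# Stdlib sketch of the logistic log-loss summed over examples:
# positives contribute -log(p), negatives contribute -log(1 - p).
def logistic_cost(y_actual, y_predicted):
    cost = 0.0
    for y, p in zip(y_actual, y_predicted):
        cost += -math.log(p) if y == 1 else -math.log(1 - p)
    return cost

# Confident, correct predictions give a small cost:
print(round(logistic_cost([1, 0], [0.9, 0.1]), 4))
# Confident wrong predictions blow the cost up:
print(logistic_cost([1, 0], [0.1, 0.9]) > logistic_cost([1, 0], [0.9, 0.1]))  # → True
```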
Note: in Python 3, the input() function must be used instead of raw_input().
The code above is a program that computes the squares of integers.
However, if the user happens to enter anything other than an integer, the program crashes.
We now want to look at a solution to this problem.
Error examples
First, let's look at various examples of errors.
The following pieces of code all raise errors.
Example: division-by-zero error
python
4.6/0
Error explanation: you cannot divide by zero.
Example: syntax error
python
sentence = 'I am a sentence
Error explanation: the quotation marks at both ends of a string must come in pairs.
* ... | sentence = 'I am a sentence | previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb | liganega/Gongsu-DataSci | gpl-3.0 |
The messages reporting an error look very unfamiliar at first.
Briefly inspecting the error message above, it reads as follows.
File "<ipython-input-37-a6097ed4dc2e>", line 1
The error occurred on line 1
sentence = 'I am a sentence
^
Marks the position where the error occurred
SyntaxError: EOL while scanning string literal
Indicates the kind of error: a syntax error (SyntaxError)
Example
The example below shows the error that occurs when dividing by zero.
The error's ... | a = 0
4/a | previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb | liganega/Gongsu-DataSci | gpl-3.0 |
Kinds of errors
As we saw through the earlier examples, many kinds of errors occur, and as code gets longer or more complex, the chance of an error occurring steadily grows.
Knowing the kind of error makes it easier to figure out where and why the error occurred, so that the code can be fixed.
Therefore you should be able to identify the cause of an error right away, and for that you need to be able to read error messages properly.
Here, however, we only cover errors at the level of the examples mentioned and move on.
As you code you will run into all sorts of errors anyway, and through the process of checking each error's content and cause for yourself, you will accumulate mo... | from __future__ import print_function
number_to_square = raw_input("A number please")
# Note that the type of the number_to_square variable is a string (str).
# So if you want to do arithmetic, you must first convert it to an integer (int).
number = int(number_to_square)
print("The square is", number**2) | previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb | liganega/Gongsu-DataSci | gpl-3.0 |
The reason entering 3.2 raises an error is that the int() function can only handle strings that look like integers.
We wrote a program to square integers, but users will sometimes enter something else, and we need to be prepared for that.
That is, we must anticipate that an error may occur and decide how to respond; the try ... except ... statement lets us handle such exceptions. | number_to_square = raw_input("A number please:")
try:
    number = int(number_to_square)
    print("The square is", number ** 2, ".")
except:
    print("You must enter an integer.")
 | previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb | liganega/Gongsu-DataSci | gpl-3.0
To respond differently depending on the kind of error, specify the error type in the exception handler.
The code below raises different errors depending on the input and handles each one accordingly.
For a value error (ValueError): | number_to_square = raw_input("A number please: ")
try:
    number = int(number_to_square)
    a = 5/(number - 4)
    print("The result is", a, ".")
except ValueError:
    print("You must enter an integer.")
except ZeroDivisionError:
    print("Anything but 4, please.") | previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb | liganega/Gongsu-DataSci | gpl-3.0
For a division-by-zero error (ZeroDivisionError): | number_to_square = raw_input("A number please: ")
try:
    number = int(number_to_square)
    a = 5/(number - 4)
    print("The result is", a, ".")
except ValueError:
    print("You must enter an integer.")
except ZeroDivisionError:
    print("Anything but 4, please.") | previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb | liganega/Gongsu-DataSci | gpl-3.0
Note: implementing a program that keeps every possible exception in mind is a very hard task.
As seen above, you sometimes need to know the exact kind of error.
As the next example shows, if you name the wrong error type, the exception handling does not work. | try:
a = 1/0
except ValueError:
print("This program stops here.") | previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb | liganega/Gongsu-DataSci | gpl-3.0 |
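One way to avoid naming the wrong single type — a hedged sketch, not from the notebook — is to list several plausible error types in one except clause:

```python
# catching several specific error types in a single except clause
try:
    a = 1 / 0
except (ValueError, ZeroDivisionError) as e:
    handled = type(e).__name__
    print("handled:", handled)  # handled: ZeroDivisionError
```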
More detailed information about errors
More detailed information on the errors Python handles can be found at the sites below.
Python built-in exceptions documentation:
https://docs.python.org/3.4/library/exceptions.html
Python exception-handling documentation:
https://docs.python.org/3.4/tutorial/errors.html
Exercises
Exercise
The code below divides 100 by the entered value.
However, entering 0 raises a division-by-zero error (ZeroDivisionError). | from __future__ import print_function
number_to_square = raw_input("A number to divide 100: ")
number = int(number_to_square)
print("100 divided by your input is", 100/number, ".") | previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb | liganega/Gongsu-DataSci | gpl-3.0
Modify the code above so that the following hold.
The division is computed in floating point.
If a nonzero number is entered, 100 is divided by that number.
If 0 is entered, the user is told to enter a nonzero number.
If a non-numeric value is entered, the user is told to enter a number.
Sample answer: | number_to_square = raw_input("A number to divide 100: ")
try:
    number = float(number_to_square)
    print("100 divided by your input is", 100/number, ".")
except ZeroDivisionError:
    raise ZeroDivisionError('Enter a nonzero number.')
except ValueError:
    raise ValueError('Enter a number.')
number_to_square = raw_input("A n... | previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb | liganega/Gongsu-DataSci | gpl-3.0
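A Python 3 variant of the sample answer (input() instead of raw_input()), wrapped in a function of my own naming (`divide_100` is hypothetical) so it can be exercised without interactive input:

```python
def divide_100(text):
    # same structure as the sample answer above, but testable
    try:
        number = float(text)
        return 100 / number
    except ZeroDivisionError:
        raise ZeroDivisionError('Enter a nonzero number.')
    except ValueError:
        raise ValueError('Enter a number.')

print(divide_100("8"))  # 12.5
```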
Define the input string: | data = 'hello world'
print(data) | NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb | rahulremanan/python_tutorial | mit |
Define universe of possible input values: | alphabet = 'abcdefghijklmnopqrstuvwxyz ' | NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb | rahulremanan/python_tutorial | mit |
Define a mapping of characters to corresponding integers: | char_to_int = dict((c, i) for i, c in enumerate(alphabet))
int_to_char = dict((i, c) for i, c in enumerate(alphabet)) | NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb | rahulremanan/python_tutorial | mit |
Integer encoding of the input data: | integer_encoded = [char_to_int[char] for char in data]
print(integer_encoded) | NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb | rahulremanan/python_tutorial | mit |
One hot encoding: | onehot_encoded = list()
for value in integer_encoded:
letter = [0 for _ in range(len(alphabet))]
letter[value] = 1
onehot_encoded.append(letter)
print(onehot_encoded) | NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb | rahulremanan/python_tutorial | mit |
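The character-by-character loop above can also be vectorized with NumPy — a sketch assuming `np.eye`, which the notebook itself does not use:

```python
import numpy as np

alphabet = 'abcdefghijklmnopqrstuvwxyz '
char_to_int = {c: i for i, c in enumerate(alphabet)}
data = 'hello world'

# row i of the identity matrix is exactly the one-hot vector for integer i
onehot = np.eye(len(alphabet), dtype=int)[[char_to_int[c] for c in data]]
print(onehot.shape)  # (11, 27)
```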
Decoding one-hot encoded data -- First character: | inverted = int_to_char[np.argmax(onehot_encoded[0])]
print(inverted) | NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb | rahulremanan/python_tutorial | mit |
Decoding one-hot encoded data -- Entire one-hot encoded input: | decoded = list()
for i in range(len(onehot_encoded)):
decoded_char = int_to_char[np.argmax(onehot_encoded[i])]
decoded.append(decoded_char)
print (''.join([str(item) for item in decoded])) | NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb | rahulremanan/python_tutorial | mit |
Part 02a -- One-hot encoding using scikit-learn:
Importing libraries: | from numpy import array
from numpy import argmax
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder | NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb | rahulremanan/python_tutorial | mit |
Define example: | data = ['cold',
'cold',
'warm',
'cold',
'hot',
'hot',
'warm',
'cold',
'warm',
'hot']
values = array(data)
print(values) | NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb | rahulremanan/python_tutorial | mit |
Integer encoding: | label_encoder = LabelEncoder()
label_encoded = label_encoder.fit_transform(values)
print(label_encoded) | NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb | rahulremanan/python_tutorial | mit |
Binary encoding: | onehot_encoder = OneHotEncoder(sparse=False)
label_encoded = label_encoded.reshape(len(label_encoded), 1)
onehot_encoded = onehot_encoder.fit_transform(label_encoded)
print(onehot_encoded) | NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb | rahulremanan/python_tutorial | mit |
Invert first example: | inverted = label_encoder.inverse_transform([argmax(onehot_encoded[0, :])]) | NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb | rahulremanan/python_tutorial | mit |
Output the decoded example: | print(inverted) | NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb | rahulremanan/python_tutorial | mit |
Part 02b -- One-hot encode using keras:
Importing libraries: | from numpy import array
from numpy import argmax
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from keras.utils import to_categorical | NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb | rahulremanan/python_tutorial | mit |
Define the variable: | data = ['cold', 'cold', 'warm', 'cold', 'hot', 'hot', 'warm', 'cold', 'warm', 'hot']
values = array(data)
print(values) | NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb | rahulremanan/python_tutorial | mit |
Integer encoding: | label_encoder = LabelEncoder()
label_encoded = label_encoder.fit_transform(values)
print(label_encoded)
# one hot encode
encoded = to_categorical(label_encoded)
print(encoded)
# invert encoding
label_encoded = argmax(encoded[0])
inverted = label_encoder.inverse_transform([label_encoded])  # inverse_transform expects an array-like
print(inverted) | NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb | rahulremanan/python_tutorial | mit |
Part 02c -- One-hot encode using keras for numerical categories: | from numpy import array
from numpy import argmax
from keras.utils import to_categorical
# define example
data = [1, 3, 2, 0, 3, 2, 2, 1, 0, 1]
data = array(data)
print(data)
# one hot encode
encoded = to_categorical(data)
print(encoded)
# invert encoding
inverted = argmax(encoded[0])
print(inverted) | NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb | rahulremanan/python_tutorial | mit |
Linear models
Linear models are useful when little data is available or for very large feature spaces as in text classification. In addition, they form a good case study for regularization.
Linear models for regression
All linear models for regression learn a coefficient parameter coef_ and an offset intercept_ to make... | from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
X, y, true_coefficient = make_regression(n_samples=200, n_features=30, n_informative=10, noise=100, coef=True, random_state=5)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=5, train_size=60, test_... | notebooks/17.In_Depth-Linear_Models.ipynb | amueller/scipy-2017-sklearn | cc0-1.0 |
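The truncated sentence above describes predictions as a linear function of the inputs; a small sketch (not from the notebook) verifying that `predict` is just `X @ coef_ + intercept_`:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=50, n_features=3, noise=10, random_state=0)
lr = LinearRegression().fit(X, y)

# a prediction is the learned linear function of the features
manual = X @ lr.coef_ + lr.intercept_
print(np.allclose(manual, lr.predict(X)))  # True
```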
Lasso (L1 penalty)
The Lasso estimator is useful to impose sparsity on the coefficients. In other words, it is to be preferred if we believe that many of the features are not relevant. This is done via the so-called l1 penalty.
$$ \text{min}_{w, b} \sum_i \frac{1}{2} || w^\mathsf{T}x_i + b - y_i||^2 + \alpha ||w||_1$$ | from sklearn.linear_model import Lasso
lasso_models = {}
training_scores = []
test_scores = []
for alpha in [30, 10, 1, .01]:
lasso = Lasso(alpha=alpha).fit(X_train, y_train)
training_scores.append(lasso.score(X_train, y_train))
test_scores.append(lasso.score(X_test, y_test))
lasso_models[alpha] = las... | notebooks/17.In_Depth-Linear_Models.ipynb | amueller/scipy-2017-sklearn | cc0-1.0 |
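To see the sparsity the l1 penalty induces, one can count nonzero coefficients as alpha varies — a sketch reusing the data-generating parameters and alpha values from the cells above:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

X, y = make_regression(n_samples=200, n_features=30, n_informative=10,
                       noise=100, random_state=5)
nonzero = {}
for alpha in [30, 10, 1, 0.01]:
    lasso = Lasso(alpha=alpha).fit(X, y)
    # larger alpha drives more coefficients exactly to zero
    nonzero[alpha] = int(np.sum(lasso.coef_ != 0))
    print(alpha, nonzero[alpha])
```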
Similar to the Ridge/Lasso separation, you can set the penalty parameter to 'l1' to enforce sparsity of the coefficients (similar to Lasso) or 'l2' to encourage smaller coefficients (similar to Ridge).
Multi-class linear classification | from sklearn.datasets import make_blobs
plt.figure()
X, y = make_blobs(random_state=42)
plt.scatter(X[:, 0], X[:, 1], c=plt.cm.spectral(y / 2.));
from sklearn.svm import LinearSVC
linear_svm = LinearSVC().fit(X, y)
print(linear_svm.coef_.shape)
print(linear_svm.intercept_.shape)
plt.scatter(X[:, 0], X[:, 1], c=plt.cm... | notebooks/17.In_Depth-Linear_Models.ipynb | amueller/scipy-2017-sklearn | cc0-1.0 |
Points are classified in a one-vs-rest fashion (aka one-vs-all), where we assign a test point to the class whose model has the highest confidence (in the SVM case, highest distance to the separating hyperplane) for the test point.
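The "highest confidence" rule can be checked directly — a sketch (my addition, not in the notebook) comparing the argmax of `decision_function` with `predict`:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import LinearSVC

X, y = make_blobs(random_state=42)
clf = LinearSVC().fit(X, y)

# one-vs-rest: the predicted class is the one whose binary model
# returns the highest decision value for the point
scores = clf.decision_function(X)  # shape (n_samples, n_classes)
agree = bool(np.all(scores.argmax(axis=1) == clf.predict(X)))
print(agree)  # True
```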
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>
Use Log... | from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
digits = load_digits()
X_digits, y_digits = digits.data, digits.target
# split the dataset, apply grid-search
# %load solutions/17A_logreg_grid.py
# %load solutions/17B_learning_curve_alpha.py | notebooks/17.In_Depth-Linear_Models.ipynb | amueller/scipy-2017-sklearn | cc0-1.0 |
You can apply the replace() method multiple times: | # Create a variable that stores the strong called 'apple'
a = 'apple'
# Create a copy of a with the ps, l, and e removed and reassign the value of a
a = a.replace('p','').replace('l','').replace('e','')
print(a) | winter2017/econ129/python/Econ129_Class_03_Complete.ipynb | letsgoexploring/teaching | mit |
Now we have the tools to solve the email problem. | # Original character string
string = '"Carl Friedrich Gauss" <approximatelynormal@email.com>, "Leonhard Euler" <e@email.com>, "Bernhard Riemann" <zeta@email.com>'
# Remove <, >, and " from string and overwrite and print the result
string = string.replace('<','').replace('>','').replace('"','')
# Create a new variab... | winter2017/econ129/python/Econ129_Class_03_Complete.ipynb | letsgoexploring/teaching | mit |
A related problem might be to extract only the email address from the original string. To do this, we can use the replace() method to remove the '<', '>', and ',' characters. Then we use the split() method to break the string apart at the spaces. Then we loop over the resulting list of strings and take only the strings...
string = string.replace('<','').replace('>','').replace('"','').replace(',','')
for s in string.split():
if '@' in s:
print(s) | winter2017/econ129/python/Econ129_Class_03_Complete.ipynb | letsgoexploring/teaching | mit |
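An alternative to the replace/split chain — a regex sketch; the pattern is my assumption, not the notebook's approach:

```python
import re

string = ('"Carl Friedrich Gauss" <approximatelynormal@email.com>, '
          '"Leonhard Euler" <e@email.com>, "Bernhard Riemann" <zeta@email.com>')
# grab whatever sits between each pair of angle brackets
emails = re.findall(r'<([^>]+)>', string)
print(emails)
```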
Numpy
NumPy is a powerful Python module for scientific computing. Among other things, NumPy defines an N-dimensional array object that is especially convenient to use for plotting functions and for simulating and storing time series data. NumPy also defines many useful mathematical functions like, for example, the sine... | import numpy as np | winter2017/econ129/python/Econ129_Class_03_Complete.ipynb | letsgoexploring/teaching | mit |
NumPy arrays
A NumPy ndarray is a homogeneous multidimensional array. Here, homogeneous means that all of the elements of the array have the same type. An ndarray is a table of numbers (like a matrix but with possibly more dimensions) indexed by a tuple of positive integers. The dimensions of NumPy arrays are called ax... | # Create a variable called a1 equal to a numpy array containing the numbers 1 through 5
a1 = np.array([1,2,3,4,5])
print(a1)
# Find the type of a1
print(type(a1))
# find the shape of a1
print(np.shape(a1))
# Use ndim to find the rank or number of dimensions of a1
print(np.ndim(a1))
# Create a variable called a2 equ... | winter2017/econ129/python/Econ129_Class_03_Complete.ipynb | letsgoexploring/teaching | mit |
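The a2 cell above is truncated; as a generic illustration (not that cell's actual content), here are shape and ndim for a rank-2 array:

```python
import numpy as np

# a generic rank-2 example: 2 rows, 3 columns
m = np.array([[1, 2, 3], [4, 5, 6]])
print(m.shape)  # (2, 3)
print(m.ndim)   # 2
```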