| markdown | code | path | repo_name | license |
|---|---|---|---|---|
The Score_gda column does matter to us.
It is what tells us how reliable each entry of the dataset is.
People who know some biology will surely understand much more about the relevance of this parameter, but for now let's stick with the idea that the higher it is, the more reliable the corresponding gene-disease assignment.
describe gives us some information, but one might want to extract it to look at it in more detail.
Let's see how to extract the values of a column:
|
# In general we access columns as dataframe['column'],
# so to get Score_gda out of disgenet_data we can do:
# disgenet_data['Score_gda']
#
# To get just the values inside, we can do:
score_values = disgenet_data['Score_gda'].values
print(score_values)
print(f"# de entradas: {len(score_values)}")
print(f"Tipo: {type(score_values)}")
|
python/Extras/Pandas_DataFrames/braindiseases.ipynb
|
fifabsas/talleresfifabsas
|
mit
|
Notice that we extracted all the data and stored it in a NumPy array!
This is exactly where we want to be, because in class we saw plenty of things that can be done with these arrays.
For example, we can look at their distribution with a histogram:
|
_ = plt.hist(disgenet_data['Score_gda'].values)
|
python/Extras/Pandas_DataFrames/braindiseases.ipynb
|
fifabsas/talleresfifabsas
|
mit
|
We can see that a lot of the entries are very unreliable.
An easy thing we might want to do, then, is filter the data and keep only the entries with a high score, for example greater than 0.7.
|
# Here we use pandas' .loc indexer:
# given a boolean condition, it returns the rows that satisfy it
disgenet_cortado = disgenet_data.loc[disgenet_data['Score_gda'] > 0.7]
disgenet_cortado
|
python/Extras/Pandas_DataFrames/braindiseases.ipynb
|
fifabsas/talleresfifabsas
|
mit
|
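As a side note, the same filter can be written a couple of equivalent ways; a minimal sketch on a toy DataFrame (the column names mirror the notebook, the values are made up):

```python
import pandas as pd

# Toy stand-in for disgenet_data
df = pd.DataFrame({"Score_gda": [0.1, 0.75, 0.9, 0.3],
                   "Gene": ["A", "B", "C", "D"]})

# .loc with a boolean mask keeps the rows where the condition holds
high = df.loc[df["Score_gda"] > 0.7]

# An equivalent query-string form
high_q = df.query("Score_gda > 0.7")

print(high["Gene"].tolist())  # only rows B and C survive
```

Both forms return the same rows; `query` can be more readable for long conditions.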
Notice how the data got trimmed down!
We went from 28571 entries to just 1982.
In popular jargon this is called "axing the data" or "taking a chainsaw to it".
After discarding about 90% of the data, a more than reasonable question is: what survived?
For that we can look, for example, at which types of associations survived the purge.
|
# Here we use the 'unique' function so that it only shows the distinct
# associations that appear, instead of repeating each one a hundred times
print( disgenet_cortado['Association_Type'].unique() )
|
python/Extras/Pandas_DataFrames/braindiseases.ipynb
|
fifabsas/talleresfifabsas
|
mit
|
Your homework is to verify that we did indeed lose some association types in the data cut (while you're at it, you can check which ones we lost and tell us whether losing those makes sense, because we have no idea).
To finish, we can look at how the association types are distributed.
That is, which associations are more or less frequent.
This is easily done with the groupby function.
|
disgenet_cortado.groupby("Association_Type").count()
# If we only want to see one column of the output:
disgenet_cortado.groupby("Association_Type")['Gene'].count()
# If you want to see it as a bar chart, here it is:
asoc_types = disgenet_cortado.groupby("Association_Type")['Gene'].count()
ax = asoc_types.plot.bar()
# Here we rotate the tick labels a bit
for tick in ax.get_xticklabels():
tick.set_ha('right')
tick.set_rotation(50)
|
python/Extras/Pandas_DataFrames/braindiseases.ipynb
|
fifabsas/talleresfifabsas
|
mit
|
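For a single column, `value_counts` gives the same counts as the groupby in one call; a small sketch on made-up data:

```python
import pandas as pd

# Toy data mimicking the Association_Type column from the notebook
df = pd.DataFrame({"Association_Type": ["GeneticVariation", "Biomarker",
                                        "Biomarker", "GeneticVariation",
                                        "Biomarker"]})

# One call instead of groupby(...).count(); results come sorted by frequency
counts = df["Association_Type"].value_counts()
print(counts)
```

`counts.plot.bar()` would then produce the same kind of bar chart as above.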
2. Load data
2.1 Read data and mask arr<=0.0
|
from osgeo import gdal
import numpy.ma as ma

geo = gdal.Open('data/Hazard_AUS__1000.grd')  # forward slashes avoid backslash-escape issues
arr = geo.ReadAsArray()
arr = ma.masked_less_equal(arr, 0.0, copy=True)
|
ex22-Visualize GAR Global Flood Hazard Map with Python.ipynb
|
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
|
mit
|
2.2 Prepare coordinates
|
x_coords = np.arange(geo.RasterXSize)
y_coords = np.arange(geo.RasterYSize)
(upper_left_x, x_size, x_rotation, upper_left_y, y_rotation, y_size) = geo.GetGeoTransform()
x_coords = x_coords * x_size + upper_left_x + (x_size / 2) # add half the cell size
y_coords = y_coords * y_size + upper_left_y + (y_size / 2) # to centre the point
|
ex22-Visualize GAR Global Flood Hazard Map with Python.ipynb
|
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
|
mit
|
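For reference, the GDAL GeoTransform used above is just an affine map from (column, row) pixel indices to map coordinates; a minimal pure-Python sketch with made-up coefficients (the half-cell shift matches the notebook):

```python
# Made-up GeoTransform coefficients (upper-left corner, cell sizes, no rotation)
upper_left_x, x_size, x_rotation = 110.0, 0.5, 0.0
upper_left_y, y_rotation, y_size = -10.0, 0.0, -0.5

def pixel_center(col, row):
    """Map coordinates of a pixel centre (same half-cell shift as above)."""
    x = upper_left_x + (col + 0.5) * x_size + row * x_rotation
    y = upper_left_y + (row + 0.5) * y_size + col * y_rotation
    return x, y

print(pixel_center(0, 0))  # centre of the upper-left cell
```

With these coefficients, `pixel_center(0, 0)` is `(110.25, -10.25)`.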
3. Visualize
|
fig = plt.figure(figsize=(9, 15))
ax = fig.add_subplot(1, 1, 1)
m = Basemap(projection='cyl', resolution='i',
llcrnrlon=min(x_coords), llcrnrlat=min(y_coords),
urcrnrlon=max(x_coords), urcrnrlat=max(y_coords))
x, y = m(*np.meshgrid(x_coords, y_coords))
#m.arcgisimage(service='World_Terrain_Base', xpixels = 3500, dpi=500, verbose= True)
cs = m.contourf(x, y, arr, cmap='RdBu_r')
m.drawcoastlines()
m.drawrivers()
m.drawstates()
cb = m.colorbar(cs, pad="1%", size="3%",)
plt.title('Flood Hazard 1000 years (cm)')
|
ex22-Visualize GAR Global Flood Hazard Map with Python.ipynb
|
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
|
mit
|
Distribution of Vehicles by Registration Year
|
# Create a plot of the distribution of vehicles by registration year
# Saving the plot
fig.savefig("plots/Analise1/vehicle-distribution.png")
|
Cap09/Mini-Projeto2/Mini-Projeto2 - Analise1.ipynb
|
dsacademybr/PythonFundamentos
|
gpl-3.0
|
Price range variation by vehicle type
|
# Create a boxplot to assess the outliers
# Saving the plot
fig.savefig("plots/Analise1/price-vehicleType-boxplot.png")
|
Cap09/Mini-Projeto2/Mini-Projeto2 - Analise1.ipynb
|
dsacademybr/PythonFundamentos
|
gpl-3.0
|
Total count of vehicles for sale by vehicle type
|
# Create a count plot showing the number of vehicles in each category
# Saving the plot
g.savefig("plots/Analise1/count-vehicleType.png")
|
Cap09/Mini-Projeto2/Mini-Projeto2 - Analise1.ipynb
|
dsacademybr/PythonFundamentos
|
gpl-3.0
|
Python can also be used like a calculator.
|
2 + 3
a = 2 + 3
a + 1
42 - 15.3
100 * 11
7 / 2
|
previous/notes2017/W01/GongSu03_Python_DataTypes_1.ipynb
|
liganega/Gongsu-DataSci
|
gpl-3.0
|
Note:
In Python 2, the division operator (/) computes the integer quotient when both operands are integers, and returns a float when a float is involved.
In Python 3, the division operator (/) always returns a float.
In [22]: 7 / 2
Out[22]: 3.5
|
7.0 / 2
|
previous/notes2017/W01/GongSu03_Python_DataTypes_1.ipynb
|
liganega/Gongsu-DataSci
|
gpl-3.0
|
The operator for computing the remainder is %.
|
7 % 5
|
previous/notes2017/W01/GongSu03_Python_DataTypes_1.ipynb
|
liganega/Gongsu-DataSci
|
gpl-3.0
|
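The division-related operators can be summarized in one Python 3 snippet (a small recap, not part of the original lesson):

```python
# Python 3 division-related operators side by side
q_true = 7 / 2         # true division, always a float
q_floor = 7 // 2       # floor division, the integer quotient
r = 7 % 5              # remainder
q, rem = divmod(7, 5)  # quotient and remainder in one call

print(q_true, q_floor, r, q, rem)
```

For positive operands, `a == (a // b) * b + a % b` always holds.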
Basic Data Types
Python has eight built-in data types. Four of them are simple types, and the other four are collection (container) types.
Simple types
They are simple in the sense that each handles a single value: one integer, one floating-point number, one boolean value, one string, and so on.
Integer (int)
Floating point (float)
Boolean (bool)
String (str)
Collection types
They are collection types in the sense that they bundle several values together.
List (list)
Tuple (tuple)
Set (set)
Dictionary (dictionary)
Here we introduce the simple types; the collection types are covered later.
Integer (int)
This is the type of the integers we all know (natural numbers, 0, negative integers); the usual operations such as addition, subtraction, multiplication, and division are available.
Note: the division of two integers should yield a float, as in Python 3.
To make integer division return a float in Python 2 as well, run the following command first; it may be needed for compatibility with Python 3, the latest version:
from __future__ import division
In [4]: 8 / 5
Out[4]: 1.6
After running this command, use the floor-division operator // to get the integer quotient of two integers instead:
In [5]: 8 // 5
Out[5]: 1
In [5]: 8 % 5
Out[5]: 3
Floating point (float)
Floating-point numbers were devised to handle real numbers on a computer, but in practice they can represent only a subset of the rationals.
For an irrational number such as pi, the computer's limits force us to cut it off after a suitable number of decimal places.
|
new_float = 4.0
print(new_float)
|
previous/notes2017/W01/GongSu03_Python_DataTypes_1.ipynb
|
liganega/Gongsu-DataSci
|
gpl-3.0
|
Exercise
Define a function seconds2days(n) that takes a number in seconds and converts it to a value in days. The input may be an int or a float, and the return value must be a float.
Example:
In [ ]: seconds2days(43200)
Out[ ]: 0.5
Sample solution:
|
# One day consists of this many seconds:
# one day = 24 hours * 60 minutes * 60 seconds.
daysec = 60 * 60 * 24
# Now seconds can be converted to days.
def seconds2days(sec):
""" Convert sec to days.
Mind the type conversion."""
return float(sec) / daysec
seconds2days(43200)
|
previous/notes2017/W01/GongSu03_Python_DataTypes_1.ipynb
|
liganega/Gongsu-DataSci
|
gpl-3.0
|
In Python 3 the function can simply be defined as:
def seconds2days(sec):
return sec / daysec
Exercise
Define a function box_surface(a, b, c) that computes the surface area of a rectangular box whose side lengths are a, b, and c.
For example, this is the problem of computing the amount of paint needed to paint a box.
Examples:
In [ ]: box_surface(1, 1, 1)
Out[ ]: 6
In [ ]: box_surface(2, 2, 3)
Out[ ]: 32
Sample solution:
|
def box_surface(a, b, c):
""" Return the surface area of a box whose side lengths are a, b, c.
Hint: add up the areas of the six faces."""
s1, s2, s3 = a * b, b * c, c * a
return 2 * (s1 + s2 + s3)
box_surface(1, 1, 1)
box_surface(2, 2, 3)
|
previous/notes2017/W01/GongSu03_Python_DataTypes_1.ipynb
|
liganega/Gongsu-DataSci
|
gpl-3.0
|
Define the problem
First construct the model.
|
M = 50
m = np.zeros((M, 1))
m[10:15,:] = 1.0
m[15:27,:] = -0.3
m[27:35,:] = 2.1
|
NumPy.ipynb
|
kwinkunks/axb
|
apache-2.0
|
Now we make the kernel matrix G, which represents convolution.
|
N = 20
L = 100
alpha = 0.8
x = np.arange(0, M, 1) * L/(M-1)
dx = L/(M-1)
r = np.arange(0, N, 1) * L/(N-1)
G = np.zeros((N, M))
for j in range(M):
for k in range(N):
G[k,j] = dx * np.exp(-alpha * np.abs(r[k] - x[j])**2)
plt.imshow(G, cmap='viridis', interpolation='none')
# Compute data
d = G.dot(m)
# Or, in Python 3.5
d = G @ m
|
NumPy.ipynb
|
kwinkunks/axb
|
apache-2.0
|
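As an aside, the double loop that builds G can be replaced by a single broadcast expression; a sketch using the same constants as the notebook:

```python
import numpy as np

# Same constants as in the notebook
M, N, L, alpha = 50, 20, 100, 0.8
x = np.arange(M) * L / (M - 1)
r = np.arange(N) * L / (N - 1)
dx = L / (M - 1)

# r[:, None] - x[None, :] has shape (N, M); one line replaces the loops
G_fast = dx * np.exp(-alpha * np.abs(r[:, None] - x[None, :])**2)

# Loop version, for comparison
G = np.zeros((N, M))
for j in range(M):
    for k in range(N):
        G[k, j] = dx * np.exp(-alpha * np.abs(r[k] - x[j])**2)

print(np.allclose(G, G_fast))
```

The broadcast form is both shorter and much faster for large M, N.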
Noise-free: minimum norm
|
# Minimum norm solution in Python < 3.5
m_est = G.T.dot(la.inv(G.dot(G.T)).dot(d))
d_pred = G.dot(m_est)
# Or, in Python 3.5
m_est = G.T @ la.inv(G @ G.T) @ d
d_pred = G @ m_est
plot_all(m, d, m_est, d_pred)
|
NumPy.ipynb
|
kwinkunks/axb
|
apache-2.0
|
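A numerical side note: forming the explicit inverse is usually avoided in favor of `la.solve`; a sketch on a small made-up system showing the two give the same minimum-norm solution:

```python
import numpy as np
import numpy.linalg as la

rng = np.random.default_rng(0)
G = rng.standard_normal((4, 8))   # underdetermined toy system
d = rng.standard_normal((4, 1))

# inv-based minimum-norm solution, as in the notebook
m_inv = G.T @ la.inv(G @ G.T) @ d
# Numerically preferable: solve the linear system instead of forming the inverse
m_solve = G.T @ la.solve(G @ G.T, d)

print(np.allclose(m_inv, m_solve))
```

`la.solve` avoids the extra round-off and cost of computing and multiplying by an explicit inverse.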
Solve with LAPACK
|
m_est = la.lstsq(G, d)[0]
d_pred = G.dot(m_est)
# Or, in Python 3.5
d_pred = G @ m_est
plot_all(m, d, m_est, d_pred)
|
NumPy.ipynb
|
kwinkunks/axb
|
apache-2.0
|
With noise: damped least squares
|
# Add noise.
dc = G.dot(m) # Python < 3.5
dc = G @ m # Python 3.5
# Add to the data.
s = 1
d = dc + s * np.random.random(dc.shape)
# Use the second form.
I = np.eye(N)
µ = 2.5 # We can use Unicode symbols in Python 3, just be careful
m_est = G.T.dot(la.inv(G.dot(G.T) + µ * I)).dot(d)
d_pred = G.dot(m_est)
# Or, in Python 3.5
m_est = G.T @ la.inv(G @ G.T + µ * I) @ d
d_pred = G @ m_est
plot_all(m, d, m_est, d_pred)
|
NumPy.ipynb
|
kwinkunks/axb
|
apache-2.0
|
With noise: damped least squares with first derivative regularization
|
convmtx([1, -1], 5)
W = convmtx([1,-1], M)[:,:-1] # Skip last column
|
NumPy.ipynb
|
kwinkunks/axb
|
apache-2.0
|
Now we solve:
$$ \hat{\mathbf{m}} = (\mathbf{G}^\mathrm{T} \mathbf{G} + \mu \mathbf{W}^\mathrm{T} \mathbf{W})^{-1} \mathbf{G}^\mathrm{T} \mathbf{d} $$
|
m_est = la.inv(G.T.dot(G) + µ * W.T.dot(W)).dot(G.T.dot(d))
d_pred = G.dot(m_est)
# Or, in Python 3.5
m_est = la.inv(G.T @ G + µ * W.T @ W) @ G.T @ d
d_pred = G @ m_est
plot_all(m, d, m_est, d_pred)
|
NumPy.ipynb
|
kwinkunks/axb
|
apache-2.0
|
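`convmtx` itself is not defined in this excerpt; one plausible NumPy sketch of such a convolution matrix (so that `C @ x` equals `np.convolve(h, x)`), for illustration only:

```python
import numpy as np

def convmtx(h, n):
    """Full convolution matrix C such that C @ x == np.convolve(h, x)."""
    h = np.asarray(h, dtype=float)
    C = np.zeros((n + len(h) - 1, n))
    for i, hi in enumerate(h):
        # Place each kernel tap on a shifted diagonal
        C += hi * np.eye(n + len(h) - 1, n, k=-i)
    return C

x = np.array([1.0, 2.0, 4.0, 7.0, 11.0])
C = convmtx([1, -1], 5)
print(C @ x)                    # first differences, "full" form
print(np.convolve([1, -1], x))  # same result via numpy
```

With the kernel `[1, -1]` the interior rows of C compute first differences, which is why it serves as the roughening operator W above.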
Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
|
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
words = set(text)
vocab_to_int = {word: i for i, word in enumerate(words)}
int_to_vocab = {i: word for word, i in vocab_to_int.items()}
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
|
tv-script-generation/dlnd_tv_script_generation.ipynb
|
Bismarrck/deep-learning
|
mit
|
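To see the lookup tables in action, here is a self-contained variant (re-declared so the cell runs on its own; the `sorted` call is an addition for determinism) applied to a toy script:

```python
def create_lookup_tables(text):
    words = sorted(set(text))  # sorted for a deterministic mapping
    vocab_to_int = {w: i for i, w in enumerate(words)}
    int_to_vocab = {i: w for w, i in vocab_to_int.items()}
    return vocab_to_int, int_to_vocab

script = "the cat sat on the mat".split()
v2i, i2v = create_lookup_tables(script)
encoded = [v2i[w] for w in script]
decoded = [i2v[i] for i in encoded]
print(decoded == script)  # the round trip recovers the original words
```

The round trip works because the two dictionaries are exact inverses of each other.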
Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add a delimiter (space) around each one. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word. Instead of using the token "dash", try using something like "||dash||".
|
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
puncs = {".": "Period", ",": "Comma", "\"": "QuotationMark",
";": "Semicolon", "!": "ExclamationMark", "?": "QuestionMark",
"(": "LeftParantheses", ")": "RightParantheses", "--": "Dash", "\n": "Return"
}
return {key: "||{}||".format(val) for key, val in puncs.items()}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
|
tv-script-generation/dlnd_tv_script_generation.ipynb
|
Bismarrck/deep-learning
|
mit
|
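To see why the token table helps, here is a small demo (the function is restated so the cell is self-contained; the example sentence is made up) showing how replacing each symbol with a padded token lets a plain `split()` separate punctuation from words:

```python
def token_lookup():
    puncs = {".": "Period", ",": "Comma", "\"": "QuotationMark",
             ";": "Semicolon", "!": "ExclamationMark", "?": "QuestionMark",
             "(": "LeftParentheses", ")": "RightParentheses",
             "--": "Dash", "\n": "Return"}
    return {key: "||{}||".format(val) for key, val in puncs.items()}

text = "bye! see you, moe."
# Surround each replacement with spaces so split() isolates the tokens
for symbol, token in token_lookup().items():
    text = text.replace(symbol, " {} ".format(token))
print(text.split())
```

After the replacement, "bye" and "bye!" become the two tokens `bye` and `||ExclamationMark||`, which the network can treat independently.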
Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
|
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None], name="input")
targets = tf.placeholder(tf.int32, [None, None], name="targets")
lr = tf.placeholder(tf.float32, name="learning_rate")
return inputs, targets, lr
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
|
tv-script-generation/dlnd_tv_script_generation.ipynb
|
Bismarrck/deep-learning
|
mit
|
Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
|
lstm_layers = 2
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# Stack up multiple LSTM layers; build a fresh BasicLSTMCell per layer, since
# reusing one cell object ([lstm] * lstm_layers) makes the layers share weights in newer TF versions
cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(lstm_layers)])
# Getting an initial state of all zeros
initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name="initial_state")
return cell, initial_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
|
tv-script-generation/dlnd_tv_script_generation.ipynb
|
Bismarrck/deep-learning
|
mit
|
Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
|
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
return tf.nn.embedding_lookup(embedding, input_data)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
|
tv-script-generation/dlnd_tv_script_generation.ipynb
|
Bismarrck/deep-learning
|
mit
|
Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
|
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
inputs = get_embed(input_data, vocab_size, embed_dim)  # embed_dim, not rnn_size, sets the embedding size
outputs, final_state = build_rnn(cell, inputs)
predictions = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
return predictions, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
|
tv-script-generation/dlnd_tv_script_generation.ipynb
|
Bismarrck/deep-learning
|
mit
|
Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
|
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
# TODO: Implement Function
n_batches = (len(int_text) - 1) // (batch_size * seq_length)
n_words = n_batches * batch_size * seq_length
batches = np.zeros((n_batches, 2, batch_size, seq_length), dtype=int)
x = int_text[:n_words]
# Targets are the inputs shifted by one; the very last target
# wraps around to the first input value.
y = int_text[1:n_words] + [int_text[0]]
for i in range(batch_size):
for j in range(n_batches):
k = (i * n_batches + j) * seq_length
batches[j, 0, i] = x[k: k + seq_length]
batches[j, 1, i] = y[k: k + seq_length]
return batches
# Self check
print(get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
|
tv-script-generation/dlnd_tv_script_generation.ipynb
|
Bismarrck/deep-learning
|
mit
|
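A standalone NumPy sketch of the same batching scheme, including the wrap-around of the last target, checked against the worked example above (this reshape/split formulation is one possible implementation, not the notebook's):

```python
import numpy as np

def get_batches_np(int_text, batch_size, seq_length):
    n_batches = len(int_text) // (batch_size * seq_length)
    n = n_batches * batch_size * seq_length
    xdata = np.array(int_text[:n])
    ydata = np.array(int_text[1:n] + [int_text[0]])  # last target wraps to first input
    x = xdata.reshape(batch_size, -1)
    y = ydata.reshape(batch_size, -1)
    # Pair up input/target slices, batch by batch
    return np.array(list(zip(np.split(x, n_batches, axis=1),
                             np.split(y, n_batches, axis=1))))

b = get_batches_np(list(range(1, 21)), 3, 2)
print(b[0, 0])  # first batch of inputs
print(b[2, 1])  # last batch of targets; note the trailing wrap-around value
```

Reshaping to `(batch_size, -1)` first, then splitting along the time axis, guarantees that row i of every batch continues the same slice of the text.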
Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
|
# Number of Epochs
num_epochs = 200
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 2000
# Embedding Dimension Size (this assignment is missing from the excerpt; 300 is a placeholder value)
embed_dim = 300
# Sequence Length
seq_length = 50
# Learning Rate
learning_rate = 0.0002
# Show stats for every n number of batches
show_every_n_batches = 5
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
|
tv-script-generation/dlnd_tv_script_generation.ipynb
|
Bismarrck/deep-learning
|
mit
|
Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
|
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
input_ = loaded_graph.get_tensor_by_name("input:0")
initial_state_ = loaded_graph.get_tensor_by_name("initial_state:0")
final_state_ = loaded_graph.get_tensor_by_name("final_state:0")
probs_ = loaded_graph.get_tensor_by_name("probs:0")
return input_, initial_state_, final_state_, probs_
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
|
tv-script-generation/dlnd_tv_script_generation.ipynb
|
Bismarrck/deep-learning
|
mit
|
Choose Word
Implement the pick_word() function to select the next word using probabilities.
|
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilities of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
choice = np.random.choice([int(x) for x in int_to_vocab.keys()], p=probabilities)
return int_to_vocab[choice]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
|
tv-script-generation/dlnd_tv_script_generation.ipynb
|
Bismarrck/deep-learning
|
mit
|
0.1 Directory Set up & Display Image
|
datadir = ''
objname = '2016HO3'
def plotfits(imno):
img = fits.open(datadir+objname+'_{0:02d}.fits'.format(imno))[0].data
f = plt.figure(figsize=(10,12))
im = plt.imshow(img, cmap='hot')
im = plt.imshow(img[480:560, 460:540], cmap='hot')
plt.clim(1800, 2800)
plt.colorbar(im, fraction=0.046, pad=0.04)
plt.savefig("figure{0}.png".format(imno))
plt.show()
numb = 1
plotfits(numb)
|
Sessions/Session05/Day3/Introduction to Astrometry.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
1. Centroiding on Images
Write a text file with image centers.
Write code to open each image and extract centroid position from previous exercise.
Save results in a text file.
|
centers = np.array([[502,501], [502,501]])
np.savetxt('centers.txt', centers, fmt='%i')
centers = np.loadtxt('centers.txt', dtype='int')
searchr = 5
|
Sessions/Session05/Day3/Introduction to Astrometry.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
1.1 Center of Mass
|
def cent_weight(n):
"""
Assigns centroid weights
"""
wghts = np.zeros(n)  # np.float is deprecated; the default dtype is float64
for i in range(n):
wghts[i] = float(i - n/2) + 0.5
return wghts
def calc_CoM(psf, weights):
"""
Finds Center of Mass of image
"""
cent = np.zeros(2)
temp = sum(sum(psf) - min(sum(psf)))
cent[1] = sum((sum(psf) - min(sum(psf))) * weights) / temp
cent[0] = sum((sum(psf.T) - min(sum(psf.T))) * weights) / temp
return cent
centlist = []
for i, center in enumerate(centers):
image = fits.open(datadir+objname+'_{0:02d}.fits'.format(i+1))[0].data
searchbox = image[center[0]-searchr : center[0]+searchr, center[1]-searchr : center[1]+searchr]
boxlen = len(searchbox)
weights = cent_weight(boxlen)
cen_offset = calc_CoM(searchbox, weights)
centlist.append(center + cen_offset)
print(centlist[0])
|
Sessions/Session05/Day3/Introduction to Astrometry.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
2. Identifying Stars in the Field
Ex 1. Write code to identify stars in the field.
One way to do it would be:
Create a new image using an arcsinh mapping that captures the full dynamic range effectively.
Locate lower and upper bounds that should include only stars.
Refine the parameters to optimize the extraction of stars from background.
|
no = 1
image = fits.open(datadir+objname+'_{0:02d}.fits'.format(no))[0].data
|
Sessions/Session05/Day3/Introduction to Astrometry.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
1.a. Create a new image using an arcsinh mapping that captures the full dynamic range effectively. Consider Gaussian smoothing to get rid of inhomogeneities in the image.
|
## Some functions you may want to use
import skimage.exposure as skie
from scipy.ndimage import gaussian_filter
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
|
Sessions/Session05/Day3/Introduction to Astrometry.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
1.b. Create a new image that is scaled between the lower and upper limits for displaying the star map.
Search the arcsinh-stretched original image for local maxima and catalog those brighter than a threshold that is adjusted based on the image.
|
## Consider using
import skimage.morphology as morph
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
|
Sessions/Session05/Day3/Introduction to Astrometry.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Plot image with identified stars and target
|
f = plt.figure(figsize=(10,12))
plt.imshow(opt_img, cmap='hot')
plt.colorbar(fraction=0.046, pad=0.04)
plt.scatter(x2, y2, s=80, facecolors='none', edgecolors='r')
plt.scatter(502.01468185, 501.00082137, s=80, facecolors='none', edgecolors='y' )
plt.show()
|
Sessions/Session05/Day3/Introduction to Astrometry.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
3. Converting pixel coordinates to WCS
|
def load_wcs_from_file(filename, xx, yy):
# Load the FITS hdulist using astropy.io.fits
hdulist = fits.open(filename)
# Parse the WCS keywords in the primary HDU
w = wcs.WCS(hdulist[0].header)
# Print out the "name" of the WCS, as defined in the FITS header
print(w.wcs.name)
# Print out all of the settings that were parsed from the header
w.wcs.print_contents()
# Coordinates of interest.
# Note we've silently assumed a NAXIS=2 image here
targcrd = np.array([centlist[0]], np.float_)
starscrd = np.array([xx, yy], np.float_)
# Convert pixel coordinates to world coordinates
# The second argument is "origin".
world = w.wcs_pix2world(starscrd.T, 0)
return w, world
|
Sessions/Session05/Day3/Introduction to Astrometry.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Find position of Asteroid in WCS
|
wparams, scoords = load_wcs_from_file(datadir+objname+'_{0:02d}.fits'.format(1), x2, y2)
print(scoords)
wparams, tcoords = load_wcs_from_file(datadir+objname+'_{0:02d}.fits'.format(1), np.array([centlist[0][0]]), np.array([centlist[0][1]]))
print(tcoords)
|
Sessions/Session05/Day3/Introduction to Astrometry.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
3. Matching
3.1 Get astrometric catalog
|
job = Gaia.launch_job_async("SELECT * \
FROM gaiadr1.gaia_source \
WHERE CONTAINS(POINT('ICRS',gaiadr1.gaia_source.ra,gaiadr1.gaia_source.dec),CIRCLE('ICRS', 193.34, 33.86, 0.08))=1;" \
, dump_to_file=True)
print (job)
r = job.get_results()
print (r['source_id'], r['ra'], r['dec'])
print(type(r['ra']))
|
Sessions/Session05/Day3/Introduction to Astrometry.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
3.2 Perform Match
Convert Gaia WCS coordinates to pixels
|
ra = np.array(r['ra'])
dec = np.array(r['dec'])
xpix, ypix = ### fill in one line here ###
|
Sessions/Session05/Day3/Introduction to Astrometry.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Plot Gaia stars over identified stars in image
|
f = plt.figure(figsize=(20,22))
plt.imshow(opt_img, cmap='hot')
plt.colorbar(fraction=0.046, pad=0.04)
plt.scatter(x2, y2, s=80, facecolors='none', edgecolors='r')
plt.scatter(xpix, ypix, s=80, facecolors='none', edgecolors='g')
#plt.scatter(xpix[17], ypix[17], s=80, facecolors='none', edgecolors='y')
plt.imshow(opt_img, cmap='hot')
plt.show()
|
Sessions/Session05/Day3/Introduction to Astrometry.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Ex. 2 Find the amount of shift needed.
Match catalogue stars to the identified stars and measure the amount of shift needed to overlay the field-of-view stars on the catalogue.
E.g. find the star closest to one of the Gaia stars near the center of the image and compute the magnitude of the shift; then shift all the other Gaia stars and check whether the resulting differences are small.
|
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
|
Sessions/Session05/Day3/Introduction to Astrometry.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Shift
|
targshifted = centlist[0] + np.array([xshift, yshift])
|
Sessions/Session05/Day3/Introduction to Astrometry.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Convert shifted coordinate into WCS
|
wparams, tscoords = load_wcs_from_file(datadir+objname+'_{0:02d}.fits'.format(1), np.array([targshifted[0][0][0]]), np.array([targshifted[0][0][1]]))
|
Sessions/Session05/Day3/Introduction to Astrometry.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Runtime comparison
Let's see how the runtimes of these two algorithms compare.
We expect variable elimination to outperform enumeration by a large margin as we reduce the number of repetitive calculations significantly.
|
%%timeit
enumeration_ask('Burglary', dict(JohnCalls=True, MaryCalls=True), burglary).show_approx()
%%timeit
elimination_ask('Burglary', dict(JohnCalls=True, MaryCalls=True), burglary).show_approx()
|
probability.ipynb
|
jo-tez/aima-python
|
mit
|
We observe that variable elimination was faster than enumeration, as we had expected, but the speedup is modest: only about 30%.
<br>
This happened because the Bayesian network in question is quite small, with just 5 nodes, some of which aren't even required in the inference process.
For more complicated networks, variable elimination will be significantly faster and runtime will shrink not just by a constant factor, but by a polynomial factor proportional to the number of nodes, thanks to the reduction in repeated calculations.
Approximate Inference in Bayesian Networks
Exact inference fails to scale for very large and complex Bayesian Networks. This section covers implementation of randomized sampling algorithms, also called Monte Carlo algorithms.
|
psource(BayesNode.sample)
|
probability.ipynb
|
jo-tez/aima-python
|
mit
|
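As a taste of what follows, here is a minimal rejection-sampling sketch on a made-up two-node network Rain → WetGrass (not the aima-python implementation), estimating P(Rain | WetGrass=true):

```python
import random

# Made-up probabilities for the two-node network
random.seed(0)
P_RAIN = 0.2
P_WET_GIVEN_RAIN = {True: 0.9, False: 0.1}

accepted = rain_and_wet = 0
for _ in range(100_000):
    # Sample the network top-down (prior sampling)
    rain = random.random() < P_RAIN
    wet = random.random() < P_WET_GIVEN_RAIN[rain]
    if wet:  # reject samples that contradict the evidence WetGrass=true
        accepted += 1
        rain_and_wet += rain
est = rain_and_wet / accepted
print(est)  # approaches the exact posterior 0.18 / 0.26 ≈ 0.692
```

The estimate converges to the exact posterior as the sample count grows; the cost is that most samples are thrown away when the evidence is unlikely.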
Image IO
Reading and writing images is easy. ANTsPy has some included data which we will use.
|
fname1 = ants.get_ants_data('r16')
fname2 = ants.get_ants_data('r64')
print(fname1)
img1 = ants.image_read(fname1)
img2 = ants.image_read(fname2)
print(img1)
|
tutorials/10minTutorial.ipynb
|
ANTsX/ANTsPy
|
apache-2.0
|
You can also convert numpy arrays to ANTsImage types. Here's an example of an fMRI image (an image with "components"):
|
arr_4d = np.random.randn(70,70,70,10).astype('float32')
img_fmri = ants.from_numpy(arr_4d, has_components=True)
print(img_fmri)
|
tutorials/10minTutorial.ipynb
|
ANTsX/ANTsPy
|
apache-2.0
|
Once you have an ANTsImage type, it basically acts as a numpy array:
|
# clone
img = ants.image_read(fname1)
img2 = img.clone()
# convert to numpy
img_arr = img.numpy()
# create another image with same properties but different data
img2 = img.new_image_like(img_arr*2)
# save to file
# img.to_file(...)
# many useful things:
img.median()
img.std()
img.argmin()
img.argmax()
img.flatten()
img.nonzero()
img.unique()
# do any operations directly on ANTsImage types
img3 = img2 - img
img3 = img2 > img
img3 = img2 / img
img3 = img2 == img
# change any physical properties
img4 = img.clone()
print(img4.spacing)
img4.set_spacing((1,1))
print(img4.spacing)
# test if two images are allclose in values
issame = ants.allclose(img,img2)
# test if two images have same physical space
issame_phys = ants.image_physical_space_consistency(img,img2)
|
tutorials/10minTutorial.ipynb
|
ANTsX/ANTsPy
|
apache-2.0
|
Segmentation
This module includes Atropos segmentation, Joint Label Fusion, cortical thickness estimation, and prior-based segmentation.
Atropos segmentation:
|
img = ants.image_read(ants.get_ants_data('r16'))
img = ants.resample_image(img, (64,64), 1, 0)
mask = ants.get_mask(img)
img_seg = ants.atropos(a=img, m='[0.2,1x1]', c='[2,0]',
i='kmeans[3]', x=mask)
print(img_seg.keys())
ants.plot(img_seg['segmentation'])
|
tutorials/10minTutorial.ipynb
|
ANTsX/ANTsPy
|
apache-2.0
|
Cortical thickness:
|
img = ants.image_read( ants.get_ants_data('r16') ,2)
mask = ants.get_mask( img ).threshold_image( 1, 2 )
segs=ants.atropos( a = img, m = '[0.2,1x1]', c = '[2,0]', i = 'kmeans[3]', x = mask )
thickimg = ants.kelly_kapowski(s=segs['segmentation'], g=segs['probabilityimages'][1],
w=segs['probabilityimages'][2], its=45,
r=0.5, m=1)
print(thickimg)
img.plot(overlay=thickimg, overlay_cmap='jet')
|
tutorials/10minTutorial.ipynb
|
ANTsX/ANTsPy
|
apache-2.0
|
Registration
This module includes the main ANTs registration interface, from which all registration algorithms can be run - along with various functions for evaluating registration algorithms or resampling/reorienting images or applying specific transformations to images.
SyN registration:
|
fixed = ants.image_read( ants.get_ants_data('r16') ).resample_image((64,64),1,0)
moving = ants.image_read( ants.get_ants_data('r64') ).resample_image((64,64),1,0)
fixed.plot(overlay=moving, title='Before Registration')
mytx = ants.registration(fixed=fixed , moving=moving, type_of_transform='SyN' )
print(mytx)
warped_moving = mytx['warpedmovout']
fixed.plot(overlay=warped_moving,
title='After Registration')
|
tutorials/10minTutorial.ipynb
|
ANTsX/ANTsPy
|
apache-2.0
|
You can also use the transforms output from registration and apply them directly to the image:
|
mywarpedimage = ants.apply_transforms(fixed=fixed, moving=moving,
transformlist=mytx['fwdtransforms'])
mywarpedimage.plot()
|
tutorials/10minTutorial.ipynb
|
ANTsX/ANTsPy
|
apache-2.0
|
Other utilities
N3 and N4 bias correction:
|
image = ants.image_read( ants.get_ants_data('r16') )
image_n4 = ants.n4_bias_field_correction(image)
ants.plot( image_n4 )
|
tutorials/10minTutorial.ipynb
|
ANTsX/ANTsPy
|
apache-2.0
|
<h4>Classification Overview</h4>
<ul>
<li>Predict a binary class as output based on given features.
</li>
<li>Examples: Do we need to follow up on a customer review? Is this transaction fraudulent or valid one? Are there signs of onset of a medical condition or disease? Is this considered junk food or not?</li>
<li>Linear Model. Estimated Target = w<sub>0</sub> + w<sub>1</sub>x<sub>1</sub>
+ w<sub>2</sub>x<sub>2</sub> + w<sub>3</sub>x<sub>3</sub>
+ โฆ + w<sub>n</sub>x<sub>n</sub><br>
where, w is the weight and x is the feature
</li>
<li><b>Logistic Regression</b>. Estimated Probability = <b>sigmoid</b>(w<sub>0</sub> + w<sub>1</sub>x<sub>1</sub>
+ w<sub>2</sub>x<sub>2</sub> + w<sub>3</sub>x<sub>3</sub>
+ โฆ + w<sub>n</sub>x<sub>n</sub>)<br>
where, w is the weight and x is the feature
</li>
<li>Linear model output is fed thru a sigmoid or logistic function to produce the probability.</li>
<li>Predicted Value: Probability of a binary outcome. Closer to 1 is positive class, closer to 0 is negative class</li>
<li>Algorithm Used: Logistic Regression. Objective is to find the weights w that maximizes separation between the two classes</li>
<li>Optimization: Stochastic Gradient Descent. Seeks to minimize loss/cost so that predicted value is as close to actual as possible</li>
<li>Cost/Loss Calculation: Logistic loss function</li>
</ul>
|
import math

# Sigmoid or logistic function.
# For any x, the output is bounded between 0 and 1.
def sigmoid_func(x):
    return 1.0/(1 + math.exp(-x))
sigmoid_func(10)
sigmoid_func(-100)
sigmoid_func(0)
# Sigmoid function example
x = pd.Series(np.arange(-8, 8, 0.5))
y = x.map(sigmoid_func)
x.head()
fig = plt.figure(figsize = (12, 8))
plt.plot(x,y)
plt.ylim((-0.2, 1.2))
plt.xlabel('input')
plt.ylabel('sigmoid output')
plt.grid(True)
plt.axvline(x = 0, ymin = 0, ymax = 1, ls = 'dashed')
plt.axhline(y = 0.5, xmin = 0, xmax = 10, ls = 'dashed')
plt.axhline(y = 1.0, xmin = 0, xmax = 10, color = 'r')
plt.axhline(y = 0.0, xmin = 0, xmax = 10, color = 'r')
plt.title('Sigmoid')
|
17-09-27-AWS Machine Learning A Complete Guide With Python/07 - Binary Classification/01 - ml_logistic_cost_example.ipynb
|
arcyfelix/Courses
|
apache-2.0
|
Example Dataset - Hours spent and Exam Results:
https://en.wikipedia.org/wiki/Logistic_regression
The sigmoid function produces an output between 0 and 1. An input of 0 produces an output of 0.5 probability. Negative inputs produce values less than 0.5, while positive inputs produce values greater than 0.5.
|
data_path = r'..\Data\ClassExamples\HoursExam\HoursExamResult.csv'
df = pd.read_csv(data_path)
|
17-09-27-AWS Machine Learning A Complete Guide With Python/07 - Binary Classification/01 - ml_logistic_cost_example.ipynb
|
arcyfelix/Courses
|
apache-2.0
|
Input Feature: Hours<br>
Output: Pass (1 = pass, 0 = fail)
|
df.head()
# optimal weights given in the wiki dataset
def straight_line(x):
return 1.5046 * x - 4.0777
# How does weight affect outcome
def straight_line_weight(weight1, x):
return weight1 * x - 4.0777
# Generate probability by running feature thru the linear model and then thru sigmoid function
y_vals = df.Hours.map(straight_line).map(sigmoid_func)
fig = plt.figure(figsize = (12, 8))
plt.scatter(x = df.Hours,
y = y_vals,
color = 'b',
label = 'logistic')
plt.scatter(x = df[df.Pass == 1].Hours,
y = df[df.Pass == 1].Pass,
color = 'g',
label = 'pass')
plt.scatter(x = df[df.Pass == 0].Hours,
y = df[df.Pass == 0].Pass,
color = 'r',
label = 'fail')
plt.title('Hours Spent Reading - Pass Probability')
plt.xlabel('Hours')
plt.ylabel('Pass Probability')
plt.grid(True)
plt.xlim((0,7))
plt.ylim((-0.2,1.5))
plt.axvline(x = 2.75,
ymin = 0,
ymax=1)
plt.axhline(y = 0.5,
xmin = 0,
xmax = 6,
label = 'cutoff at 0.5',
ls = 'dashed')
plt.axvline(x = 2,
ymin = 0,
ymax = 1)
plt.axhline(y = 0.3,
xmin = 0,
xmax = 6,
label = 'cutoff at 0.3',
ls = 'dashed')
plt.axvline(x = 3,
ymin = 0,
ymax=1)
plt.axhline(y = 0.6,
xmin = 0,
xmax = 6,
label='cutoff at 0.6',
ls = 'dashed')
plt.legend()
|
17-09-27-AWS Machine Learning A Complete Guide With Python/07 - Binary Classification/01 - ml_logistic_cost_example.ipynb
|
arcyfelix/Courses
|
apache-2.0
|
At 2.7 hours of study time, we hit 0.5 probability. So, any student who spent 2.7 hours or more would have a higher probability of passing the exam.
In the above example,<br>
1. Top right quadrant = true positive. pass got classified correctly as pass
2. Bottom left quadrant = true negative. fail got classified correctly as fail
3. Top left quadrant = false negative. pass got classified as fail
4. Bottom right quadrant = false positive. fail got classified as pass
The cutoff can be adjusted; instead of 0.5, it could be set at 0.4 or 0.6 depending on the nature of the problem and the impact of misclassification.
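The four quadrants above can be counted directly once a cutoff is chosen. A minimal sketch with made-up labels and probabilities (illustrative stand-ins, not the wiki dataset):

```python
# Count true/false positives and negatives at a chosen probability cutoff.
y_actual    = [1, 0, 1, 0, 1, 0]          # illustrative actual classes
p_predicted = [0.9, 0.2, 0.4, 0.7, 0.8, 0.1]  # illustrative predicted probabilities

cutoff = 0.5
tp = sum(1 for a, p in zip(y_actual, p_predicted) if a == 1 and p >= cutoff)
tn = sum(1 for a, p in zip(y_actual, p_predicted) if a == 0 and p <  cutoff)
fn = sum(1 for a, p in zip(y_actual, p_predicted) if a == 1 and p <  cutoff)
fp = sum(1 for a, p in zip(y_actual, p_predicted) if a == 0 and p >= cutoff)
print(tp, tn, fn, fp)  # -> 2 2 1 1
```

Raising the cutoff moves points from the positive predictions into the negative ones, trading false positives for false negatives.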
|
weights = [0, 1, 2]
y_at_weight = {}
for w in weights:
y_calculated = []
y_at_weight[w] = y_calculated
for x in df.Hours:
y_calculated.append(sigmoid_func(straight_line_weight(w, x)))
fig = plt.figure(figsize = (12, 8))
plt.scatter(x = df.Hours,
y = y_vals,
color = 'b',
label = 'logistic curve')
plt.scatter(x = df[df.Pass==1].Hours,
y = df[df.Pass==1].Pass,
color = 'g',
label = 'pass')
plt.scatter(x = df[df.Pass==0].Hours,
y = df[df.Pass==0].Pass,
color = 'r',
label = 'fail')
plt.scatter(x = df.Hours,
y = y_at_weight[0],
color = 'k',
label = 'at wt 0')
plt.scatter(x = df.Hours,
y = y_at_weight[1],
color = 'm',
label = 'at wt 1')
plt.scatter(x = df.Hours,
y = y_at_weight[2],
color = 'y',
label = 'at wt 2')
plt.xlim((0,8))
plt.ylim((-0.2, 1.5))
plt.axhline(y = 0.5,
xmin = 0,
xmax = 6,
color = 'b',
ls = 'dashed')
plt.axvline(x = 4,
ymin = 0,
ymax = 1,
color = 'm',
ls = 'dashed')
plt.xlabel('Hours')
plt.ylabel('Pass Probability')
plt.grid(True)
plt.title('How weights impact classification - cutoff 0.5')
plt.legend()
|
17-09-27-AWS Machine Learning A Complete Guide With Python/07 - Binary Classification/01 - ml_logistic_cost_example.ipynb
|
arcyfelix/Courses
|
apache-2.0
|
Logistic Regression Cost/Loss Function<br>
|
# Cost Function
z = pd.Series(np.linspace(0.000001, 0.999999, 100))
ypositive = -z.map(math.log)
ynegative = -z.map(lambda x: math.log(1-x))
fig = plt.figure(figsize = (12, 8))
plt.plot(z,
ypositive,
label = 'Loss curve for positive example')
plt.plot(z,
ynegative,
label = 'Loss curve for negative example')
plt.ylabel('Loss')
plt.xlabel('Predicted probability')
plt.title('Loss Curve')
plt.legend()
|
17-09-27-AWS Machine Learning A Complete Guide With Python/07 - Binary Classification/01 - ml_logistic_cost_example.ipynb
|
arcyfelix/Courses
|
apache-2.0
|
The cost function is a log curve:<br>
1. A positive example correctly classified as positive is given a lower loss/cost
2. A positive example incorrectly classified as negative is given a higher loss/cost
3. A negative example correctly classified as negative is given a lower loss/cost
4. A negative example incorrectly classified as positive is given a higher loss/cost
|
def compute_logistic_cost(y_actual, y_predicted):
    y_pos_cost = y_predicted[y_actual == 1]
    y_neg_cost = y_predicted[y_actual == 0]
    positive_cost = (-y_pos_cost.map(math.log)).sum()
    negative_cost = -y_neg_cost.map(lambda x: math.log(1 - x)).sum()
    return positive_cost + negative_cost
# Example of how prediction vs actual impacts loss
# Prediction is the exact opposite of actual. Loss/Cost should be very high
actual = pd.Series([1, 0, 1])
predicted = pd.Series([0.001, .9999, 0.0001])
print('Loss: {0:0.3f}'.format(compute_logistic_cost(actual, predicted)))
# Prediction is close to actual. Loss/Cost should be very low
y_actual = pd.Series([1, 0, 1])
y_predicted = pd.Series([0.9, 0.1, 0.8])
print('Loss: {0:0.3f}'.format(compute_logistic_cost(y_actual, y_predicted)))
# Prediction is at the midpoint. Loss/Cost should be high
y_actual = pd.Series([1, 0, 1])
y_predicted = pd.Series([0.5, 0.5, 0.5])
print('Loss: {0:0.3f}'.format(compute_logistic_cost(y_actual, y_predicted)))
# Prediction is reasonably close to actual. Loss/Cost should be moderate
y_actual = pd.Series([1, 0, 1])
y_predicted = pd.Series([0.8, 0.4, 0.7])
print('Loss: {0:0.3f}'.format(compute_logistic_cost(y_actual, y_predicted)))
weight = pd.Series(np.linspace(-1.5, 5, num = 100))
cost_at_wt = []
for w1 in weight:
    y_calculated = []
    for x in df.Hours:
        y_calculated.append(sigmoid_func(straight_line_weight(w1, x)))
    cost_at_wt.append(compute_logistic_cost(df.Pass, pd.Series(y_calculated)))
fig = plt.figure(figsize = (12, 8))
plt.scatter(x = weight, y = cost_at_wt)
plt.xlabel('Weight')
plt.ylabel('Cost')
plt.grid(True)
plt.axvline(x = 1.5,
ymin = 0,
ymax = 100,
label = 'Minimal loss')
plt.axhline(y = 6.5,
xmin = 0,
xmax = 6)
plt.title('Finding optimal weights')
plt.legend()
|
17-09-27-AWS Machine Learning A Complete Guide With Python/07 - Binary Classification/01 - ml_logistic_cost_example.ipynb
|
arcyfelix/Courses
|
apache-2.0
|
Note: in Python 3, the input() function must be used instead of raw_input().
The code above is a program that computes the square of an integer.
However, if the user enters a value other than an integer, the program crashes.
Here we cover how to deal with this.
Error examples
First, let's look at various examples of errors.
All of the code snippets below raise errors.
Example: division-by-zero error
```python
4.6/0
```
Error explanation: you cannot divide by 0.
Example: syntax error
```python
sentence = 'I am a sentence
```
Error explanation: the quotation marks around a string must come in matching pairs
* single quotes with single quotes, or double quotes with double quotes
Example: indentation syntax error
```python
for i in range(3):
    j = i * 2
        print(i, j)
```
Error explanation: lines 2 and 3 must have the same level of indentation.
Example: type error
```python
new_string = 'cat' - 'dog'
new_string = 'cat' * 'dog'
new_string = 'cat' / 'dog'
new_string = 'cat' + 3
new_string = 'cat' - 3
new_string = 'cat' / 3
```
Error explanation: for strings, only addition between two strings and multiplication of a string by an integer are defined.
Example: name error
```python
print(party)
```
Error explanation: only variables that have been defined beforehand can be used.
Example: index error
```python
a_string = 'abcdefg'
a_string[12]
```
Error explanation: an index must be smaller than the length of the string.
Example: value error
```python
int(a_string)
```
Error explanation: the int() function can only handle strings consisting entirely of digits.
Example: attribute error
```python
print(a_string.len())
```
Error explanation: the string type has no len() method.
Note: len() is a built-in function that returns the length of a string, but it is not a string method.
It can also be applied to lists, tuples, and other types covered later.
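A quick illustration that len() is a built-in function rather than a string method, and that it works on lists and tuples as well:

```python
# len() is a built-in function, not a method of the string type:
print(len('abcdefg'))   # 7
print(len([1, 2, 3]))   # 3
print(len((4, 5)))      # 2
```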
Checking errors
Running the code above raises an error, and the Python interpreter immediately reports where and what kind of error occurred.
Example
|
sentence = 'I am a sentence
|
previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb
|
liganega/Gongsu-DataSci
|
gpl-3.0
|
The error message may look quite unfamiliar the first time you see it.
Reading the message above briefly:
File "<ipython-input-37-a6097ed4dc2e>", line 1
    the error occurred on line 1
sentence = 'I am a sentence
                           ^
    marks the location where the error occurred
SyntaxError: EOL while scanning string literal
    shows the kind of error: a syntax error (SyntaxError)
Example
The example below shows the error raised when dividing by 0.
Examine the error information carefully to see what it contains.
|
a = 0
4/a
|
previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb
|
liganega/Gongsu-DataSci
|
gpl-3.0
|
Types of errors
As the examples above show, errors come in many kinds, and the longer or more complex the code, the more likely they become.
Knowing the type of an error makes it much easier to figure out where it occurred and to fix the code.
You should therefore be able to spot the cause right away, which requires being able to read the error message properly.
Here, however, we stop at the level of the examples mentioned above.
As you keep coding you will inevitably run into all kinds of errors, and there is no road to mastery other than gaining experience by checking each error's content and cause yourself.
Exception handling
If code contains a syntax error, it does not run at all.
Otherwise it starts running, and if an error occurs along the way it stops right there.
Anticipating and preparing in advance for errors that may occur mid-run is called exception handling.
Exception handling is used, for example, to save the information produced before the error occurred, to deal with the cause of the error in more detail, or to give the user more detailed information about it.
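As a minimal sketch of the idea (written without user input so it runs standalone; the notebook cells below use Python 2's raw_input):

```python
# A try ... except sketch: squaring only succeeds for integer-like strings.
def safe_square(text):
    try:
        return int(text) ** 2
    except ValueError:
        # anticipated error: text was not an integer-like string
        return None

print(safe_square('3'))    # 9
print(safe_square('3.2'))  # None
```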
Example
The code below uses the raw_input() function to read a number from the user and return its square; the code contains no syntax errors.
When you run it, a prompt asking for a number appears.
Entering the number 3 works fine,
but entering, say, 3.2 raises a value error.
|
from __future__ import print_function
number_to_square = raw_input("A number please")
# Note that number_to_square has type str (string).
# To do arithmetic with it, first convert it to int.
number = int(number_to_square)
print("The square is", number**2, ".")
|
previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb
|
liganega/Gongsu-DataSci
|
gpl-3.0
|
The reason entering 3.2 raises an error is that the int() function can only handle strings that look like integers.
We wrote a program that squares integers, but users will sometimes enter other values, and we must be prepared for that.
That is, we must anticipate that an error may occur and decide how to respond; the try ... except ... statement lets us handle such exceptions.
|
number_to_square = raw_input("A number please:")
try:
    number = int(number_to_square)
    print("The square is", number ** 2, ".")
except:
    print("You must enter an integer.")
|
previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb
|
liganega/Gongsu-DataSci
|
gpl-3.0
|
To respond differently depending on the kind of error, specify the error type in the exception handler.
The code below raises different errors depending on the input and handles each with a matching handler.
In the case of a value error (ValueError):
|
number_to_square = raw_input("A number please: ")
try:
    number = int(number_to_square)
    a = 5/(number - 4)
    print("The result is", a, ".")
except ValueError:
    print("You must enter an integer.")
except ZeroDivisionError:
    print("Please try a number other than 4.")
|
previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb
|
liganega/Gongsu-DataSci
|
gpl-3.0
|
In the case of a division-by-zero error (ZeroDivisionError):
|
number_to_square = raw_input("A number please: ")
try:
    number = int(number_to_square)
    a = 5/(number - 4)
    print("The result is", a, ".")
except ValueError:
    print("You must enter an integer.")
except ZeroDivisionError:
    print("Please try a number other than 4.")
|
previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb
|
liganega/Gongsu-DataSci
|
gpl-3.0
|
Note: implementing a program that anticipates every exception that could possibly occur is a very difficult task.
As seen above, the error type must be specified exactly.
As the next example shows, if the wrong error type is specified, the exception handling does not work.
|
try:
a = 1/0
except ValueError:
print("This program stops here.")
|
previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb
|
liganega/Gongsu-DataSci
|
gpl-3.0
|
More detailed information about errors
More detailed information about the errors Python deals with can be found on the sites below.
Python built-in exceptions reference:
https://docs.python.org/3.4/library/exceptions.html
Python exception-handling tutorial:
https://docs.python.org/3.4/tutorial/errors.html
Exercises
Exercise
The code below divides 100 by the entered value.
However, entering 0 raises a division-by-zero error (ZeroDivisionError).
|
from __future__ import print_function
number_to_square = raw_input("A number to divide 100: ")
number = int(number_to_square)
print("100 divided by the entered value is", 100/number, ".")
|
previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb
|
liganega/Gongsu-DataSci
|
gpl-3.0
|
Modify the code above so that the following conditions are met.
The division is carried out in floating point.
If a nonzero number is entered, 100 is divided by that number.
If 0 is entered, tell the user to enter a nonzero number.
If something other than a number is entered, tell the user to enter a number.
Sample solution:
|
number_to_square = raw_input("A number to divide 100: ")
try:
    number = float(number_to_square)
    print("100 divided by the entered value is", 100/number, ".")
except ZeroDivisionError:
    raise ZeroDivisionError('Please enter a nonzero number.')
except ValueError:
    raise ValueError('Please enter a number.')
|
previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb
|
liganega/Gongsu-DataSci
|
gpl-3.0
|
Define the input string:
|
data = 'hello world'
print(data)
|
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
|
rahulremanan/python_tutorial
|
mit
|
Define universe of possible input values:
|
alphabet = 'abcdefghijklmnopqrstuvwxyz '
|
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
|
rahulremanan/python_tutorial
|
mit
|
Define a mapping of characters to corresponding integers:
|
char_to_int = dict((c, i) for i, c in enumerate(alphabet))
int_to_char = dict((i, c) for i, c in enumerate(alphabet))
|
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
|
rahulremanan/python_tutorial
|
mit
|
Integer encoding of the input data:
|
integer_encoded = [char_to_int[char] for char in data]
print(integer_encoded)
|
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
|
rahulremanan/python_tutorial
|
mit
|
One hot encoding:
|
onehot_encoded = list()
for value in integer_encoded:
letter = [0 for _ in range(len(alphabet))]
letter[value] = 1
onehot_encoded.append(letter)
print(onehot_encoded)
|
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
|
rahulremanan/python_tutorial
|
mit
|
Decoding one-hot encoded data -- First character:
|
inverted = int_to_char[np.argmax(onehot_encoded[0])]
print(inverted)
|
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
|
rahulremanan/python_tutorial
|
mit
|
Decoding one-hot encoded data -- Entire one-hot encoded input:
|
decoded = list()
for i in range(len(onehot_encoded)):
decoded_char = int_to_char[np.argmax(onehot_encoded[i])]
decoded.append(decoded_char)
print (''.join([str(item) for item in decoded]))
|
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
|
rahulremanan/python_tutorial
|
mit
|
Part 02a -- One-hot encoding using scikit-learn:
Importing libraries:
|
from numpy import array
from numpy import argmax
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
|
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
|
rahulremanan/python_tutorial
|
mit
|
Define the example data:
|
data = ['cold',
'cold',
'warm',
'cold',
'hot',
'hot',
'warm',
'cold',
'warm',
'hot']
values = array(data)
print(values)
|
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
|
rahulremanan/python_tutorial
|
mit
|
Integer encoding:
|
label_encoder = LabelEncoder()
label_encoded = label_encoder.fit_transform(values)
print(label_encoded)
|
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
|
rahulremanan/python_tutorial
|
mit
|
Binary encoding:
|
onehot_encoder = OneHotEncoder(sparse=False)
label_encoded = label_encoded.reshape(len(label_encoded), 1)
onehot_encoded = onehot_encoder.fit_transform(label_encoded)
print(onehot_encoded)
|
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
|
rahulremanan/python_tutorial
|
mit
|
Invert first example:
|
inverted = label_encoder.inverse_transform([argmax(onehot_encoded[0, :])])
|
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
|
rahulremanan/python_tutorial
|
mit
|
Output the decoded example:
|
print(inverted)
|
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
|
rahulremanan/python_tutorial
|
mit
|
Part 02b -- One-hot encoding using Keras:
Importing libraries:
|
from numpy import array
from numpy import argmax
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from keras.utils import to_categorical
|
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
|
rahulremanan/python_tutorial
|
mit
|
Define the variable:
|
data = ['cold', 'cold', 'warm', 'cold', 'hot', 'hot', 'warm', 'cold', 'warm', 'hot']
values = array(data)
print(values)
|
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
|
rahulremanan/python_tutorial
|
mit
|
Integer encoding:
|
label_encoder = LabelEncoder()
label_encoded = label_encoder.fit_transform(values)
print(label_encoded)
# one hot encode
encoded = to_categorical(label_encoded)
print(encoded)
# invert encoding (inverse_transform expects an array-like, so wrap the argmax in a list)
inverted = label_encoder.inverse_transform([argmax(encoded[0])])
print(inverted)
|
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
|
rahulremanan/python_tutorial
|
mit
|
Part 02c -- One-hot encoding using Keras for numerical categories:
|
from numpy import array
from numpy import argmax
from keras.utils import to_categorical
# define example
data = [1, 3, 2, 0, 3, 2, 2, 1, 0, 1]
data = array(data)
print(data)
# one hot encode
encoded = to_categorical(data)
print(encoded)
# invert encoding
inverted = argmax(encoded[0])
print(inverted)
|
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
|
rahulremanan/python_tutorial
|
mit
|
Linear models
Linear models are useful when little data is available or for very large feature spaces as in text classification. In addition, they form a good case study for regularization.
Linear models for regression
All linear models for regression learn a coefficient parameter coef_ and an offset intercept_ to make predictions using a linear combination of features:
y_pred = x_test[0] * coef_[0] + ... + x_test[n_features-1] * coef_[n_features-1] + intercept_
The difference between the linear models for regression is what kind of restrictions or penalties are put on coef_ as regularization, in addition to fitting the training data well.
The most standard linear model is the 'ordinary least squares regression', often simply called 'linear regression'. It doesn't put any additional restrictions on coef_, so when the number of features is large, it becomes ill-posed and the model overfits.
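As a sanity check of the prediction rule above, here is a minimal sketch with made-up coefficients (illustrative numbers, not a fitted scikit-learn model):

```python
# Manually apply y = w . x + b, the rule every linear regression model uses.
coef_ = [2.0, -1.0, 0.5]       # illustrative weights
intercept_ = 3.0               # illustrative offset
x_test = [1.0, 4.0, 2.0]       # one test sample

y_pred = sum(w * x for w, x in zip(coef_, x_test)) + intercept_
print(y_pred)  # 2*1 - 1*4 + 0.5*2 + 3 = 2.0
```

For a fitted scikit-learn model, the same dot product of model.coef_ with a sample plus model.intercept_ reproduces model.predict.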
Let us generate a simple simulation, to see the behavior of these models.
|
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
X, y, true_coefficient = make_regression(n_samples=200, n_features=30, n_informative=10, noise=100, coef=True, random_state=5)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=5, train_size=60, test_size=140)
print(X_train.shape)
print(y_train.shape)
|
notebooks/17.In_Depth-Linear_Models.ipynb
|
amueller/scipy-2017-sklearn
|
cc0-1.0
|
Lasso (L1 penalty)
The Lasso estimator is useful to impose sparsity on the coefficient. In other words, it is to be prefered if we believe that many of the features are not relevant. This is done via the so-called l1 penalty.
$$ \text{min}_{w, b} \sum_i \frac{1}{2} || w^\mathsf{T}x_i + b - y_i||^2 + \alpha ||w||_1$$
|
from sklearn.linear_model import Lasso
lasso_models = {}
training_scores = []
test_scores = []
for alpha in [30, 10, 1, .01]:
lasso = Lasso(alpha=alpha).fit(X_train, y_train)
training_scores.append(lasso.score(X_train, y_train))
test_scores.append(lasso.score(X_test, y_test))
lasso_models[alpha] = lasso
plt.figure()
plt.plot(training_scores, label="training scores")
plt.plot(test_scores, label="test scores")
plt.xticks(range(4), [30, 10, 1, .01])
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
# coefficient_sorting (an argsort of the true coefficients) is defined in an earlier notebook cell
plt.plot(true_coefficient[coefficient_sorting], "o", label="true", c='b')
for i, alpha in enumerate([30, 10, 1, .01]):
plt.plot(lasso_models[alpha].coef_[coefficient_sorting], "o", label="alpha = %.2f" % alpha, c=plt.cm.summer(i / 3.))
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
# LinearRegression, Ridge, and plot_learning_curve are imported/defined in earlier notebook cells
plot_learning_curve(LinearRegression(), X, y)
plot_learning_curve(Ridge(alpha=10), X, y)
plot_learning_curve(Lasso(alpha=10), X, y)
|
notebooks/17.In_Depth-Linear_Models.ipynb
|
amueller/scipy-2017-sklearn
|
cc0-1.0
|
Similar to the Ridge/Lasso separation, you can set the penalty parameter to 'l1' to enforce sparsity of the coefficients (similar to Lasso) or 'l2' to encourage smaller coefficients (similar to Ridge).
Multi-class linear classification
|
from sklearn.datasets import make_blobs
plt.figure()
X, y = make_blobs(random_state=42)
plt.scatter(X[:, 0], X[:, 1], c=plt.cm.spectral(y / 2.));
from sklearn.svm import LinearSVC
linear_svm = LinearSVC().fit(X, y)
print(linear_svm.coef_.shape)
print(linear_svm.intercept_.shape)
plt.scatter(X[:, 0], X[:, 1], c=plt.cm.spectral(y / 2.))
line = np.linspace(-15, 15)
for coef, intercept in zip(linear_svm.coef_, linear_svm.intercept_):
plt.plot(line, -(line * coef[0] + intercept) / coef[1])
plt.ylim(-10, 15)
plt.xlim(-10, 8);
|
notebooks/17.In_Depth-Linear_Models.ipynb
|
amueller/scipy-2017-sklearn
|
cc0-1.0
|
Points are classified in a one-vs-rest fashion (aka one-vs-all), where we assign a test point to the class whose model has the highest confidence (in the SVM case, highest distance to the separating hyperplane) for the test point.
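The one-vs-rest rule can be sketched with plain lists standing in for linear_svm.coef_ and linear_svm.intercept_ (the numbers below are made up for illustration):

```python
# Score each class with its own linear model, then take the argmax.
coefs = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]   # one weight row per class
intercepts = [0.0, 0.5, -0.2]
x = [2.0, 1.0]                                    # a single test point

scores = [sum(w * xi for w, xi in zip(coef, x)) + b
          for coef, b in zip(coefs, intercepts)]
predicted_class = max(range(len(scores)), key=lambda k: scores[k])
print(scores, predicted_class)  # class 0 has the highest score here
```

This mirrors what LinearSVC.predict does internally: evaluate each class's decision function and return the class with the largest value.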
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>
Use LogisticRegression to classify the digits data set, and grid-search the C parameter.
</li>
<li>
How do you think the learning curves above change when you increase or decrease alpha?
Try changing the alpha parameter in ridge and lasso, and see if your intuition was correct.
</li>
</ul>
</div>
|
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
digits = load_digits()
X_digits, y_digits = digits.data, digits.target
# split the dataset, apply grid-search
# %load solutions/17A_logreg_grid.py
# %load solutions/17B_learning_curve_alpha.py
|
notebooks/17.In_Depth-Linear_Models.ipynb
|
amueller/scipy-2017-sklearn
|
cc0-1.0
|
You can apply the replace() method multiple times:
|
# Create a variable that stores the string 'apple'
a = 'apple'
# Create a copy of a with the ps, l, and e removed and reassign the value of a
a = a.replace('p','').replace('l','').replace('e','')
print(a)
|
winter2017/econ129/python/Econ129_Class_03_Complete.ipynb
|
letsgoexploring/teaching
|
mit
|
Now we have the tools to solve the email problem.
|
# Original character string
string = '"Carl Friedrich Gauss" <approximatelynormal@email.com>, "Leonhard Euler" <e@email.com>, "Bernhard Riemann" <zeta@email.com>'
# Remove <, >, and " from string and overwrite the result
string = string.replace('<','').replace('>','').replace('"','')
# Create a new variable called string_formatted with the commas replaced by the new line character '\n'
string_formatted = string.replace(', ','\n')
# Print string_formatted
print(string_formatted)
|
winter2017/econ129/python/Econ129_Class_03_Complete.ipynb
|
letsgoexploring/teaching
|
mit
|
A related problem might be to extract only the email address from the original string. To do this, we can use the replace() method to remove the '<', '>', and ',' characters. Then we use the split() method to break the string apart at the spaces. Then we loop over the resulting list of strings and take only the strings with '@' characters in them.
|
string = '"Carl Friedrich Gauss" <approximatelynormal@email.com>, "Leonhard Euler" <e@email.com>, "Bernhard Riemann" <zeta@email.com>'
string = string.replace('<','').replace('>','').replace('"','').replace(',','')
for s in string.split():
if '@' in s:
print(s)
|
winter2017/econ129/python/Econ129_Class_03_Complete.ipynb
|
letsgoexploring/teaching
|
mit
|
Numpy
NumPy is a powerful Python module for scientific computing. Among other things, NumPy defines an N-dimensional array object that is especially convenient to use for plotting functions and for simulating and storing time series data. NumPy also defines many useful mathematical functions like, for example, the sine, cosine, and exponential functions and has excellent functions for probability and statistics including random number generators, and many cumulative density functions and probability density functions.
Importing NumPy
The standard convention is to import NumPy so that its namespace is np. This is for the sake of brevity.
|
import numpy as np
|
winter2017/econ129/python/Econ129_Class_03_Complete.ipynb
|
letsgoexploring/teaching
|
mit
|
NumPy arrays
A NumPy ndarray is a homogeneous multidimensional array. Here, homogeneous means that all of the elements of the array have the same type. An ndarray is a table of numbers (like a matrix but with possibly more dimensions) indexed by a tuple of positive integers. The dimensions of NumPy arrays are called axes and the number of axes is called the rank. For this course, we will work almost exclusively with 1-dimensional arrays that are effectively vectors. Occasionally, we might run into a 2-dimensional array.
Basics
The most straightforward way to create a NumPy array is to call the array() function which takes as an argument a list. For example:
|
# Create a variable called a1 equal to a numpy array containing the numbers 1 through 5
a1 = np.array([1,2,3,4,5])
print(a1)
# Find the type of a1
print(type(a1))
# find the shape of a1
print(np.shape(a1))
# Use ndim to find the rank or number of dimensions of a1
print(np.ndim(a1))
# Create a variable called a2 equal to a 2-dimensional numpy array containing the numbers 1 through 4
a2 = np.array([[1,2],[3,4]])
print(a2)
# find the shape of a2
print(np.shape(a2))
# Use ndim to find the rank or number of dimensions of a2
print(np.ndim(a2))
# Create a variable called a3 equal to an empty numpy array
a3 = np.array([])
print(a3)
# find the shape of a3
print(np.shape(a3))
# Use ndim to find the rank or number of dimensions of a3
print(np.ndim(a3))
|
winter2017/econ129/python/Econ129_Class_03_Complete.ipynb
|
letsgoexploring/teaching
|
mit
|