The Score_gda column does matter to us: it tells us how reliable each entry in the dataset is. People who know biology will surely understand much more about the relevance of this parameter, but for now we keep the idea that the higher it is, the more reliable the corresponding gene-disease assignment. describe gives us some information, but one might want to extract it to look at it in more detail. Let's see how to extract the values of a column:
# In general we access columns as dataframe['column'],
# so to get Score_gda from disgenet_data we can do:
# disgenet_data['Score_gda']
#
# To get only the values inside, we can do:
score_values = disgenet_data['Score_gda'].values
print(score_values)
print(f"# of entries: {len(score_values)}")
print(f"Type: {type(score_values)}")
python/Extras/Pandas_DataFrames/braindiseases.ipynb
fifabsas/talleresfifabsas
mit
Notice that we extracted all the data and stored it in a NumPy array! This is exactly where we want to be, because in class we saw plenty of things that can be done with these arrays. For example, we can look at their distribution with a histogram:
_ = plt.hist(disgenet_data['Score_gda'].values)
We can see that a whole lot of the data points are not very reliable. Something easy we might want to do, then, is filter the data and keep only the entries with a high score, for example greater than 0.7.
# Here we use pandas' loc accessor:
# you give it a condition and it returns the rows that satisfy that condition.
disgenet_cortado = disgenet_data.loc[disgenet_data['Score_gda'] > 0.7]
disgenet_cortado
Notice how the data got trimmed! We went from 28571 entries down to just 1982. In popular jargon this is called "axing the data" or "taking a chainsaw to it". After discarding over 90% of the data, a more than reasonable question is: what survived? For that we can look, for example, at which association types survived the purge.
# Here we use the 'unique' function, so it only shows the distinct
# association types that appear, instead of repeating each one a hundred times.
print(disgenet_cortado['Association_Type'].unique())
An exercise for you: verify that we did indeed lose some association types in the data cut (while you're at it, you can check which ones we lost and tell us whether it makes sense to have lost those, because we have no idea). To finish, we can look at how the association types are distributed, i.e., which associations are more or less frequent. This is easily done with the groupby function:
disgenet_cortado.groupby("Association_Type").count()

# If we only want to see one column of the output:
disgenet_cortado.groupby("Association_Type")['Gene'].count()

# If you want to see it as a bar chart, here it is:
asoc_types = disgenet_cortado.groupby("Association_Type")['Gene'].count()
ax = asoc_types.plot.bar()

# Rotate the tick labels a bit
for tick in ax.get_xticklabels():
    tick.set_ha('right')
    tick.set_rotation(50)
2. Load data
2.1 Read data and mask arr <= 0.0
geo = gdal.Open('data\Hazard_AUS__1000.grd')
arr = geo.ReadAsArray()
arr = ma.masked_less_equal(arr, 0.0, copy=True)
ex22-Visualize GAR Global Flood Hazard Map with Python.ipynb
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
mit
2.2 Prepare coordinates
x_coords = np.arange(geo.RasterXSize)
y_coords = np.arange(geo.RasterYSize)
(upper_left_x, x_size, x_rotation,
 upper_left_y, y_rotation, y_size) = geo.GetGeoTransform()
x_coords = x_coords * x_size + upper_left_x + (x_size / 2)  # add half the cell size
y_coords = y_coords * y_size + upper_left_y + (y_size / 2)  # to centre the point
3. Visualize
fig = plt.figure(figsize=(9, 15))
ax = fig.add_subplot(1, 1, 1)

m = Basemap(projection='cyl', resolution='i',
            llcrnrlon=min(x_coords), llcrnrlat=min(y_coords),
            urcrnrlon=max(x_coords), urcrnrlat=max(y_coords))

x, y = m(*np.meshgrid(x_coords, y_coords))

#m.arcgisimage(service='World_Terrain_Base', xpixels=3500, dpi=500, verbose=True)
cs = m.contourf(x, y, arr, cmap='RdBu_r')
m.drawcoastlines()
m.drawrivers()
m.drawstates()

cb = m.colorbar(cs, pad="1%", size="3%")
plt.title('Flood Hazard 1000 years (cm)')
Distribution of vehicles by registration year
# Create a plot of the distribution of vehicles by registration year

# Saving the plot
fig.savefig("plots/Analise1/vehicle-distribution.png")
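A minimal sketch of what this exercise could look like. The DataFrame `df` and the column name `yearOfRegistration` are assumptions for illustration; in the exercise they come from the vehicles dataset loaded earlier in the notebook.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs outside a notebook
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical stand-in for the real dataset
df = pd.DataFrame({"yearOfRegistration": [1999, 2005, 2005, 2010, 2010, 2010]})

fig, ax = plt.subplots(figsize=(8, 5))
counts = df["yearOfRegistration"].value_counts().sort_index()
counts.plot.bar(ax=ax)
ax.set_xlabel("Year of registration")
ax.set_ylabel("Number of vehicles")
# fig.savefig("plots/Analise1/vehicle-distribution.png")
```

A histogram (`df["yearOfRegistration"].plot.hist()`) would also work; a bar chart of counts per year is clearer when the years are discrete.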
Cap09/Mini-Projeto2/Mini-Projeto2 - Analise1.ipynb
dsacademybr/PythonFundamentos
gpl-3.0
Price range variation by vehicle type
# Create a boxplot to inspect the outliers

# Saving the plot
fig.savefig("plots/Analise1/price-vehicleType-boxplot.png")
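A possible sketch for the boxplot, again assuming a DataFrame `df` with hypothetical `vehicleType` and `price` columns (both names are assumptions, not confirmed by the source):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs outside a notebook
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical stand-in for the real dataset
df = pd.DataFrame({
    "vehicleType": ["suv", "suv", "coupe", "coupe", "coupe"],
    "price": [20000, 25000, 15000, 90000, 16000],
})

# One box per vehicle type (groupby sorts the keys alphabetically)
groups = {t: g["price"].to_numpy() for t, g in df.groupby("vehicleType")}
fig, ax = plt.subplots(figsize=(8, 5))
ax.boxplot(list(groups.values()))
ax.set_xticklabels(list(groups.keys()))
ax.set_ylabel("price")
# fig.savefig("plots/Analise1/price-vehicleType-boxplot.png")
```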
Total count of vehicles for sale by vehicle type
# Create a count plot showing the number of vehicles in each category

# Saving the plot
g.savefig("plots/Analise1/count-vehicleType.png")
ํŒŒ์ด์ฌ์„ ๊ณ„์‚ฐ๊ธฐ์ฒ˜๋Ÿผ ํ™œ์šฉํ•  ์ˆ˜๋„ ์žˆ๋‹ค.
2 + 3
a = 2 + 3
a + 1
42 - 15.3
100 * 11
7 / 2
previous/notes2017/W01/GongSu03_Python_DataTypes_1.ipynb
liganega/Gongsu-DataSci
gpl-3.0
์ฃผ์˜: ํŒŒ์ด์ฌ2์—์„œ๋Š” ๋‚˜๋ˆ—์…ˆ ์—ฐ์‚ฐ์ž(/)๋Š” ์ •์ˆ˜ ์ž๋ฃŒํ˜•์ธ ๊ฒฝ์šฐ ๋ชซ์„ ๊ณ„์‚ฐํ•œ๋‹ค. ๋ฐ˜๋ฉด์— ๋ถ€๋™์†Œ์ˆ˜์ ์ด ์‚ฌ์šฉ๋˜๋ฉด ๋ถ€๋™์†Œ์ˆ˜์ ์„ ๋ฆฌํ„ดํ•œ๋‹ค. ํŒŒ์ด์ฌ3์—์„œ๋Š” ๋‚˜๋ˆ—์…ˆ ์—ฐ์‚ฐ์ž(/)๋Š” ๋ฌด์กฐ๊ฑด ๋ถ€๋™์†Œ์ˆ˜์ ์„ ๋ฆฌํ„ดํ•œ๋‹ค. In [22]: 7 / 2 Out[22]: 3.5
7.0 / 2
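The three division-related operators can be compared side by side (Python 3 semantics):

```python
# / is true division, // is floor division, % gives the remainder
print(7 / 2)     # 3.5
print(7 // 2)    # 3
print(7 % 2)     # 1
print(7.0 // 2)  # 3.0 -- floor division keeps the float type
```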
The operator that computes the remainder is %.
7 % 5
Basic data types
Python has eight built-in data types. Four of them are simple types, and the other four are collection types.

Simple types
They are called simple because each value holds a single item: one integer, one floating-point number, one boolean value, one string, and so on.
- integer (int)
- floating point (float)
- boolean (bool)
- string (str)

Collection types
They are called collection types because they bundle several values together into one.
- list
- tuple
- set
- dictionary

Here we introduce the simple types; the collection types are covered later.

Integers (int)
The integers as we usually know them (natural numbers, 0, negative integers) support the standard operations: addition, subtraction, multiplication, division, etc.

Note: dividing two integers yields a float. In Python 2, to make integer division behave as in Python 3, run the following command first. It may be needed for compatibility with the latest version, Python 3.

from __future__ import division

In [4]: 8 / 5
Out[4]: 1.6

After running that command, use the floor-division operator // to get the old integer quotient:

In [5]: 8 // 5
Out[5]: 1

In [6]: 8 % 5
Out[6]: 3

Floating point (float)
Floating-point numbers were developed to handle real numbers on a computer, but in practice they cover only a subset of the rationals. Even an irrational number such as pi is, due to the computer's limits, truncated at some suitable decimal place.
new_float = 4.0
print(new_float)
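The four simple types can be inspected with type():

```python
# Each literal belongs to exactly one of the four simple types
print(type(4))       # <class 'int'>
print(type(4.0))     # <class 'float'>
print(type(True))    # <class 'bool'>
print(type("four"))  # <class 'str'>
```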
Exercise
Define a function seconds2days(n) that takes a number of seconds and converts it into days. The input can be an int or a float, and the return value must be a float.

Example:

In [ ]: seconds2days(43200)
Out[ ]: 0.5

Sample solution:
# A day consists of the number of seconds below:
# one day = 24 hours * 60 minutes * 60 seconds.
daysec = 60 * 60 * 24

# Now we can convert seconds to days.
def seconds2days(sec):
    """Convert sec to days. Mind the explicit type conversion."""
    return float(sec) / daysec

seconds2days(43200)
ํŒŒ์ด์ฌ3์˜ ๊ฒฝ์šฐ์—๋Š” ์•„๋ž˜์™€ ๊ฐ™์ด ์ •์˜ํ•ด๋„ ๋œ๋‹ค. def seconds2days(sec): return (sec / daysec) ์—ฐ์Šต ๋ณ€์˜ ๊ธธ์ด๊ฐ€ ๊ฐ๊ฐ a, b, c์ธ ์ง๊ฐ์œก๋ฉด์ฒด์˜ ํ‘œ๋ฉด์ ์„ ๊ณ„์‚ฐํ•ด์ฃผ๋Š” ํ•จ์ˆ˜ box_surface(a, b, c)๋ฅผ ์ •์˜ํ•˜๋ผ. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋ฐ•์Šค๋ฅผ ํŽ˜์ธํŠธ์น ํ•˜๊ณ ์ž ํ•  ๋•Œ ํ•„์š”ํ•œ ํŽ˜์ธํŠธ์˜ ์–‘์„ ๊ณ„์‚ฐํ•˜๋Š” ๋ฌธ์ œ์ด๋‹ค. ํ™œ์šฉ ์˜ˆ: In [ ]: box_surface(1, 1, 1) Out[ ]: 6 In [ ]: box_surface(2, 2, 3) Out[ ]: 32 ๊ฒฌ๋ณธ๋‹ต์•ˆ:
def box_surface(a, b, c):
    """Return the surface area of a box whose sides have lengths a, b, and c.
    Hint: add up the areas of the six faces."""
    s1, s2, s3 = a * b, b * c, c * a
    return 2 * (s1 + s2 + s3)

box_surface(1, 1, 1)
box_surface(2, 2, 3)
Define the problem
First construct the model.
M = 50
m = np.zeros((M, 1))
m[10:15, :] = 1.0
m[15:27, :] = -0.3
m[27:35, :] = 2.1
NumPy.ipynb
kwinkunks/axb
apache-2.0
Now we make the kernel matrix G, which represents convolution.
N = 20
L = 100
alpha = 0.8

x = np.arange(0, M, 1) * L/(M-1)
dx = L/(M-1)
r = np.arange(0, N, 1) * L/(N-1)

G = np.zeros((N, M))
for j in range(M):
    for k in range(N):
        G[k, j] = dx * np.exp(-alpha * np.abs(r[k] - x[j])**2)

plt.imshow(G, cmap='viridis', interpolation='none')

# Compute data
d = G.dot(m)

# Or, in Python 3.5
d = G @ m
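The double loop can also be written with NumPy broadcasting, which is equivalent and much faster. A sketch using the same definitions:

```python
import numpy as np

M, N, L, alpha = 50, 20, 100, 0.8
x = np.arange(M) * L / (M - 1)
dx = L / (M - 1)
r = np.arange(N) * L / (N - 1)

# Loop version, as in the notebook
G_loop = np.zeros((N, M))
for j in range(M):
    for k in range(N):
        G_loop[k, j] = dx * np.exp(-alpha * np.abs(r[k] - x[j])**2)

# Broadcasting version: r[:, None] - x[None, :] is an (N, M) array of differences
G_vec = dx * np.exp(-alpha * (r[:, None] - x[None, :])**2)

print(np.allclose(G_loop, G_vec))  # True
```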
Noise-free: minimum norm
# Minimum norm solution in Python < 3.5
m_est = G.T.dot(la.inv(G.dot(G.T)).dot(d))
d_pred = G.dot(m_est)

# Or, in Python 3.5
m_est = G.T @ la.inv(G @ G.T) @ d
d_pred = G @ m_est

plot_all(m, d, m_est, d_pred)
Solve with LAPACK
m_est = la.lstsq(G, d)[0]
d_pred = G.dot(m_est)

# Or, in Python 3.5
d_pred = G @ m_est

plot_all(m, d, m_est, d_pred)
With noise: damped least squares
# Compute the noise-free data.
dc = G.dot(m)  # Python < 3.5
dc = G @ m     # Python 3.5

# Add noise to the data.
s = 1
d = dc + s * np.random.random(dc.shape)

# Use the second form.
I = np.eye(N)
µ = 2.5  # We can use Unicode symbols in Python 3, just be careful

m_est = G.T.dot(la.inv(G.dot(G.T) + µ * I)).dot(d)
d_pred = G.dot(m_est)

# Or, in Python 3.5
m_est = G.T @ la.inv(G @ G.T + µ * I) @ d
d_pred = G @ m_est

plot_all(m, d, m_est, d_pred)
With noise: damped least squares with first derivative regularization
convmtx([1, -1], 5)
W = convmtx([1, -1], M)[:, :-1]  # Skip last column
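convmtx is presumably a MATLAB-style convolution-matrix helper defined elsewhere in the notebook; an equivalent first-difference operator W can also be built directly from identity matrices, for example:

```python
import numpy as np

M = 6  # small size for illustration

# Each row of W computes v[i] - v[i+1], so W @ v is the (negated) first
# difference of v; W.T @ W then penalizes roughness in the damped inversion.
W = np.eye(M - 1, M) - np.eye(M - 1, M, k=1)

print(W @ np.ones(M))  # all zeros: a constant model is not penalized
```

Note that a constant model vector has zero penalty under this regularizer, which is exactly the point of first-derivative regularization: it discourages oscillation, not amplitude.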
Now we solve: $$ \hat{\mathbf{m}} = (\mathbf{G}^\mathrm{T} \mathbf{G} + \mu \mathbf{W}^\mathrm{T} \mathbf{W})^{-1} \mathbf{G}^\mathrm{T} \mathbf{d} $$
m_est = la.inv(G.T.dot(G) + µ * W.T.dot(W)).dot(G.T.dot(d))
d_pred = G.dot(m_est)

# Or, in Python 3.5
m_est = la.inv(G.T @ G + µ * W.T @ W) @ G.T @ d
d_pred = G @ m_est

plot_all(m, d, m_est, d_pred)
Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation

Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab

Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
import numpy as np
import problem_unittests as tests

def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    words = list(set(text))
    vocab_to_int = {}
    int_to_vocab = {}
    for i, word in enumerate(words):
        vocab_to_int[word] = i
        int_to_vocab[i] = word
    return vocab_to_int, int_to_vocab

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
tv-script-generation/dlnd_tv_script_generation.ipynb
Bismarrck/deep-learning
mit
Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".

Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )

This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
    puncs = {".": "Period",
             ",": "Comma",
             "\"": "QuotationMark",
             ";": "Semicolon",
             "!": "ExclamationMark",
             "?": "QuestionMark",
             "(": "LeftParentheses",
             ")": "RightParentheses",
             "--": "Dash",
             "\n": "Return"}
    return {key: "||{}||".format(val) for key, val in puncs.items()}

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder

Return the placeholders in the following tuple (Input, Targets, LearningRate)
def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
    inputs = tf.placeholder(tf.int32, [None, None], name="input")
    targets = tf.placeholder(tf.int32, [None, None], name="targets")
    lr = tf.placeholder(tf.float32, name="learning_rate")
    return inputs, targets, lr

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize the cell state using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()

Return the cell and initial state in the following tuple (Cell, InitialState)
lstm_layers = 2

def get_init_cell(batch_size, rnn_size):
    """
    Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :return: Tuple (cell, initialize state)
    """
    lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)

    # Stack up multiple LSTM layers, for deep learning.
    # Note: in TensorFlow >= 1.2 each layer needs its own cell instance,
    # e.g. [tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(lstm_layers)].
    cell = tf.contrib.rnn.MultiRNNCell([lstm] * lstm_layers)

    # Getting an initial state of all zeros
    initial_state = tf.identity(cell.zero_state(batch_size, tf.float32),
                                name="initial_state")
    return cell, initial_state

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
Word Embedding Apply embedding to input_data using TensorFlow. Return the embedded sequence.
def get_embed(input_data, vocab_size, embed_dim):
    """
    Create embedding for <input_data>.
    :param input_data: TF placeholder for text input.
    :param vocab_size: Number of words in vocabulary.
    :param embed_dim: Number of embedding dimensions
    :return: Embedded input.
    """
    embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
    return tf.nn.embedding_lookup(embedding, input_data)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.

Return the logits and final state in the following tuple (Logits, FinalState)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
    """
    Build part of the neural network
    :param cell: RNN cell
    :param rnn_size: Size of rnns
    :param input_data: Input data
    :param vocab_size: Vocabulary size
    :param embed_dim: Number of embedding dimensions
    :return: Tuple (Logits, FinalState)
    """
    # Use embed_dim (not rnn_size) as the embedding size
    inputs = get_embed(input_data, vocab_size, embed_dim)
    outputs, final_state = build_rnn(cell, inputs)
    logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
    return logits, final_state

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]

If you can't fill the last batch with enough data, drop the last batch.

For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
  # First Batch
  [
    # Batch of Input
    [[ 1  2], [ 7  8], [13 14]]
    # Batch of targets
    [[ 2  3], [ 8  9], [14 15]]
  ]

  # Second Batch
  [
    # Batch of Input
    [[ 3  4], [ 9 10], [15 16]]
    # Batch of targets
    [[ 4  5], [10 11], [16 17]]
  ]

  # Third Batch
  [
    # Batch of Input
    [[ 5  6], [11 12], [17 18]]
    # Batch of targets
    [[ 6  7], [12 13], [18  1]]
  ]
]
```
Notice that the last target value in the last batch is the first input value of the first batch, in this case 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
def get_batches(int_text, batch_size, seq_length):
    """
    Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array
    """
    n_batches = (len(int_text) - 1) // (batch_size * seq_length)
    batches = np.zeros((n_batches, 2, batch_size, seq_length))

    # Keep only the words that fill complete batches, then shift the targets
    # by one; np.roll wraps the final target around to the first input value.
    n_words = n_batches * batch_size * seq_length
    x = np.array(int_text[:n_words])
    y = np.roll(x, -1)

    for i in range(batch_size):
        for j in range(n_batches):
            k = (i * n_batches + j) * seq_length
            batches[j, 0, i] = x[k: k + seq_length]
            batches[j, 1, i] = y[k: k + seq_length]
    return batches

# Self check
print(get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3))

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
Neural Network Training
Hyperparameters
Tune the following parameters:
- Set num_epochs to the number of epochs.
- Set batch_size to the batch size.
- Set rnn_size to the size of the RNNs.
- Set embed_dim to the size of the embedding.
- Set seq_length to the length of sequence.
- Set learning_rate to the learning rate.
- Set show_every_n_batches to the number of batches the neural network should print progress.
# Number of Epochs
num_epochs = 200
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 2000
# Embedding Dimension Size (missing from the original cell; 300 is an
# illustrative value, not the author's choice)
embed_dim = 300
# Sequence Length
seq_length = 50
# Learning Rate
learning_rate = 0.0002
# Show stats for every n number of batches
show_every_n_batches = 5

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"

Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
def get_tensors(loaded_graph):
    """
    Get input, initial state, final state, and probabilities tensor from <loaded_graph>
    :param loaded_graph: TensorFlow graph loaded from file
    :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
    """
    input_ = loaded_graph.get_tensor_by_name("input:0")
    initial_state_ = loaded_graph.get_tensor_by_name("initial_state:0")
    final_state_ = loaded_graph.get_tensor_by_name("final_state:0")
    probs_ = loaded_graph.get_tensor_by_name("probs:0")
    return input_, initial_state_, final_state_, probs_

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
Choose Word Implement the pick_word() function to select the next word using probabilities.
def pick_word(probabilities, int_to_vocab):
    """
    Pick the next word in the generated text
    :param probabilities: Probabilities of the next word
    :param int_to_vocab: Dictionary of word ids as the keys and words as the values
    :return: String of the predicted word
    """
    choice = np.random.choice([int(x) for x in int_to_vocab.keys()], p=probabilities)
    return int_to_vocab[choice]

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
0.1 Directory Set up & Display Image
datadir = ''
objname = '2016HO3'

def plotfits(imno):
    img = fits.open(datadir + objname + '_{0:02d}.fits'.format(imno))[0].data
    f = plt.figure(figsize=(10, 12))
    im = plt.imshow(img, cmap='hot')
    im = plt.imshow(img[480:560, 460:540], cmap='hot')
    plt.clim(1800, 2800)
    plt.colorbar(im, fraction=0.046, pad=0.04)
    plt.savefig("figure{0}.png".format(imno))
    plt.show()

numb = 1
plotfits(numb)
Sessions/Session05/Day3/Introduction to Astrometry.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
1. Centroiding on Images
Write a text file with image centers. Write code to open each image and extract the centroid position from the previous exercise. Save the results in a text file.
centers = np.array([[502, 501], [502, 501]])
np.savetxt('centers.txt', centers, fmt='%i')
centers = np.loadtxt('centers.txt', dtype='int')
searchr = 5
1.1 Center of Mass
def cent_weight(n):
    """Assign centroid weights."""
    wghts = np.zeros(n, dtype=float)  # np.float is removed in modern NumPy
    for i in range(n):
        wghts[i] = float(i - n/2) + 0.5
    return wghts

def calc_CoM(psf, weights):
    """Find the center of mass of an image."""
    cent = np.zeros(2, dtype=float)
    temp = sum(sum(psf) - min(sum(psf)))
    cent[1] = sum((sum(psf) - min(sum(psf))) * weights) / temp
    cent[0] = sum((sum(psf.T) - min(sum(psf.T))) * weights) / temp
    return cent

centlist = []
for i, center in enumerate(centers):
    image = fits.open(datadir + objname + '_{0:02d}.fits'.format(i+1))[0].data
    searchbox = image[center[0]-searchr : center[0]+searchr,
                      center[1]-searchr : center[1]+searchr]
    boxlen = len(searchbox)
    weights = cent_weight(boxlen)
    cen_offset = calc_CoM(searchbox, weights)
    centlist.append(center + cen_offset)

print(centlist[0])
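The background-subtracted marginal sums above implement an intensity-weighted center of mass. The same idea can be written more directly with np.indices; a sketch on a synthetic PSF, not the notebook's data:

```python
import numpy as np

def centroid(img):
    """Intensity-weighted center of mass of a 2-D array, in (row, col)."""
    yy, xx = np.indices(img.shape)
    total = img.sum()
    return np.array([(yy * img).sum() / total, (xx * img).sum() / total])

# Synthetic Gaussian PSF centered at row 12, column 20
yy, xx = np.indices((32, 40))
psf = np.exp(-((yy - 12)**2 + (xx - 20)**2) / (2 * 2.0**2))

print(centroid(psf))  # close to [12, 20]
```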
2. Identifying Stars in the Field
Ex 1. Write code to identify stars in the field. One way to do it would be:
- Create a new image using an arcsinh mapping that captures the full dynamic range effectively.
- Locate lower and upper bounds that should include only stars.
- Refine the parameters to optimize the extraction of stars from the background.
no = 1
image = fits.open(datadir + objname + '_{0:02d}.fits'.format(no))[0].data
1.a. Create a new image using an arcsinh mapping that captures the full dynamic range effectively. Consider Gaussian smoothing to get rid of inhomogeneities in the image.
## Some functions you may want to use
import skimage.exposure as skie
from scipy.ndimage import gaussian_filter

### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
1.b. Create a new image that is scaled between the lower and upper limits for displaying the star map. Search the arcsinh-stretched original image for local maxima and catalog those brighter than a threshold that is adjusted based on the image.
## Consider using
import skimage.morphology as morph

### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
Plot image with identified stars and target
f = plt.figure(figsize=(10, 12))
plt.imshow(opt_img, cmap='hot')
plt.colorbar(fraction=0.046, pad=0.04)
plt.scatter(x2, y2, s=80, facecolors='none', edgecolors='r')
plt.scatter(502.01468185, 501.00082137, s=80, facecolors='none', edgecolors='y')
plt.show()
3. Converting pixel coordinates to WCS
def load_wcs_from_file(filename, xx, yy):
    # Load the FITS hdulist using astropy.io.fits
    hdulist = fits.open(filename)

    # Parse the WCS keywords in the primary HDU
    w = wcs.WCS(hdulist[0].header)

    # Print out the "name" of the WCS, as defined in the FITS header
    print(w.wcs.name)

    # Print out all of the settings that were parsed from the header
    w.wcs.print_contents()

    # Coordinates of interest.
    # Note we've silently assumed a NAXIS=2 image here.
    # (np.float_ is removed in NumPy 2.0; plain float is equivalent.)
    targcrd = np.array([centlist[0]], float)
    starscrd = np.array([xx, yy], float)

    # Convert pixel coordinates to world coordinates.
    # The second argument is "origin".
    world = w.wcs_pix2world(starscrd.T, 0)
    return w, world
Find position of Asteroid in WCS
wparams, scoords = load_wcs_from_file(
    datadir + objname + '_{0:02d}.fits'.format(1), x2, y2)
print(scoords)

wparams, tcoords = load_wcs_from_file(
    datadir + objname + '_{0:02d}.fits'.format(1),
    np.array([centlist[0][0]]), np.array([centlist[0][1]]))
print(tcoords)
3. Matching
3.1 Get astrometric catalog
job = Gaia.launch_job_async(
    "SELECT * "
    "FROM gaiadr1.gaia_source "
    "WHERE CONTAINS(POINT('ICRS', gaiadr1.gaia_source.ra, gaiadr1.gaia_source.dec), "
    "CIRCLE('ICRS', 193.34, 33.86, 0.08))=1;",
    dump_to_file=True)
print(job)

r = job.get_results()
print(r['source_id'], r['ra'], r['dec'])
print(type(r['ra']))
3.2 Perform Match Convert Gaia WCS coordinates to pixels
ra = np.array(r['ra'])
dec = np.array(r['dec'])
xpix, ypix = ### fill in one line here ###
Plot Gaia stars over identified stars in image
f = plt.figure(figsize=(20, 22))
plt.imshow(opt_img, cmap='hot')
plt.colorbar(fraction=0.046, pad=0.04)
plt.scatter(x2, y2, s=80, facecolors='none', edgecolors='r')
plt.scatter(xpix, ypix, s=80, facecolors='none', edgecolors='g')
#plt.scatter(xpix[17], ypix[17], s=80, facecolors='none', edgecolors='y')
plt.imshow(opt_img, cmap='hot')
plt.show()
Ex. 2 Find the amount of shift needed. Match catalogue stars to identified stars and measure the amount of shift needed to overlay the FoV stars onto the catalogue. E.g., find the closest detected star to one of the Gaia stars near the center of the image and find the magnitude of the shift; then shift all the other Gaia stars and check whether the resulting difference is small.
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
### code here ###
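One possible approach, sketched on synthetic coordinates. The names x2, y2, xpix, ypix follow the notebook, but the values here are made up for illustration:

```python
import numpy as np

# Synthetic stand-ins: (x2, y2) are detected star pixel positions and
# (xpix, ypix) the Gaia catalogue positions projected to pixels.
x2 = np.array([100.0, 400.0, 700.0, 250.0, 850.0])
y2 = np.array([120.0, 500.0, 300.0, 800.0, 650.0])
true_shift = np.array([3.2, -1.5])
xpix = x2 - true_shift[0]
ypix = y2 - true_shift[1]

# For each Gaia star, find the nearest detected star, then take the median
# offset over all pairs as a robust estimate of the shift.
d2 = (x2[:, None] - xpix[None, :])**2 + (y2[:, None] - ypix[None, :])**2
nearest = d2.argmin(axis=0)
xshift = np.median(x2[nearest] - xpix)
yshift = np.median(y2[nearest] - ypix)
print(xshift, yshift)  # approximately 3.2 and -1.5
```

Nearest-neighbour matching works as long as the shift is much smaller than the typical separation between stars; the median makes the estimate robust to the occasional mismatch.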
Shift
targshifted = centlist[0] + np.array([xshift, yshift])
Convert shifted coordinate into WCS
wparams, tscoords = load_wcs_from_file(
    datadir + objname + '_{0:02d}.fits'.format(1),
    np.array([targshifted[0][0][0]]),
    np.array([targshifted[0][0][1]]))
Runtime comparison
Let's see how the runtimes of these two algorithms compare. We expect variable elimination to outperform enumeration by a large margin, since it significantly reduces the number of repetitive calculations.
%%timeit
enumeration_ask('Burglary', dict(JohnCalls=True, MaryCalls=True), burglary).show_approx()

# In a separate cell (a %%timeit cell magic must be the first line of its cell):
%%timeit
elimination_ask('Burglary', dict(JohnCalls=True, MaryCalls=True), burglary).show_approx()
probability.ipynb
jo-tez/aima-python
mit
We observe that variable elimination was faster than enumeration, as we had expected, but the gain in speed is not large; in fact, it is only about 30% faster. This happened because the Bayesian network in question is pretty small, with just 5 nodes, some of which aren't even required in the inference process. For more complicated networks, variable elimination will be significantly faster, and runtime will reduce not just by a constant factor but by a polynomial factor proportional to the number of nodes, due to the reduction in repeated calculations.

Approximate Inference in Bayesian Networks
Exact inference fails to scale for very large and complex Bayesian networks. This section covers the implementation of randomized sampling algorithms, also called Monte Carlo algorithms.
psource(BayesNode.sample)
Image IO
Reading and writing images is easy. ANTsPy has some included data which we will use.
fname1 = ants.get_ants_data('r16')
fname2 = ants.get_ants_data('r64')
print(fname1)

img1 = ants.image_read(fname1)
img2 = ants.image_read(fname2)
print(img1)
tutorials/10minTutorial.ipynb
ANTsX/ANTsPy
apache-2.0
You can also convert numpy arrays to ANTsImage types. Here's an example of an fMRI image (an image with "components"):
arr_4d = np.random.randn(70,70,70,10).astype('float32')
img_fmri = ants.from_numpy(arr_4d, has_components=True)
print(img_fmri)
tutorials/10minTutorial.ipynb
ANTsX/ANTsPy
apache-2.0
Once you have an ANTsImage type, it basically acts as a numpy array:
# clone
img = ants.image_read(fname1)
img2 = img.clone()

# convert to numpy
img_arr = img.numpy()

# create another image with same properties but different data
img2 = img.new_image_like(img_arr*2)

# save to file
# img.to_file(...)

# many useful things:
img.median()
img.std()
img.argmin()
img.argmax()
img.flatten()
img.nonzero()
img.unique()

# do any operations directly on ANTsImage types
img3 = img2 - img
img3 = img2 > img
img3 = img2 / img
img3 = img2 == img

# change any physical properties
img4 = img.clone()
print(img4.spacing)
img4.set_spacing((1,1))
print(img4.spacing)

# test if two images are allclose in values
issame = ants.allclose(img, img2)

# test if two images have same physical space
issame_phys = ants.image_physical_space_consistency(img, img2)
tutorials/10minTutorial.ipynb
ANTsX/ANTsPy
apache-2.0
Segmentation

This module includes Atropos segmentation, Joint Label Fusion, cortical thickness estimation, and prior-based segmentation.

Atropos segmentation:
img = ants.image_read(ants.get_ants_data('r16'))
img = ants.resample_image(img, (64,64), 1, 0)
mask = ants.get_mask(img)
img_seg = ants.atropos(a=img, m='[0.2,1x1]', c='[2,0]', i='kmeans[3]', x=mask)
print(img_seg.keys())
ants.plot(img_seg['segmentation'])
tutorials/10minTutorial.ipynb
ANTsX/ANTsPy
apache-2.0
Cortical thickness:
img = ants.image_read(ants.get_ants_data('r16'), 2)
mask = ants.get_mask(img).threshold_image(1, 2)
segs = ants.atropos(a=img, m='[0.2,1x1]', c='[2,0]', i='kmeans[3]', x=mask)
thickimg = ants.kelly_kapowski(s=segs['segmentation'],
                               g=segs['probabilityimages'][1],
                               w=segs['probabilityimages'][2],
                               its=45, r=0.5, m=1)
print(thickimg)
img.plot(overlay=thickimg, overlay_cmap='jet')
tutorials/10minTutorial.ipynb
ANTsX/ANTsPy
apache-2.0
Registration

This module includes the main ANTs registration interface, from which all registration algorithms can be run, along with various functions for evaluating registration algorithms or resampling/reorienting images or applying specific transformations to images.

SyN registration:
fixed = ants.image_read(ants.get_ants_data('r16')).resample_image((64,64), 1, 0)
moving = ants.image_read(ants.get_ants_data('r64')).resample_image((64,64), 1, 0)
fixed.plot(overlay=moving, title='Before Registration')

mytx = ants.registration(fixed=fixed, moving=moving, type_of_transform='SyN')
print(mytx)

warped_moving = mytx['warpedmovout']
fixed.plot(overlay=warped_moving, title='After Registration')
tutorials/10minTutorial.ipynb
ANTsX/ANTsPy
apache-2.0
You can also use the transforms output from registration and apply them directly to the image:
mywarpedimage = ants.apply_transforms(fixed=fixed, moving=moving,
                                      transformlist=mytx['fwdtransforms'])
mywarpedimage.plot()
tutorials/10minTutorial.ipynb
ANTsX/ANTsPy
apache-2.0
Other utilities

N3 and N4 bias correction:
image = ants.image_read(ants.get_ants_data('r16'))
image_n4 = ants.n4_bias_field_correction(image)
ants.plot(image_n4)
tutorials/10minTutorial.ipynb
ANTsX/ANTsPy
apache-2.0
<h4>Classification Overview</h4>
<ul>
<li>Predict a binary class as output based on given features.</li>
<li>Examples: Do we need to follow up on a customer review? Is this transaction fraudulent or a valid one? Are there signs of onset of a medical condition or disease? Is this considered junk food or not?</li>
<li>Linear Model. Estimated Target = w<sub>0</sub> + w<sub>1</sub>x<sub>1</sub> + w<sub>2</sub>x<sub>2</sub> + w<sub>3</sub>x<sub>3</sub> + &hellip; + w<sub>n</sub>x<sub>n</sub><br> where w is the weight and x is the feature</li>
<li><b>Logistic Regression</b>. Estimated Probability = <b>sigmoid</b>(w<sub>0</sub> + w<sub>1</sub>x<sub>1</sub> + w<sub>2</sub>x<sub>2</sub> + w<sub>3</sub>x<sub>3</sub> + &hellip; + w<sub>n</sub>x<sub>n</sub>)<br> where w is the weight and x is the feature</li>
<li>The linear model output is fed through a sigmoid or logistic function to produce the probability.</li>
<li>Predicted Value: Probability of a binary outcome. Closer to 1 is the positive class, closer to 0 is the negative class.</li>
<li>Algorithm Used: Logistic Regression. The objective is to find the weights w that maximize separation between the two classes.</li>
<li>Optimization: Stochastic Gradient Descent. Seeks to minimize loss/cost so that the predicted value is as close to the actual as possible.</li>
<li>Cost/Loss Calculation: Logistic loss function.</li>
</ul>
# Sigmoid or logistic function
# For any x, output is bounded between 0 and 1.
def sigmoid_func(x):
    return 1.0/(1 + math.exp(-x))

sigmoid_func(10)
sigmoid_func(-100)
sigmoid_func(0)

# Sigmoid function example
x = pd.Series(np.arange(-8, 8, 0.5))
y = x.map(sigmoid_func)
x.head()

fig = plt.figure(figsize=(12, 8))
plt.plot(x, y)
plt.ylim((-0.2, 1.2))
plt.xlabel('input')
plt.ylabel('sigmoid output')
plt.grid(True)
plt.axvline(x=0, ymin=0, ymax=1, ls='dashed')
plt.axhline(y=0.5, xmin=0, xmax=10, ls='dashed')
plt.axhline(y=1.0, xmin=0, xmax=10, color='r')
plt.axhline(y=0.0, xmin=0, xmax=10, color='r')
plt.title('Sigmoid')
17-09-27-AWS Machine Learning A Complete Guide With Python/07 - Binary Classification/01 - ml_logistic_cost_example.ipynb
arcyfelix/Courses
apache-2.0
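As a rough illustration of the optimization step listed above (this is a sketch, not the notebook's code), gradient descent on the logistic loss can be run by hand for a single feature. The hours/pass data below is made up for the illustration:

```python
# A minimal sketch of (stochastic) gradient descent on the logistic
# loss for one feature; the hours/pass sample data is made up.
import math

hours = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0]
passed = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]

w, b, lr = 0.0, 0.0, 0.1
for _ in range(5000):
    for x, t in zip(hours, passed):
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        # gradient of the logistic loss with respect to w and b
        w -= lr * (p - t) * x
        b -= lr * (p - t)

print(w, b)
```

After training, the fitted sigmoid should assign a probability above 0.5 to long study times and below 0.5 to short ones, which is exactly the separation the loss function rewards.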
Example Dataset - Hours Spent and Exam Results: https://en.wikipedia.org/wiki/Logistic_regression

The sigmoid function produces an output between 0 and 1. Input close to 0 produces an output of 0.5 probability. Negative input produces a value less than 0.5, while positive input produces a value greater than 0.5.
data_path = r'..\Data\ClassExamples\HoursExam\HoursExamResult.csv'
df = pd.read_csv(data_path)
17-09-27-AWS Machine Learning A Complete Guide With Python/07 - Binary Classification/01 - ml_logistic_cost_example.ipynb
arcyfelix/Courses
apache-2.0
Input Feature: Hours<br> Output: Pass (1 = pass, 0 = fail)
df.head()

# optimal weights given in the wiki dataset
def straight_line(x):
    return 1.5046 * x - 4.0777

# How does weight affect outcome
def straight_line_weight(weight1, x):
    return weight1 * x - 4.0777

# Generate probability by running the feature through the linear model and then through the sigmoid function
y_vals = df.Hours.map(straight_line).map(sigmoid_func)

fig = plt.figure(figsize=(12, 8))
plt.scatter(x=df.Hours, y=y_vals, color='b', label='logistic')
plt.scatter(x=df[df.Pass == 1].Hours, y=df[df.Pass == 1].Pass, color='g', label='pass')
plt.scatter(x=df[df.Pass == 0].Hours, y=df[df.Pass == 0].Pass, color='r', label='fail')
plt.title('Hours Spent Reading - Pass Probability')
plt.xlabel('Hours')
plt.ylabel('Pass Probability')
plt.grid(True)
plt.xlim((0, 7))
plt.ylim((-0.2, 1.5))
plt.axvline(x=2.75, ymin=0, ymax=1)
plt.axhline(y=0.5, xmin=0, xmax=6, label='cutoff at 0.5', ls='dashed')
plt.axvline(x=2, ymin=0, ymax=1)
plt.axhline(y=0.3, xmin=0, xmax=6, label='cutoff at 0.3', ls='dashed')
plt.axvline(x=3, ymin=0, ymax=1)
plt.axhline(y=0.6, xmin=0, xmax=6, label='cutoff at 0.6', ls='dashed')
plt.legend()
17-09-27-AWS Machine Learning A Complete Guide With Python/07 - Binary Classification/01 - ml_logistic_cost_example.ipynb
arcyfelix/Courses
apache-2.0
At about 2.7 hours of study time, we hit 0.5 probability. So any student who spent 2.7 hours or more has a higher probability of passing the exam.

In the above example:
1. Top right quadrant = true positive: pass classified correctly as pass
2. Bottom left quadrant = true negative: fail classified correctly as fail
3. Top left quadrant = false negative: pass classified as fail
4. Bottom right quadrant = false positive: fail classified as pass

The cutoff can be adjusted; instead of 0.5, the cutoff could be set at 0.4 or 0.6 depending on the nature of the problem and the impact of misclassification.
weights = [0, 1, 2]
y_at_weight = {}
for w in weights:
    y_calculated = []
    y_at_weight[w] = y_calculated
    for x in df.Hours:
        y_calculated.append(sigmoid_func(straight_line_weight(w, x)))

y_sig_vals = y_vals.map(sigmoid_func)

fig = plt.figure(figsize=(12, 8))
plt.scatter(x=df.Hours, y=y_vals, color='b', label='logistic curve')
plt.scatter(x=df[df.Pass == 1].Hours, y=df[df.Pass == 1].Pass, color='g', label='pass')
plt.scatter(x=df[df.Pass == 0].Hours, y=df[df.Pass == 0].Pass, color='r', label='fail')
plt.scatter(x=df.Hours, y=y_at_weight[0], color='k', label='at wt 0')
plt.scatter(x=df.Hours, y=y_at_weight[1], color='m', label='at wt 1')
plt.scatter(x=df.Hours, y=y_at_weight[2], color='y', label='at wt 2')
plt.xlim((0, 8))
plt.ylim((-0.2, 1.5))
plt.axhline(y=0.5, xmin=0, xmax=6, color='b', ls='dashed')
plt.axvline(x=4, ymin=0, ymax=1, color='m', ls='dashed')
plt.xlabel('Hours')
plt.ylabel('Pass Probability')
plt.grid(True)
plt.title('How weights impact classification - cutoff 0.5')
plt.legend()
17-09-27-AWS Machine Learning A Complete Guide With Python/07 - Binary Classification/01 - ml_logistic_cost_example.ipynb
arcyfelix/Courses
apache-2.0
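The quadrant counts described above can be computed directly. The sketch below (a made-up helper on a small synthetic hours/pass sample, not the notebook's data) shows how moving the cutoff shifts the confusion-matrix counts:

```python
# A minimal sketch of confusion-matrix counts at different cutoffs,
# using a made-up hours/pass sample and the wiki-fitted line above.
import numpy as np

hours = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
passed = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# probabilities from the fitted line w=1.5046, b=-4.0777
prob = sigmoid(1.5046 * hours - 4.0777)

def confusion_counts(prob, actual, cutoff):
    pred = (prob >= cutoff).astype(int)
    tp = int(np.sum((pred == 1) & (actual == 1)))  # pass classified pass
    tn = int(np.sum((pred == 0) & (actual == 0)))  # fail classified fail
    fp = int(np.sum((pred == 1) & (actual == 0)))  # fail classified pass
    fn = int(np.sum((pred == 0) & (actual == 1)))  # pass classified fail
    return tp, tn, fp, fn

for cutoff in (0.3, 0.5, 0.6):
    print(cutoff, confusion_counts(prob, passed, cutoff))
```

Lowering the cutoff trades false negatives for false positives, which is the lever discussed above for tuning the classifier to the cost of each kind of mistake.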
Logistic Regression Cost/Loss Function<br>
# Cost Function
z = pd.Series(np.linspace(0.000001, 0.999999, 100))
ypositive = -z.map(math.log)
ynegative = -z.map(lambda x: math.log(1-x))

fig = plt.figure(figsize=(12, 8))
plt.plot(z, ypositive, label='Loss curve for positive example')
plt.plot(z, ynegative, label='Loss curve for negative example')
plt.ylabel('Loss')
plt.xlabel('Predicted probability')
plt.title('Loss Curve')
plt.legend()
17-09-27-AWS Machine Learning A Complete Guide With Python/07 - Binary Classification/01 - ml_logistic_cost_example.ipynb
arcyfelix/Courses
apache-2.0
The cost function is a log curve:
1. A positive example correctly classified as positive is given a lower loss/cost
2. A positive example incorrectly classified as negative is given a higher loss/cost
3. A negative example correctly classified as negative is given a lower loss/cost
4. A negative example incorrectly classified as positive is given a higher loss/cost
def compute_logistic_cost(y_actual, y_predicted):
    y_pos_cost = y_predicted[y_actual == 1]
    y_neg_cost = y_predicted[y_actual == 0]
    positive_cost = (-y_pos_cost.map(math.log)).sum()
    negative_cost = -y_neg_cost.map(lambda x: math.log(1 - x)).sum()
    return positive_cost + negative_cost

# Example of how prediction vs actual impacts loss
# Prediction is the exact opposite of actual. Loss/Cost should be very high
actual = pd.Series([1, 0, 1])
predicted = pd.Series([0.001, .9999, 0.0001])
print('Loss: {0:0.3f}'.format(compute_logistic_cost(actual, predicted)))

# Prediction is close to actual. Loss/Cost should be very low
y_actual = pd.Series([1, 0, 1])
y_predicted = pd.Series([0.9, 0.1, 0.8])
print('Loss: {0:0.3f}'.format(compute_logistic_cost(y_actual, y_predicted)))

# Prediction is at the midpoint. Loss/Cost should be high
y_actual = pd.Series([1, 0, 1])
y_predicted = pd.Series([0.5, 0.5, 0.5])
print('Loss: {0:0.3f}'.format(compute_logistic_cost(y_actual, y_predicted)))

# Prediction leans the right way but is not confident. Loss/Cost should be moderate
y_actual = pd.Series([1, 0, 1])
y_predicted = pd.Series([0.8, 0.4, 0.7])
print('Loss: {0:0.3f}'.format(compute_logistic_cost(y_actual, y_predicted)))

weight = pd.Series(np.linspace(-1.5, 5, num=100))
cost_at_wt = []
for w1 in weight:
    y_calculated = []
    for x in df.Hours:
        y_calculated.append(sigmoid_func(straight_line_weight(w1, x)))
    cost_at_wt.append(compute_logistic_cost(df.Pass, pd.Series(y_calculated)))

fig = plt.figure(figsize=(12, 8))
plt.scatter(x=weight, y=cost_at_wt)
plt.xlabel('Weight')
plt.ylabel('Cost')
plt.grid(True)
plt.axvline(x=1.5, ymin=0, ymax=100, label='Minimal loss')
plt.axhline(y=6.5, xmin=0, xmax=6)
plt.title('Finding optimal weights')
plt.legend()
17-09-27-AWS Machine Learning A Complete Guide With Python/07 - Binary Classification/01 - ml_logistic_cost_example.ipynb
arcyfelix/Courses
apache-2.0
์ฃผ์˜: ํŒŒ์ด์ฌ 3์˜ ๊ฒฝ์šฐ input() ํ•จ์ˆ˜๋ฅผ raw_input() ๋Œ€์‹ ์— ์‚ฌ์šฉํ•ด์•ผ ํ•œ๋‹ค. ์œ„ ์ฝ”๋“œ๋Š” ์ •์ˆ˜๋“ค์˜ ์ œ๊ณฑ์„ ๊ณ„์‚ฐํ•˜๋Š” ํ”„๋กœ๊ทธ๋žจ์ด๋‹ค. ํ•˜์ง€๋งŒ ์‚ฌ์šฉ์ž๊ฐ€ ๊ฒฝ์šฐ์— ๋”ฐ๋ผ ์ •์ˆ˜ ์ด์™ธ์˜ ๊ฐ’์„ ์ž…๋ ฅํ•˜๋ฉด ์‹œ์Šคํ…œ์ด ๋‹ค์šด๋œ๋‹ค. ์ด์— ๋Œ€ํ•œ ํ•ด๊ฒฐ์ฑ…์„ ๋‹ค๋ฃจ๊ณ ์ž ํ•œ๋‹ค. ์˜ค๋ฅ˜ ์˜ˆ์ œ ๋จผ์ € ์˜ค๋ฅ˜์˜ ๋‹ค์–‘ํ•œ ์˜ˆ์ œ๋ฅผ ์‚ดํŽด๋ณด์ž. ๋‹ค์Œ ์ฝ”๋“œ๋“ค์€ ๋ชจ๋‘ ์˜ค๋ฅ˜๋ฅผ ๋ฐœ์ƒ์‹œํ‚จ๋‹ค. ์˜ˆ์ œ: 0์œผ๋กœ ๋‚˜๋ˆ„๊ธฐ ์˜ค๋ฅ˜ python 4.6/0 ์˜ค๋ฅ˜ ์„ค๋ช…: 0์œผ๋กœ ๋‚˜๋ˆŒ ์ˆ˜ ์—†๋‹ค. ์˜ˆ์ œ: ๋ฌธ๋ฒ• ์˜ค๋ฅ˜ python sentence = 'I am a sentence ์˜ค๋ฅ˜ ์„ค๋ช…: ๋ฌธ์ž์—ด ์–‘ ๋์˜ ๋”ฐ์˜ดํ‘œ๊ฐ€ ์ง์ด ๋งž์•„์•ผ ํ•œ๋‹ค. * ์ž‘์€ ๋”ฐ์˜ดํ‘œ๋ผ๋ฆฌ ๋˜๋Š” ํฐ ๋”ฐ์˜ดํ‘œ๋ผ๋ฆฌ ์˜ˆ์ œ: ๋“ค์—ฌ์“ฐ๊ธฐ ๋ฌธ๋ฒ• ์˜ค๋ฅ˜ python for i in range(3): j = i * 2 print(i, j) ์˜ค๋ฅ˜ ์„ค๋ช…: 2๋ฒˆ ์ค„๊ณผ 3๋ฒˆ ์ค„์˜ ๋“ค์—ฌ์“ฐ๊ธฐ ์ •๋„๊ฐ€ ๋™์ผํ•ด์•ผ ํ•œ๋‹ค. ์˜ˆ์ œ: ์ž๋ฃŒํ˜• ์˜ค๋ฅ˜ ```python new_string = 'cat' - 'dog' new_string = 'cat' * 'dog' new_string = 'cat' / 'dog' new_string = 'cat' + 3 new_string = 'cat' - 3 new_string = 'cat' / 3 ``` ์˜ค๋ฅ˜ ์„ค๋ช…: ๋ฌธ์ž์—ด ์ž๋ฃŒํ˜•๋ผ๋ฆฌ์˜ ํ•ฉ, ๋ฌธ์ž์—ด๊ณผ ์ •์ˆ˜์˜ ๊ณฑ์…ˆ๋งŒ ์ •์˜๋˜์–ด ์žˆ๋‹ค. ์˜ˆ์ œ: ์ด๋ฆ„ ์˜ค๋ฅ˜ python print(party) ์˜ค๋ฅ˜ ์„ค๋ช…: ๋ฏธ๋ฆฌ ์„ ์–ธ๋œ ๋ณ€์ˆ˜๋งŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค. ์˜ˆ์ œ: ์ธ๋ฑ์Šค ์˜ค๋ฅ˜ python a_string = 'abcdefg' a_string[12] ์˜ค๋ฅ˜ ์„ค๋ช…: ์ธ๋ฑ์Šค๋Š” ๋ฌธ์ž์—ด์˜ ๊ธธ์ด๋ณด๋‹ค ์ž‘์€ ์ˆ˜๋งŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค. ์˜ˆ์ œ: ๊ฐ’ ์˜ค๋ฅ˜ python int(a_string) ์˜ค๋ฅ˜ ์„ค๋ช…: int() ํ•จ์ˆ˜๋Š” ์ •์ˆ˜๋กœ๋งŒ ๊ตฌ์„ฑ๋œ ๋ฌธ์ž์—ด๋งŒ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ๋‹ค. ์˜ˆ์ œ: ์†์„ฑ ์˜ค๋ฅ˜ python print(a_string.len()) ์˜ค๋ฅ˜ ์„ค๋ช…: ๋ฌธ์ž์—ด ์ž๋ฃŒํ˜•์—๋Š” len() ๋ฉ”์†Œ๋“œ๊ฐ€ ์กด์žฌํ•˜์ง€ ์•Š๋Š”๋‹ค. ์ฃผ์˜: len() ์ด๋ผ๋Š” ํ•จ์ˆ˜๋Š” ๋ฌธ์ž์—ด์˜ ๊ธธ์ด๋ฅผ ํ™•์ธํ•˜์ง€๋งŒ ๋ฌธ์ž์—ด ๋ฉ”์†Œ๋“œ๋Š” ์•„๋‹ˆ๋‹ค. ์ดํ›„์— ๋‹ค๋ฃฐ ๋ฆฌ์ŠคํŠธ, ํŠœํ”Œ ๋“ฑ์— ๋Œ€ํ•ด์„œ๋„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ํ•จ์ˆ˜์ด๋‹ค. 
์˜ค๋ฅ˜ ํ™•์ธ ์•ž์„œ ์–ธ๊ธ‰ํ•œ ์ฝ”๋“œ๋“ค์„ ์‹คํ–‰ํ•˜๋ฉด ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•˜๊ณ  ์–ด๋””์„œ ์–ด๋–ค ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•˜์˜€๋Š”๊ฐ€์— ๋Œ€ํ•œ ์ •๋ณด๋ฅผ ํŒŒ์ด์ฌ ํ•ด์„๊ธฐ๊ฐ€ ๋ฐ”๋กœ ์•Œ๋ ค ์ค€๋‹ค. ์˜ˆ์ œ
sentence = 'I am a sentence
previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb
liganega/Gongsu-DataSci
gpl-3.0
์˜ค๋ฅ˜๋ฅผ ํ™•์ธํ•˜๋Š” ๋ฉ”์‹œ์ง€๊ฐ€ ์ฒ˜์Œ ๋ณผ ๋•Œ๋Š” ๋งค์šฐ ์ƒ์†Œํ•˜๋‹ค. ์œ„ ์˜ค๋ฅ˜ ๋ฉ”์‹œ์ง€๋ฅผ ๊ฐ„๋‹จํ•˜๊ฒŒ ์‚ดํŽด๋ณด๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™๋‹ค. File "&lt;ipython-input-37-a6097ed4dc2e&gt;", line 1 1๋ฒˆ ์ค„์—์„œ ์˜ค๋ฅ˜ ๋ฐœ์ƒ sentence = 'I am a sentence ^ ์˜ค๋ฅ˜ ๋ฐœ์ƒ ์œ„์น˜ ๋ช…์‹œ SyntaxError: EOL while scanning string literal ์˜ค๋ฅ˜ ์ข…๋ฅ˜ ํ‘œ์‹œ: ๋ฌธ๋ฒ• ์˜ค๋ฅ˜(SyntaxError) ์˜ˆ์ œ ์•„๋ž˜ ์˜ˆ์ œ๋Š” 0์œผ๋กœ ๋‚˜๋ˆŒ ๋•Œ ๋ฐœ์ƒํ•˜๋Š” ์˜ค๋ฅ˜๋ฅผ ๋‚˜ํƒ€๋‚ธ๋‹ค. ์˜ค๋ฅ˜์— ๋Œ€ํ•œ ์ •๋ณด๋ฅผ ์ž˜ ์‚ดํŽด๋ณด๋ฉด์„œ ์–ด๋–ค ๋‚ด์šฉ์„ ๋‹ด๊ณ  ์žˆ๋Š”์ง€ ํ™•์ธํ•ด ๋ณด์•„์•ผ ํ•œ๋‹ค.
a = 0
4/a
previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb
liganega/Gongsu-DataSci
gpl-3.0
์˜ค๋ฅ˜์˜ ์ข…๋ฅ˜ ์•ž์„œ ์˜ˆ์ œ๋“ค์„ ํ†ตํ•ด ์‚ดํŽด ๋ณด์•˜๋“ฏ์ด ๋‹ค์–‘ํ•œ ์ข…๋ฅ˜์˜ ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•˜๋ฉฐ, ์ฝ”๋“œ๊ฐ€ ๊ธธ์–ด์ง€๊ฑฐ๋‚˜ ๋ณต์žกํ•ด์ง€๋ฉด ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•  ๊ฐ€๋Šฅ์„ฑ์€ ์ ์ฐจ ์ปค์ง„๋‹ค. ์˜ค๋ฅ˜์˜ ์ข…๋ฅ˜๋ฅผ ํŒŒ์•…ํ•˜๋ฉด ์–ด๋””์„œ ์™œ ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•˜์˜€๋Š”์ง€๋ฅผ ๋ณด๋‹ค ์‰ฝ๊ฒŒ ํŒŒ์•…ํ•˜์—ฌ ์ฝ”๋“œ๋ฅผ ์ˆ˜์ •ํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋œ๋‹ค. ๋”ฐ๋ผ์„œ ์ฝ”๋“œ์˜ ๋ฐœ์ƒ์›์ธ์„ ๋ฐ”๋กœ ์•Œ์•„๋‚ผ ์ˆ˜ ์žˆ์–ด์•ผ ํ•˜๋ฉฐ ์ด๋ฅผ ์œ„ํ•ด์„œ๋Š” ์˜ค๋ฅ˜ ๋ฉ”์‹œ์ง€๋ฅผ ์ œ๋Œ€๋กœ ํ™•์ธํ•  ์ˆ˜ ์žˆ์–ด์•ผ ํ•œ๋‹ค. ํ•˜์ง€๋งŒ ์—ฌ๊ธฐ์„œ๋Š” ์–ธ๊ธ‰๋œ ์˜ˆ์ œ ์ •๋„์˜ ์ˆ˜์ค€๋งŒ ๋‹ค๋ฃจ๊ณ  ๋„˜์–ด๊ฐ„๋‹ค. ์ฝ”๋”ฉ์„ ํ•˜๋‹ค ๋ณด๋ฉด ์–ด์ฐจํ”ผ ๋‹ค์–‘ํ•œ ์˜ค๋ฅ˜์™€ ๋งˆ์ฃผ์น˜๊ฒŒ ๋  ํ…๋ฐ ๊ทธ๋•Œ๋งˆ๋‹ค ์Šค์Šค๋กœ ์˜ค๋ฅ˜์˜ ๋‚ด์šฉ๊ณผ ์›์ธ์„ ํ™•์ธํ•ด ๋‚˜๊ฐ€๋Š” ๊ณผ์ •์„ ํ†ตํ•ด ๋ณด๋‹ค ๋งŽ์€ ๊ฒฝํ—˜์„ ์Œ“๋Š” ๊ธธ ์™ธ์—๋Š” ๋‹ฌ๋ฆฌ ๋ฐฉ๋ฒ•์ด ์—†๋‹ค. ์˜ˆ์™ธ ์ฒ˜๋ฆฌ ์ฝ”๋“œ์— ๋ฌธ๋ฒ• ์˜ค๋ฅ˜๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ๋Š” ๊ฒฝ์šฐ ์•„์˜ˆ ์‹คํ–‰๋˜์ง€ ์•Š๋Š”๋‹ค. ๊ทธ๋ ‡์ง€ ์•Š์€ ๊ฒฝ์šฐ์—๋Š” ์ผ๋‹จ ์‹คํ–‰์ด ๋˜๊ณ  ์ค‘๊ฐ„์— ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•˜๋ฉด ๋ฐ”๋กœ ๋ฉˆ์ถฐ๋ฒ„๋ฆฐ๋‹ค. ์ด๋ ‡๊ฒŒ ์ค‘๊ฐ„์— ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ๋Š” ๊ฒฝ์šฐ๋ฅผ ๋ฏธ๋ฆฌ ์ƒ๊ฐํ•˜์—ฌ ๋Œ€๋น„ํ•˜๋Š” ๊ณผ์ •์„ ์˜ˆ์™ธ ์ฒ˜๋ฆฌ(exception handling)๋ผ๊ณ  ๋ถ€๋ฅธ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•˜๋”๋ผ๋„ ์˜ค๋ฅ˜๋ฐœ์ƒ ์ด์ „๊นŒ์ง€ ์ƒ์„ฑ๋œ ์ •๋ณด๋“ค์„ ์ €์žฅํ•˜๊ฑฐ๋‚˜, ์˜ค๋ฅ˜๋ฐœ์ƒ ์ด์œ ๋ฅผ ์ข€ ๋” ์ž์„ธํžˆ ๋‹ค๋ฃจ๊ฑฐ๋‚˜, ์•„๋‹ˆ๋ฉด ์˜ค๋ฅ˜๋ฐœ์ƒ์— ๋Œ€ํ•œ ๋ณด๋‹ค ์ž์„ธํ•œ ์ •๋ณด๋ฅผ ์‚ฌ์šฉ์ž์—๊ฒŒ ์•Œ๋ ค์ฃผ๊ธฐ ์œ„ํ•ด ์˜ˆ์™ธ ์ฒ˜๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•œ๋‹ค. ์˜ˆ์ œ ์•„๋ž˜ ์ฝ”๋“œ๋Š” raw_input() ํ•จ์ˆ˜๋ฅผ ์ด์šฉํ•˜์—ฌ ์‚ฌ์šฉ์ž๋กœ๋ถ€ํ„ฐ ์ˆซ์ž๋ฅผ ์ž…๋ ฅ๋ฐ›์•„ ๊ทธ ์ˆซ์ž์˜ ์ œ๊ณฑ์„ ๋ฆฌํ„ดํ•˜๊ณ ์ž ํ•˜๋Š” ๋‚ด์šฉ์„ ๋‹ด๊ณ  ์žˆ์œผ๋ฉฐ, ์ฝ”๋“œ์—๋Š” ๋ฌธ๋ฒ•์  ์˜ค๋ฅ˜๊ฐ€ ์—†๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•˜๋ฉด ์ˆซ์ž๋ฅผ ์ž…๋ ฅํ•˜๋ผ๋Š” ์ฐฝ์ด ๋‚˜์˜จ๋‹ค. ์—ฌ๊ธฐ์— ์ˆซ์ž 3์„ ์ž…๋ ฅํ•˜๋ฉด ์ •์ƒ์ ์œผ๋กœ ์ž‘๋™ํ•˜์ง€๋งŒ ์˜ˆ๋ฅผ ๋“ค์–ด, 3.2๋ฅผ ์ž…๋ ฅํ•˜๋ฉด ๊ฐ’ ์˜ค๋ฅ˜(value error)๊ฐ€ ๋ฐœ์ƒํ•œ๋‹ค.
from __future__ import print_function

number_to_square = raw_input("A number please")

# Note that the variable number_to_square has type str (string).
# To do arithmetic with it, you must first convert it to int.
number = int(number_to_square)
print("The square is", number**2)
previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb
liganega/Gongsu-DataSci
gpl-3.0
3.2๋ฅผ ์ž…๋ ฅํ–ˆ์„ ๋•Œ ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•˜๋Š” ์ด์œ ๋Š” int() ํ•จ์ˆ˜๊ฐ€ ์ •์ˆ˜ ๋ชจ์–‘์˜ ๋ฌธ์ž์—ด๋งŒ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค. ์‚ฌ์‹ค ์ •์ˆ˜๋“ค์˜ ์ œ๊ณฑ์„ ๊ณ„์‚ฐํ•˜๋Š” ํ”„๋กœ๊ทธ๋žจ์„ ์ž‘์„ฑํ•˜์˜€์ง€๋งŒ ๊ฒฝ์šฐ์— ๋”ฐ๋ผ ์ •์ˆ˜ ์ด์™ธ์˜ ๊ฐ’์„ ์ž…๋ ฅํ•˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋ฐœ์ƒํ•˜๊ฒŒ ๋˜๋ฉฐ, ์ด๋Ÿฐ ๊ฒฝ์šฐ๋ฅผ ๋Œ€๋น„ํ•ด์•ผ ํ•œ๋‹ค. ์ฆ‰, ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•  ๊ฒƒ์„ ๋ฏธ๋ฆฌ ์˜ˆ์ƒํ•ด์•ผ ํ•˜๋ฉฐ, ์–ด๋–ป๊ฒŒ ๋Œ€์ฒ˜ํ•ด์•ผ ํ• ์ง€ ์ค€๋น„ํ•ด์•ผ ํ•˜๋Š”๋ฐ, try ... except ...๋ฌธ์„ ์ด์šฉํ•˜์—ฌ ์˜ˆ์™ธ๋ฅผ ์ฒ˜๋ฆฌํ•˜๋Š” ๋ฐฉ์‹์„ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค.
number_to_square = raw_input("A number please:")

try:
    number = int(number_to_square)
    print("The square is", number ** 2)
except:
    print("You must enter an integer.")
previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb
liganega/Gongsu-DataSci
gpl-3.0
์˜ค๋ฅ˜ ์ข…๋ฅ˜์— ๋งž์ถ”์–ด ๋‹ค์–‘ํ•œ ๋Œ€์ฒ˜๋ฅผ ํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ์˜ค๋ฅ˜์˜ ์ข…๋ฅ˜๋ฅผ ๋ช…์‹œํ•˜์—ฌ ์˜ˆ์™ธ์ฒ˜๋ฆฌ๋ฅผ ํ•˜๋ฉด ๋œ๋‹ค. ์•„๋ž˜ ์ฝ”๋“œ๋Š” ์ž…๋ ฅ ๊ฐ‘์— ๋”ฐ๋ผ ๋‹ค๋ฅธ ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•˜๊ณ  ๊ทธ์— ์ƒ์‘ํ•˜๋Š” ๋ฐฉ์‹์œผ๋กœ ์˜ˆ์™ธ์ฒ˜๋ฆฌ๋ฅผ ์‹คํ–‰ํ•œ๋‹ค. ๊ฐ’ ์˜ค๋ฅ˜(ValueError)์˜ ๊ฒฝ์šฐ
number_to_square = raw_input("A number please: ")

try:
    number = int(number_to_square)
    a = 5/(number - 4)
    print("The result is", a)
except ValueError:
    print("You must enter an integer.")
except ZeroDivisionError:
    print("Please enter a number other than 4.")
previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb
liganega/Gongsu-DataSci
gpl-3.0
The division-by-zero error (ZeroDivisionError) case:
number_to_square = raw_input("A number please: ")

try:
    number = int(number_to_square)
    a = 5/(number - 4)
    print("The result is", a)
except ValueError:
    print("You must enter an integer.")
except ZeroDivisionError:
    print("Please enter a number other than 4.")
previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb
liganega/Gongsu-DataSci
gpl-3.0
์ฃผ์˜: ์ด์™€ ๊ฐ™์ด ๋ฐœ์ƒํ•  ์ˆ˜ ์˜ˆ์™ธ๋ฅผ ๊ฐ€๋Šฅํ•œ ํ•œ ๋ชจ๋‘ ์—ผ๋‘ํ•˜๋Š” ํ”„๋กœ๊ทธ๋žจ์„ ๊ตฌํ˜„ํ•ด์•ผ ํ•˜๋Š” ์ผ์€ ๋งค์šฐ ์–ด๋ ค์šด ์ผ์ด๋‹ค. ์•ž์„œ ๋ณด์•˜๋“ฏ์ด ์˜ค๋ฅ˜์˜ ์ข…๋ฅ˜๋ฅผ ์ •ํ™•ํžˆ ์•Œ ํ•„์š”๊ฐ€ ๋ฐœ์ƒํ•œ๋‹ค. ๋‹ค์Œ ์˜ˆ์ œ์—์†Œ ๋ณด๋“ฏ์ด ์˜ค๋ฅ˜์˜ ์ข…๋ฅ˜๋ฅผ ํ‹€๋ฆฌ๊ฒŒ ๋ช…์‹œํ•˜๋ฉด ์˜ˆ์™ธ ์ฒ˜๋ฆฌ๊ฐ€ ์ œ๋Œ€๋กœ ์ž‘๋™ํ•˜์ง€ ์•Š๋Š”๋‹ค.
try:
    a = 1/0
except ValueError:
    print("This program stops here.")
previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb
liganega/Gongsu-DataSci
gpl-3.0
์˜ค๋ฅ˜์— ๋Œ€ํ•œ ๋ณด๋‹ค ์ž์„ธํ•œ ์ •๋ณด ํŒŒ์ด์ฌ์—์„œ ๋‹ค๋ฃจ๋Š” ์˜ค๋ฅ˜์— ๋Œ€ํ•œ ๋ณด๋‹ค ์ž์„ธํ•œ ์ •๋ณด๋Š” ์•„๋ž˜ ์‚ฌ์ดํŠธ๋“ค์— ์ƒ์„ธํ•˜๊ฒŒ ์•ˆ๋‚ด๋˜์–ด ์žˆ๋‹ค. ํŒŒ์ด์ฌ ๊ธฐ๋ณธ ๋‚ด์žฅ ์˜ค๋ฅ˜ ์ •๋ณด ๋ฌธ์„œ: https://docs.python.org/3.4/library/exceptions.html ํŒŒ์ด์ฌ ์˜ˆ์™ธ์ฒ˜๋ฆฌ ์ •๋ณด ๋ฌธ์„œ: https://docs.python.org/3.4/tutorial/errors.html ์—ฐ์Šต๋ฌธ์ œ ์—ฐ์Šต ์•„๋ž˜ ์ฝ”๋“œ๋Š” 100์„ ์ž…๋ ฅํ•œ ๊ฐ’์œผ๋กœ ๋‚˜๋ˆ„๋Š” ํ•จ์ˆ˜์ด๋‹ค. ๋‹ค๋งŒ 0์„ ์ž…๋ ฅํ•  ๊ฒฝ์šฐ 0์œผ๋กœ ๋‚˜๋ˆ„๊ธฐ ์˜ค๋ฅ˜(ZeroDivisionError)๊ฐ€ ๋ฐœ์ƒํ•œ๋‹ค.
from __future__ import print_function

number_to_square = raw_input("A number to divide 100: ")
number = int(number_to_square)
print("100 divided by the entered value is", 100/number)
previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb
liganega/Gongsu-DataSci
gpl-3.0
์•„๋ž˜ ๋‚ด์šฉ์ด ์ถฉ์กฑ๋˜๋„๋ก ์œ„ ์ฝ”๋“œ๋ฅผ ์ˆ˜์ •ํ•˜๋ผ. ๋‚˜๋ˆ—์…ˆ์ด ๋ถ€๋™์†Œ์ˆ˜์ ์œผ๋กœ ๊ณ„์‚ฐ๋˜๋„๋ก ํ•œ๋‹ค. 0์ด ์•„๋‹Œ ์ˆซ์ž๊ฐ€ ์ž…๋ ฅ๋  ๊ฒฝ์šฐ 100์„ ๊ทธ ์ˆซ์ž๋กœ ๋‚˜๋ˆˆ๋‹ค. 0์ด ์ž…๋ ฅ๋  ๊ฒฝ์šฐ 0์ด ์•„๋‹Œ ์ˆซ์ž๋ฅผ ์ž…๋ ฅํ•˜๋ผ๊ณ  ์ „๋‹ฌํ•œ๋‹ค. ์ˆซ์ž๊ฐ€ ์•„๋‹Œ ๊ฐ’์ด ์ž…๋ ฅ๋  ๊ฒฝ์šฐ ์ˆซ์ž๋ฅผ ์ž…๋ ฅํ•˜๋ผ๊ณ  ์ „๋‹ฌํ•œ๋‹ค. ๊ฒฌ๋ณธ๋‹ต์•ˆ:
number_to_square = raw_input("A number to divide 100: ")

try:
    number = float(number_to_square)
    print("100 divided by the entered value is", 100/number)
except ZeroDivisionError:
    raise ZeroDivisionError('Enter a nonzero number.')
except ValueError:
    raise ValueError('Enter a number.')
previous/notes2017/W03/GongSu06_Errors_and_Exception_Handling.ipynb
liganega/Gongsu-DataSci
gpl-3.0
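Since the note at the top points out that Python 3 uses input() instead of raw_input(), a Python 3 sketch of the same solution is given below. The function name `divide_100` is made up, and wrapping the logic in a function lets it be exercised without an interactive prompt:

```python
# A Python 3 sketch of the exercise solution (hypothetical helper name).
def divide_100(text):
    try:
        number = float(text)
        return 100 / number
    except ZeroDivisionError:
        raise ZeroDivisionError('Enter a nonzero number.')
    except ValueError:
        raise ValueError('Enter a number.')

print(divide_100('4'))  # 25.0
```

An interactive version would simply call `divide_100(input("A number to divide 100: "))`.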
Define the input string:
data = 'hello world'
print(data)
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
rahulremanan/python_tutorial
mit
Define universe of possible input values:
alphabet = 'abcdefghijklmnopqrstuvwxyz '
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
rahulremanan/python_tutorial
mit
Define a mapping of characters to corresponding integers:
char_to_int = dict((c, i) for i, c in enumerate(alphabet))
int_to_char = dict((i, c) for i, c in enumerate(alphabet))
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
rahulremanan/python_tutorial
mit
Integer encoding of the input data:
integer_encoded = [char_to_int[char] for char in data]
print(integer_encoded)
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
rahulremanan/python_tutorial
mit
One hot encoding:
onehot_encoded = list()
for value in integer_encoded:
    letter = [0 for _ in range(len(alphabet))]
    letter[value] = 1
    onehot_encoded.append(letter)
print(onehot_encoded)
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
rahulremanan/python_tutorial
mit
Decoding one-hot encoded data -- First character:
inverted = int_to_char[np.argmax(onehot_encoded[0])]
print(inverted)
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
rahulremanan/python_tutorial
mit
Decoding one-hot encoded data -- Entire one-hot encoded input:
decoded = list()
for i in range(len(onehot_encoded)):
    decoded_char = int_to_char[np.argmax(onehot_encoded[i])]
    decoded.append(decoded_char)
print(''.join([str(item) for item in decoded]))
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
rahulremanan/python_tutorial
mit
Part 02a -- One-hot encoding using scikit-learn

Importing libraries:
from numpy import array
from numpy import argmax
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
rahulremanan/python_tutorial
mit
Define the example data:
data = ['cold', 'cold', 'warm', 'cold', 'hot', 'hot', 'warm', 'cold', 'warm', 'hot']
values = array(data)
print(values)
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
rahulremanan/python_tutorial
mit
Integer encoding:
label_encoder = LabelEncoder()
label_encoded = label_encoder.fit_transform(values)
print(label_encoded)
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
rahulremanan/python_tutorial
mit
Binary encoding:
onehot_encoder = OneHotEncoder(sparse=False)
label_encoded = label_encoded.reshape(len(label_encoded), 1)
onehot_encoded = onehot_encoder.fit_transform(label_encoded)
print(onehot_encoded)
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
rahulremanan/python_tutorial
mit
Invert first example:
inverted = label_encoder.inverse_transform([argmax(onehot_encoded[0, :])])
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
rahulremanan/python_tutorial
mit
Output the decoded example:
print(inverted)
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
rahulremanan/python_tutorial
mit
Part 02b -- One-hot encoding using Keras

Importing libraries:
from numpy import array
from numpy import argmax
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from keras.utils import to_categorical
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
rahulremanan/python_tutorial
mit
Define the variable:
data = ['cold', 'cold', 'warm', 'cold', 'hot', 'hot', 'warm', 'cold', 'warm', 'hot']
values = array(data)
print(values)
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
rahulremanan/python_tutorial
mit
Integer encoding:
label_encoder = LabelEncoder()
label_encoded = label_encoder.fit_transform(values)
print(label_encoded)

# one hot encode
encoded = to_categorical(label_encoded)
print(encoded)

# invert encoding (inverse_transform expects an array-like, so wrap the scalar)
label_encoded = argmax(encoded[0])
inverted = label_encoder.inverse_transform([label_encoded])
print(inverted)
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
rahulremanan/python_tutorial
mit
Part 02c -- One-hot encoding using Keras for numerical categories:
from numpy import array
from numpy import argmax
from keras.utils import to_categorical

# define example
data = [1, 3, 2, 0, 3, 2, 2, 1, 0, 1]
data = array(data)
print(data)

# one hot encode
encoded = to_categorical(data)
print(encoded)

# invert encoding
inverted = argmax(encoded[0])
print(inverted)
NLP/01-One_hot_encoding/notebooks/One_hot_encoding.ipynb
rahulremanan/python_tutorial
mit
Linear models

Linear models are useful when little data is available or for very large feature spaces as in text classification. In addition, they form a good case study for regularization.

Linear models for regression

All linear models for regression learn a coefficient parameter coef_ and an offset intercept_ to make predictions using a linear combination of features:

y_pred = x_test[0] * coef_[0] + ... + x_test[n_features-1] * coef_[n_features-1] + intercept_

The difference between the linear models for regression is what kind of restrictions or penalties are put on coef_ as regularization, in addition to fitting the training data well. The most standard linear model is the 'ordinary least squares regression', often simply called 'linear regression'. It doesn't put any additional restrictions on coef_, so when the number of features is large, it becomes ill-posed and the model overfits. Let us generate a simple simulation, to see the behavior of these models.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

X, y, true_coefficient = make_regression(n_samples=200, n_features=30,
                                         n_informative=10, noise=100,
                                         coef=True, random_state=5)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=5,
                                                    train_size=60, test_size=140)
print(X_train.shape)
print(y_train.shape)
notebooks/17.In_Depth-Linear_Models.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
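The prediction rule quoted above (y_pred = x · coef_ + intercept_) can be checked directly against model.predict. This sketch (not from the notebook) fits ordinary least squares on the same kind of simulated data:

```python
# A minimal sketch: predictions of a fitted LinearRegression are just
# a dot product with coef_ plus intercept_.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=200, n_features=30, noise=100,
                       random_state=5)
model = LinearRegression().fit(X, y)

manual = X @ model.coef_ + model.intercept_  # y_pred by hand
assert np.allclose(manual, model.predict(X))
```

Every regularized variant discussed below (Ridge, Lasso) uses this same prediction rule; they differ only in the penalty applied to coef_ during fitting.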
Lasso (L1 penalty)

The Lasso estimator is useful to impose sparsity on the coefficients. In other words, it is to be preferred if we believe that many of the features are not relevant. This is done via the so-called l1 penalty.

$$ \text{min}_{w, b} \sum_i \frac{1}{2} || w^\mathsf{T}x_i + b - y_i||^2 + \alpha ||w||_1$$
from sklearn.linear_model import Lasso, LinearRegression, Ridge

# order in which to plot the coefficients; assumed to match the helper
# defined earlier in the notebook
coefficient_sorting = np.argsort(true_coefficient)[::-1]

lasso_models = {}
training_scores = []
test_scores = []

for alpha in [30, 10, 1, .01]:
    lasso = Lasso(alpha=alpha).fit(X_train, y_train)
    training_scores.append(lasso.score(X_train, y_train))
    test_scores.append(lasso.score(X_test, y_test))
    lasso_models[alpha] = lasso

plt.figure()
plt.plot(training_scores, label="training scores")
plt.plot(test_scores, label="test scores")
plt.xticks(range(4), [30, 10, 1, .01])
plt.legend(loc="best")

plt.figure(figsize=(10, 5))
plt.plot(true_coefficient[coefficient_sorting], "o", label="true", c='b')
for i, alpha in enumerate([30, 10, 1, .01]):
    plt.plot(lasso_models[alpha].coef_[coefficient_sorting], "o",
             label="alpha = %.2f" % alpha, c=plt.cm.summer(i / 3.))
plt.legend(loc="best")

plt.figure(figsize=(10, 5))
# plot_learning_curve is a helper defined earlier in the notebook
plot_learning_curve(LinearRegression(), X, y)
plot_learning_curve(Ridge(alpha=10), X, y)
plot_learning_curve(Lasso(alpha=10), X, y)
notebooks/17.In_Depth-Linear_Models.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
Similar to the Ridge/Lasso separation, you can set the penalty parameter to 'l1' to enforce sparsity of the coefficients (similar to Lasso) or 'l2' to encourage smaller coefficients (similar to Ridge).

Multi-class linear classification
from sklearn.datasets import make_blobs

plt.figure()
X, y = make_blobs(random_state=42)
plt.scatter(X[:, 0], X[:, 1], c=plt.cm.spectral(y / 2.));

from sklearn.svm import LinearSVC
linear_svm = LinearSVC().fit(X, y)
print(linear_svm.coef_.shape)
print(linear_svm.intercept_.shape)

plt.scatter(X[:, 0], X[:, 1], c=plt.cm.spectral(y / 2.))
line = np.linspace(-15, 15)
for coef, intercept in zip(linear_svm.coef_, linear_svm.intercept_):
    plt.plot(line, -(line * coef[0] + intercept) / coef[1])
plt.ylim(-10, 15)
plt.xlim(-10, 8);
notebooks/17.In_Depth-Linear_Models.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
Points are classified in a one-vs-rest fashion (aka one-vs-all), where we assign a test point to the class whose model has the highest confidence (in the SVM case, highest distance to the separating hyperplane) for the test point. <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li> Use LogisticRegression to classify the digits data set, and grid-search the C parameter. </li> <li> How do you think the learning curves above change when you increase or decrease alpha? Try changing the alpha parameter in ridge and lasso, and see if your intuition was correct. </li> </ul> </div>
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

digits = load_digits()
X_digits, y_digits = digits.data, digits.target

# split the dataset, apply grid-search

# %load solutions/17A_logreg_grid.py

# %load solutions/17B_learning_curve_alpha.py
notebooks/17.In_Depth-Linear_Models.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
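The one-vs-rest rule described above can be reproduced by hand (a sketch, not the notebook's code): compute one decision value per class and assign each point to the class with the highest value:

```python
# A minimal sketch of one-vs-rest prediction: the argmax over the
# per-class decision values reproduces LinearSVC.predict.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import LinearSVC

X, y = make_blobs(random_state=42)
clf = LinearSVC().fit(X, y)

# one decision value per class; the most confident class wins
scores = X @ clf.coef_.T + clf.intercept_
manual_pred = np.argmax(scores, axis=1)
assert np.array_equal(manual_pred, clf.predict(X))
```

The `scores` array here matches `clf.decision_function(X)`: one column per binary one-vs-rest classifier.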
You can apply the replace() method multiple times:
# Create a variable that stores the string 'apple'
a = 'apple'

# Create a copy of a with the ps, l, and e removed and reassign the value of a
a = a.replace('p','').replace('l','').replace('e','')
print(a)
winter2017/econ129/python/Econ129_Class_03_Complete.ipynb
letsgoexploring/teaching
mit
Now we have the tools to solve the email problem.
# Original character string
string = '"Carl Friedrich Gauss" <approximatelynormal@email.com>, "Leonhard Euler" <e@email.com>, "Bernhard Riemann" <zeta@email.com>'

# Remove <, >, and " from string and overwrite the original
string = string.replace('<','').replace('>','').replace('"','')

# Create a new variable called string_formatted with the commas replaced by the new line character '\n'
string_formatted = string.replace(', ','\n')

# Print string_formatted
print(string_formatted)
winter2017/econ129/python/Econ129_Class_03_Complete.ipynb
letsgoexploring/teaching
mit
A related problem might be to extract only the email addresses from the original string. To do this, we can use the replace() method to remove the '&lt;', '&gt;', and ',' characters. Then we use the split() method to break the string apart at the spaces. Then we loop over the resulting list of strings and keep only the strings with '@' characters in them.
string = '"Carl Friedrich Gauss" <approximatelynormal@email.com>, "Leonhard Euler" <e@email.com>, "Bernhard Riemann" <zeta@email.com>'

# Remove the <, >, ", and , characters
string = string.replace('<','').replace('>','').replace('"','').replace(',','')

# Split at the spaces and print only the pieces that contain an @
for s in string.split():
    if '@' in s:
        print(s)
winter2017/econ129/python/Econ129_Class_03_Complete.ipynb
letsgoexploring/teaching
mit
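As a side note (not used elsewhere in this lesson), the same extraction can be done in one step with the standard-library re module; the pattern below simply captures everything between the angle brackets:

```python
import re

string = '"Carl Friedrich Gauss" <approximatelynormal@email.com>, "Leonhard Euler" <e@email.com>, "Bernhard Riemann" <zeta@email.com>'

# findall returns the captured group (the text between < and >) for each match
emails = re.findall(r'<([^>]+)>', string)
for email in emails:
    print(email)
```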
Numpy NumPy is a powerful Python module for scientific computing. Among other things, NumPy defines an N-dimensional array object that is especially convenient for plotting functions and for simulating and storing time series data. NumPy also defines many useful mathematical functions, such as the sine, cosine, and exponential functions, and has excellent functions for probability and statistics, including random number generators and many cumulative density functions and probability density functions. Importing NumPy The standard way to import NumPy is under the namespace np; this is for the sake of brevity.
import numpy as np
winter2017/econ129/python/Econ129_Class_03_Complete.ipynb
letsgoexploring/teaching
mit
NumPy arrays A NumPy ndarray is a homogeneous multidimensional array. Here, homogeneous means that all of the elements of the array have the same type. An ndarray is a table of numbers (like a matrix, but with possibly more dimensions) indexed by a tuple of positive integers. The dimensions of NumPy arrays are called axes, and the number of axes is called the rank. For this course, we will work almost exclusively with 1-dimensional arrays that are effectively vectors. Occasionally, we might run into a 2-dimensional array. Basics The most straightforward way to create a NumPy array is to call the array() function, which takes a list as an argument. For example:
# Create a variable called a1 equal to a numpy array containing the numbers 1 through 5
a1 = np.array([1,2,3,4,5])
print(a1)

# Find the type of a1
print(type(a1))

# Find the shape of a1
print(np.shape(a1))

# Use ndim to find the rank or number of dimensions of a1
print(np.ndim(a1))

# Create a variable called a2 equal to a 2-dimensional numpy array containing the numbers 1 through 4
a2 = np.array([[1,2],[3,4]])
print(a2)

# Find the shape of a2
print(np.shape(a2))

# Use ndim to find the rank or number of dimensions of a2
print(np.ndim(a2))

# Create a variable called a3 equal to an empty numpy array
a3 = np.array([])
print(a3)

# Find the shape of a3
print(np.shape(a3))

# Use ndim to find the rank or number of dimensions of a3
print(np.ndim(a3))
winter2017/econ129/python/Econ129_Class_03_Complete.ipynb
letsgoexploring/teaching
mit
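The homogeneity mentioned above has a practical consequence worth a quick illustration: when the input list mixes types, NumPy picks a single dtype for the whole array, upcasting elements as needed:

```python
import numpy as np

# All integers: NumPy stores them with an integer dtype
ints = np.array([1, 2, 3])

# One float in the list: every element is upcast to float
mixed = np.array([1, 2, 3.5])

print(ints.dtype)   # an integer dtype (exact size is platform-dependent)
print(mixed.dtype)  # float64
print(mixed)        # note that 1 and 2 became 1. and 2.
```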