Dataset columns: markdown (string, 0–37k chars), code (string, 1–33.3k chars), path (string, 8–215 chars), repo_name (string, 6–77 chars), license (15 classes).
Now for the heavy lifting. We first have to come up with the weights:

- calculate the month lengths for each monthly data record
- calculate the weights using groupby('time.season')

From Norman: http://xarray.pydata.org/en/stable/time-series.html#datetime-components

Finally, we just need to multiply the weights by the Dataset and sum along the time dimension.
# Make a DataArray with the number of days in each month, size = len(time)
month_length = xr.DataArray(get_dpm(ds.time.to_index(), calendar='noleap'),
                            coords=[ds.time], name='month_length')

# Calculate the weights by grouping by 'time.season'.
seasons = month_length.groupby('time.season')
weights = seasons / seasons.sum()

# Test that the sum of the weights for each season is 1.0
np.testing.assert_allclose(weights.groupby('time.season').sum().values, np.ones(4))

# Calculate the weighted average
ds_weighted = (ds * weights).groupby('time.season').sum(dim='time')
ds_weighted

# only used for comparisons
ds_unweighted = ds.groupby('time.season').mean('time')
ds_diff = ds_weighted - ds_unweighted

# Quick plot to show the results
is_null = np.isnan(ds_unweighted['Tair'][0].values)

fig, axes = plt.subplots(nrows=4, ncols=3, figsize=(14, 12))
for i, season in enumerate(('DJF', 'MAM', 'JJA', 'SON')):
    plt.sca(axes[i, 0])
    plt.pcolormesh(np.ma.masked_where(is_null, ds_weighted['Tair'].sel(season=season).values),
                   vmin=-30, vmax=30, cmap='Spectral_r')
    plt.colorbar(extend='both')

    plt.sca(axes[i, 1])
    plt.pcolormesh(np.ma.masked_where(is_null, ds_unweighted['Tair'].sel(season=season).values),
                   vmin=-30, vmax=30, cmap='Spectral_r')
    plt.colorbar(extend='both')

    plt.sca(axes[i, 2])
    plt.pcolormesh(np.ma.masked_where(is_null, ds_diff['Tair'].sel(season=season).values),
                   vmin=-0.1, vmax=.1, cmap='RdBu_r')
    plt.colorbar(extend='both')

    for j in range(3):
        axes[i, j].axes.get_xaxis().set_ticklabels([])
        axes[i, j].axes.get_yaxis().set_ticklabels([])
        axes[i, j].axes.axis('tight')

    axes[i, 0].set_ylabel(season)

axes[0, 0].set_title('Weighted by DPM')
axes[0, 1].set_title('Equal Weighting')
axes[0, 2].set_title('Difference')

plt.tight_layout()
fig.suptitle('Seasonal Surface Air Temperature', fontsize=16, y=1.02)
notebooks/norman/xarray-ex-3.ipynb
CCI-Tools/sandbox
gpl-3.0
Problem 2. Given the matrix X below, write NumPy code that uses slicing to select only the even entries of the matrix, keeping them as a matrix. (Do not use boolean/fancy indexing!)
import numpy as np

X = np.array([[1, 1, 1, 1], [1, 2, 4, 8], [1, 3, 5, 7], [1, 4, 16, 32], [1, 5, 9, 13]])
X
X[1::2, 1:]
통계, 머신러닝 복습/160523월_6일차_기초 확률론 2 - 확률 분포 Probability Distribution/0.시험자료(누적).ipynb
kimkipyo/dss_git_kkp
mit
Problem 3. Given the matrix X below, write NumPy code that uses boolean (fancy) indexing to select only the multiples of 4, returning them as a single vector. (Do not use NumPy slicing!)
X = np.array([[1, 1, 1, 1], [1, 2, 4, 8], [1, 3, 5, 7], [1, 4, 16, 32], [1, 5, 9, 13]])
X
X[X % 4 == 0]
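Both answers can be checked programmatically; a small sanity check contrasting the two indexing styles (slicing returns a sub-matrix, boolean indexing a flat vector):

```python
import numpy as np

X = np.array([[1, 1, 1, 1],
              [1, 2, 4, 8],
              [1, 3, 5, 7],
              [1, 4, 16, 32],
              [1, 5, 9, 13]])

# Slicing keeps two dimensions: rows 1 and 3, columns 1 onwards.
evens = X[1::2, 1:]
assert evens.shape == (2, 3)
assert (evens % 2 == 0).all()        # every selected entry is even

# Boolean indexing flattens the matching entries into a 1-D vector (C order).
mult4 = X[X % 4 == 0]
assert mult4.ndim == 1
assert mult4.tolist() == [4, 8, 4, 16, 32]
```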
통계, 머신러닝 복습/160523월_6일차_기초 확률론 2 - 확률 분포 Probability Distribution/0.시험자료(누적).ipynb
kimkipyo/dss_git_kkp
mit
Problem 4. Write three lines of NumPy code that create a (5,4) matrix X of all ones and a (5,4) matrix Y of all zeros, then join them side by side, in that order, into a (5,8) matrix.
X = np.ones((5, 4))
Y = np.zeros((5, 4))
np.hstack([X, Y])
통계, 머신러닝 복습/160523월_6일차_기초 확률론 2 - 확률 분포 Probability Distribution/0.시험자료(누적).ipynb
kimkipyo/dss_git_kkp
mit
Problem 5. Using only the arange and reshape commands, write one line of NumPy code that produces the following matrix.
np.arange(1, 6)
np.arange(1, 6).reshape(5, 1)
통계, 머신러닝 복습/160523월_6일차_기초 확률론 2 - 확률 분포 Probability Distribution/0.시험자료(누적).ipynb
kimkipyo/dss_git_kkp
mit
Problem 6. Given the array variable x below, write one line of code that builds the following matrix using only x, the newaxis command, and arithmetic operations.
x = np.arange(5)
x
x[:, np.newaxis] + 1
10 * (x[:, np.newaxis] + 1)
10 * (x[:, np.newaxis] + 1) + x
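The broadcasting at work here can be verified: x[:, np.newaxis] has shape (5, 1), so combining it with the (5,)-shaped x produces a (5, 5) matrix whose entry (i, j) is 10·(i+1)+j. A quick check:

```python
import numpy as np

x = np.arange(5)
M = 10 * (x[:, np.newaxis] + 1) + x   # (5, 1) op (5,) broadcasts to (5, 5)
assert M.shape == (5, 5)
assert M[0, 0] == 10 and M[4, 4] == 54
assert (M[:, 0] == np.array([10, 20, 30, 40, 50])).all()
```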
통계, 머신러닝 복습/160523월_6일차_기초 확률론 2 - 확률 분포 Probability Distribution/0.시험자료(누적).ipynb
kimkipyo/dss_git_kkp
mit
Problem 7. Assuming the matrix X below holds the scores of 5 students on 3 exams, write one line of code that finds each student's best score.
np.random.seed(0)
# np.random.random_integers is removed in modern NumPy; randint with an
# exclusive upper bound draws the same inclusive range 0..100
X = np.random.randint(0, 101, (5, 3))
X
X.max(axis=1)
통계, 머신러닝 복습/160523월_6일차_기초 확률론 2 - 확률 분포 Probability Distribution/0.시험자료(누적).ipynb
kimkipyo/dss_git_kkp
mit
Problem 10. Using the meshgrid and scatter commands, write one line of code that draws the following figure.
plt.scatter(*np.meshgrid(range(5), range(6)));
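As a reminder of what scatter receives here: np.meshgrid expands the two ranges into coordinate matrices that together enumerate every (x, y) pair of the grid. A quick check of the shapes involved:

```python
import numpy as np

# With the default indexing='xy', the outputs have shape (len(y), len(x)).
xx, yy = np.meshgrid(range(5), range(6))
assert xx.shape == (6, 5) and yy.shape == (6, 5)

# The flattened arrays enumerate all 30 (x, y) grid points exactly once.
pairs = set(zip(xx.ravel(), yy.ravel()))
assert len(pairs) == 30
```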
통계, 머신러닝 복습/160523월_6일차_기초 확률론 2 - 확률 분포 Probability Distribution/0.시험자료(누적).ipynb
kimkipyo/dss_git_kkp
mit
Linear algebra, Problem 12. Pick all of the identities below that do not hold in general. 2. det(cA) = c·det(A) 6. det(A+B) = det(A) + det(B) 9. tr(A)⁻¹ = tr(A⁻¹)
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(np.linalg.det(3 * A), 3 * np.linalg.det(A))                 # no.2
print(np.linalg.det(A + B), np.linalg.det(A) + np.linalg.det(B))  # no.6
print(1 / np.trace(A), np.trace(np.linalg.inv(A)))                # no.9
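For reference, the identity that does hold is det(cA) = cⁿ·det(A) for an n×n matrix; a small numerical check with the same A:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
n = A.shape[0]
c = 3.0

# det(cA) = c**n * det(A), not c * det(A): here det(A) = -2, det(3A) = -18.
np.testing.assert_allclose(np.linalg.det(c * A), c**n * np.linalg.det(A))

# tr(A)**-1 and tr(A**-1) also generally differ (0.2 vs -2.5 here).
assert not np.isclose(1 / np.trace(A), np.trace(np.linalg.inv(A)))
```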
통계, 머신러닝 복습/160523월_6일차_기초 확률론 2 - 확률 분포 Probability Distribution/0.시험자료(누적).ipynb
kimkipyo/dss_git_kkp
mit
We switch the style to a more modern one, that of ggplot:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
_doc/notebooks/sessions/seance6_graphes_correction.ipynb
sdpython/actuariat_python
mit
Election data For all the following examples we use the results of the 2012 French presidential election. If you don't have the actuariat_python module, simply copy the code of the elections_presidentielles function, which uses the read_excel function:
from actuariat_python.data import elections_presidentielles

dict_df = elections_presidentielles(local=True, agg="dep")

def cleandep(s):
    if isinstance(s, str):
        r = s.lstrip('0')
    else:
        r = str(s)
    return r

dict_df["dep1"]["Code du département"] = dict_df["dep1"]["Code du département"].apply(cleandep)
dict_df["dep2"]["Code du département"] = dict_df["dep2"]["Code du département"].apply(cleandep)
deps = dict_df["dep1"].merge(dict_df["dep2"], on="Code du département", suffixes=("T1", "T2"))

deps["rHollandeT1"] = deps['François HOLLANDE (PS)T1'] / (deps["VotantsT1"] - deps["Blancs et nulsT1"])
deps["rSarkozyT1"] = deps['Nicolas SARKOZY (UMP)T1'] / (deps["VotantsT1"] - deps["Blancs et nulsT1"])
deps["rNulT1"] = deps["Blancs et nulsT1"] / deps["VotantsT1"]
deps["rHollandeT2"] = deps["François HOLLANDE (PS)T2"] / (deps["VotantsT2"] - deps["Blancs et nulsT2"])
deps["rSarkozyT2"] = deps['Nicolas SARKOZY (UMP)T2'] / (deps["VotantsT2"] - deps["Blancs et nulsT2"])
deps["rNulT2"] = deps["Blancs et nulsT2"] / deps["VotantsT2"]

data = deps[["Code du département", "Libellé du départementT1",
             "VotantsT1", "rHollandeT1", "rSarkozyT1", "rNulT1",
             "VotantsT2", "rHollandeT2", "rSarkozyT2", "rNulT2"]]
data_elections = data  # data is sometimes overwritten further down
data.head()
_doc/notebooks/sessions/seance6_graphes_correction.ipynb
sdpython/actuariat_python
mit
locating the cities
from pyensae.datasource import download_data
download_data("villes_france.csv", url="http://sql.sh/ressources/sql-villes-france/")

cols = ["ncommune", "numero_dep", "slug", "nom", "nom_simple", "nom_reel",
        "nom_soundex", "nom_metaphone", "code_postal", "numero_commune",
        "code_commune", "arrondissement", "canton", "pop2010", "pop1999",
        "pop2012", "densite2010", "surface", "superficie", "dlong", "dlat",
        "glong", "glat", "slong", "slat", "alt_min", "alt_max"]

import pandas
villes = pandas.read_csv("villes_france.csv", header=None, low_memory=False, names=cols)
_doc/notebooks/sessions/seance6_graphes_correction.ipynb
sdpython/actuariat_python
mit
exercise 1: center the map on France We re-center the map. The only change: [-5, 10, 38, 52].
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature

fig = plt.figure(figsize=(6, 6))
ax = fig.add_subplot(1, 1, 1, projection=ccrs.PlateCarree())
ax.set_extent([-5, 10, 38, 52])
ax.add_feature(cfeature.OCEAN.with_scale('50m'))
ax.add_feature(cfeature.RIVERS.with_scale('50m'))
ax.add_feature(cfeature.BORDERS.with_scale('50m'), linestyle=':')
ax.set_title('France');
_doc/notebooks/sessions/seance6_graphes_correction.ipynb
sdpython/actuariat_python
mit
exercise 2: place the largest French cities on the map We reuse the carte_france function given in the exercise statement, modified with the result of the previous question.
def carte_france(figsize=(7, 7)):
    fig = plt.figure(figsize=figsize)
    ax = fig.add_subplot(1, 1, 1, projection=ccrs.PlateCarree())
    ax.set_extent([-5, 10, 38, 52])
    ax.add_feature(cfeature.OCEAN.with_scale('50m'))
    ax.add_feature(cfeature.RIVERS.with_scale('50m'))
    ax.add_feature(cfeature.BORDERS.with_scale('50m'), linestyle=':')
    ax.set_title('France')
    return ax

carte_france();
_doc/notebooks/sessions/seance6_graphes_correction.ipynb
sdpython/actuariat_python
mit
We keep only the cities with more than 100,000 inhabitants. Not all of them are in metropolitan France:
grosses_villes = villes[villes.pop2012 > 100000][["dlong", "dlat", "nom", "pop2012"]]
grosses_villes.describe()
grosses_villes.sort_values("dlat").head()
_doc/notebooks/sessions/seance6_graphes_correction.ipynb
sdpython/actuariat_python
mit
Saint-Denis is on Réunion island. We remove it from the set:
grosses_villes = villes[(villes.pop2012 > 100000) & (villes.dlat > 40)] \
                       [["dlong", "dlat", "nom", "pop2012"]]
_doc/notebooks/sessions/seance6_graphes_correction.ipynb
sdpython/actuariat_python
mit
We draw the desired map, adding for each city a marker whose area depends on the number of inhabitants. Its size should be proportional to the square root of the number of inhabitants.
import matplotlib.pyplot as plt

ax = carte_france()

def affiche_ville(ax, x, y, nom, pop):
    ax.plot(x, y, 'ro', markersize=pop**0.5 / 50)
    ax.text(x, y, nom)

for lon, lat, nom, pop in zip(grosses_villes["dlong"], grosses_villes["dlat"],
                              grosses_villes["nom"], grosses_villes["pop2012"]):
    affiche_ville(ax, lon, lat, nom, pop)

ax;
_doc/notebooks/sessions/seance6_graphes_correction.ipynb
sdpython/actuariat_python
mit
reminder: the zip function The zip function stitches two sequences together.
list(zip([1,2,3], ["a", "b", "c"]))
_doc/notebooks/sessions/seance6_graphes_correction.ipynb
sdpython/actuariat_python
mit
It is often used like this:
for a, b in zip([1, 2, 3], ["a", "b", "c"]):
    # do something with a and b
    print(a, b)
_doc/notebooks/sessions/seance6_graphes_correction.ipynb
sdpython/actuariat_python
mit
Without the zip function:
ax = carte_france()

def affiche_ville(ax, x, y, nom, pop):
    ax.plot(x, y, 'ro', markersize=pop**0.5 / 50)
    ax.text(x, y, nom)

def affiche_row(ax, row):
    affiche_ville(ax, row["dlong"], row["dlat"], row["nom"], row["pop2012"])

grosses_villes.apply(lambda row: affiche_row(ax, row), axis=1)
ax;
_doc/notebooks/sessions/seance6_graphes_correction.ipynb
sdpython/actuariat_python
mit
Or again:
import matplotlib.pyplot as plt

ax = carte_france()

def affiche_ville(ax, x, y, nom, pop):
    ax.plot(x, y, 'ro', markersize=pop**0.5 / 50)
    ax.text(x, y, nom)

for i in range(0, grosses_villes.shape[0]):
    # Important: the rows keep the index of the original DataFrame. Since the
    # rows were filtered to keep only the large cities, we must either call
    # reset_index or look up the label of the i-th row, as done here.
    ind = grosses_villes.index[i]
    lon, lat = grosses_villes.loc[ind, "dlong"], grosses_villes.loc[ind, "dlat"]
    nom, pop = grosses_villes.loc[ind, "nom"], grosses_villes.loc[ind, "pop2012"]
    affiche_ville(ax, lon, lat, nom, pop)

ax;
_doc/notebooks/sessions/seance6_graphes_correction.ipynb
sdpython/actuariat_python
mit
exercise 3: election results by département We go back to the election results and first build a dictionary { departement: winner }.
data_elections.shape, data_elections[data_elections.rHollandeT2 > data_elections.rSarkozyT2].shape
_doc/notebooks/sessions/seance6_graphes_correction.ipynb
sdpython/actuariat_python
mit
There are 63 départements where Hollande is the winner.
hollande_gagnant = dict(zip(data_elections["Libellé du départementT1"],
                            data_elections.rHollandeT2 > data_elections.rSarkozyT2))
list(hollande_gagnant.items())[:5]
_doc/notebooks/sessions/seance6_graphes_correction.ipynb
sdpython/actuariat_python
mit
We retrieve the shape of each département:
from pyensae.datasource import download_data

try:
    download_data("GEOFLA_2-1_DEPARTEMENT_SHP_LAMB93_FXX_2015-12-01.7z",
                  website="https://wxs-telechargement.ign.fr/oikr5jryiph0iwhw36053ptm/telechargement/inspire/" + \
                          "GEOFLA_THEME-DEPARTEMENTS_2015_2$GEOFLA_2-1_DEPARTEMENT_SHP_LAMB93_FXX_2015-12-01/file/")
except Exception as e:
    # in case the site is not reachable
    download_data("GEOFLA_2-1_DEPARTEMENT_SHP_LAMB93_FXX_2015-12-01.7z", website="xd")

from pyquickhelper.filehelper import un7zip_files
try:
    un7zip_files("GEOFLA_2-1_DEPARTEMENT_SHP_LAMB93_FXX_2015-12-01.7z", where_to="shapefiles")
    departements = 'shapefiles/GEOFLA_2-1_DEPARTEMENT_SHP_LAMB93_FXX_2015-12-01/GEOFLA/1_DONNEES_LIVRAISON_2015/' + \
                   'GEOFLA_2-1_SHP_LAMB93_FR-ED152/DEPARTEMENT/DEPARTEMENT.shp'
except FileNotFoundError as e:
    # This instruction may fail; in that case, fall back to a copy of the file.
    import warnings
    warnings.warn("Plan B because of " + str(e))
    download_data("DEPARTEMENT.zip")
    departements = "DEPARTEMENT.shp"

import os
if not os.path.exists(departements):
    raise FileNotFoundError("Unable to find '{0}'".format(departements))

import shapefile
shp = departements
r = shapefile.Reader(shp)
shapes = r.shapes()
records = r.records()
records[0]
_doc/notebooks/sessions/seance6_graphes_correction.ipynb
sdpython/actuariat_python
mit
The problem is that the codes are hard to match to the election results. The Wikipedia page for Bas-Rhin associates it with the code 67. Bas-Rhin is spelled BAS RHIN in the list of results. The département code does not appear in the downloaded shapefiles. We therefore have to match on the département name: everything is lowercased, and spaces and hyphens are removed.
hollande_gagnant_clean = {k.lower().replace("-", "").replace(" ", ""): v
                          for k, v in hollande_gagnant.items()}
list(hollande_gagnant_clean.items())[:5]
_doc/notebooks/sessions/seance6_graphes_correction.ipynb
sdpython/actuariat_python
mit
And since accents must be removed as well, we take inspiration from the remove_diacritic function:
import unicodedata

def retourne_vainqueur(nom_dep):
    s = nom_dep.lower().replace("-", "").replace(" ", "")
    nkfd_form = unicodedata.normalize('NFKD', s)
    only_ascii = nkfd_form.encode('ASCII', 'ignore')
    s = only_ascii.decode("utf8")
    if s in hollande_gagnant_clean:
        return hollande_gagnant_clean[s]
    else:
        keys = list(sorted(hollande_gagnant_clean.keys()))
        keys = [_ for _ in keys if _[0].lower() == s[0].lower()]
        print("could not determine winner for ", nom_dep, "*", s, "*", " --- ", keys[:5])
        return None

import math

def lambert932WGPS(lambertE, lambertN):
    class constantes:
        GRS80E = 0.081819191042816
        LONG_0 = 3
        XS = 700000
        YS = 12655612.0499
        n = 0.7256077650532670
        C = 11754255.4261

    delX = lambertE - constantes.XS
    delY = lambertN - constantes.YS
    gamma = math.atan(-delX / delY)
    R = math.sqrt(delX * delX + delY * delY)
    latiso = math.log(constantes.C / R) / constantes.n
    sinPhiit0 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * math.sin(1)))
    sinPhiit1 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * sinPhiit0))
    sinPhiit2 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * sinPhiit1))
    sinPhiit3 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * sinPhiit2))
    sinPhiit4 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * sinPhiit3))
    sinPhiit5 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * sinPhiit4))
    sinPhiit6 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * sinPhiit5))
    longRad = math.asin(sinPhiit6)
    latRad = gamma / constantes.n + constantes.LONG_0 / 180 * math.pi
    longitude = latRad / math.pi * 180
    latitude = longRad / math.pi * 180
    return longitude, latitude

lambert932WGPS(99217.1, 6049646.300000001), lambert932WGPS(1242417.2, 7110480.100000001)
_doc/notebooks/sessions/seance6_graphes_correction.ipynb
sdpython/actuariat_python
mit
Then we reuse the code from the exercise statement, changing the color. No color marks the départements that could not be matched.
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
import shapefile
import geopandas
from shapely.geometry import Polygon
from shapely.ops import unary_union

ax = carte_france((8, 8))

shp = departements
r = shapefile.Reader(shp)
shapes = r.shapes()
records = r.records()

polys = []
colors = []
for i, (record, shape) in enumerate(zip(records, shapes)):
    # winner
    dep = retourne_vainqueur(record[2])
    if dep is not None:
        couleur = "red" if dep else "blue"
    else:
        couleur = "gray"
    # the coordinates are in Lambert 93
    if i == 0:
        print(record, shape.parts, couleur)
    geo_points = [lambert932WGPS(x, y) for x, y in shape.points]
    if len(shape.parts) == 1:
        # a single polygon
        poly = Polygon(geo_points)
    else:
        # several polygons that must be merged
        ind = list(shape.parts) + [len(shape.points)]
        pols = [Polygon(geo_points[ind[i]:ind[i + 1]]) for i in range(0, len(shape.parts))]
        try:
            poly = unary_union(pols)
        except Exception as e:
            print("Cannot merge: ", record)
            print([_.length for _ in pols], ind)
            poly = Polygon(geo_points)
    polys.append(poly)
    colors.append(couleur)

data = geopandas.GeoDataFrame(dict(geometry=polys, colors=colors))
geopandas.plotting.plot_polygon_collection(ax, data['geometry'], facecolor=data['colors'],
                                           values=None, edgecolor='black');
_doc/notebooks/sessions/seance6_graphes_correction.ipynb
sdpython/actuariat_python
mit
The function still fails for Corse-du-Sud... I leave it as an example. exercise 3 with the etalab shapefiles The data are available on GEOFLA® Départements, but you can reuse the code above to download them.
# here, the data has to be unzipped manually
# to be finished
_doc/notebooks/sessions/seance6_graphes_correction.ipynb
sdpython/actuariat_python
mit
exercise 4: same code, different widget We use checkboxes to enable or disable each of the two candidates.
import matplotlib.pyplot as plt
from ipywidgets import interact, Checkbox

def plot(candh, cands):
    fig, axes = plt.subplots(1, 1, figsize=(14, 5), sharey=True)
    if candh:
        data_elections.plot(x="rHollandeT1", y="rHollandeT2", kind="scatter", label="H", ax=axes)
    if cands:
        data_elections.plot(x="rSarkozyT1", y="rSarkozyT2", kind="scatter", label="S", ax=axes, c="red")
    axes.plot([0.2, 0.7], [0.2, 0.7], "g--")
    return axes

candh = Checkbox(description='Hollande', value=True)
cands = Checkbox(description='Sarkozy', value=True)
interact(plot, candh=candh, cands=cands);
_doc/notebooks/sessions/seance6_graphes_correction.ipynb
sdpython/actuariat_python
mit
Creating an Estimator from a Keras model <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://tensorflow.google.cn/tutorials/estimator/keras_model_to_estimator" class=""><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" class="">View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/estimator/keras_model_to_estimator.ipynb" class=""><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" class="">Run in Google Colab</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/estimator/keras_model_to_estimator.ipynb" class=""><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" class="">View source on GitHub</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/estimator/keras_model_to_estimator.ipynb" class=""><img src="https://tensorflow.google.cn/images/download_logo_32px.png" class="">Download notebook</a></td> </table> Overview TensorFlow fully supports TensorFlow Estimators, and Estimators can be created from both new and existing tf.keras models. This tutorial contains a complete, minimal example of that process. Setup
import tensorflow as tf import numpy as np import tensorflow_datasets as tfds
site/zh-cn/tutorials/estimator/keras_model_to_estimator.ipynb
tensorflow/docs-l10n
apache-2.0
Create a simple Keras model. In Keras, you assemble layers to build models. A model is (usually) a graph of layers. The most common type of model is a stack of layers: the tf.keras.Sequential model. Build a simple, fully-connected network (i.e. a multi-layer perceptron):
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(3)
])
site/zh-cn/tutorials/estimator/keras_model_to_estimator.ipynb
tensorflow/docs-l10n
apache-2.0
Compile the model and get a summary.
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              optimizer='adam')
model.summary()
site/zh-cn/tutorials/estimator/keras_model_to_estimator.ipynb
tensorflow/docs-l10n
apache-2.0
Create an input function Use the Datasets API to scale to large datasets or multi-device training. Estimators need control of when and how their input pipeline is built. To allow this, they require an "input function" or input_fn. The Estimator will call this function with no arguments. The input_fn must return a tf.data.Dataset.
def input_fn():
    split = tfds.Split.TRAIN
    dataset = tfds.load('iris', split=split, as_supervised=True)
    dataset = dataset.map(lambda features, labels: ({'dense_input': features}, labels))
    dataset = dataset.batch(32).repeat()
    return dataset
site/zh-cn/tutorials/estimator/keras_model_to_estimator.ipynb
tensorflow/docs-l10n
apache-2.0
Test out your input_fn
for features_batch, labels_batch in input_fn().take(1):
    print(features_batch)
    print(labels_batch)
site/zh-cn/tutorials/estimator/keras_model_to_estimator.ipynb
tensorflow/docs-l10n
apache-2.0
Create an Estimator from the tf.keras model. A tf.keras.Model can be trained with the tf.estimator API by converting the model to a tf.estimator.Estimator object with tf.keras.estimator.model_to_estimator.
import tempfile

model_dir = tempfile.mkdtemp()
keras_estimator = tf.keras.estimator.model_to_estimator(
    keras_model=model, model_dir=model_dir)
site/zh-cn/tutorials/estimator/keras_model_to_estimator.ipynb
tensorflow/docs-l10n
apache-2.0
Train and evaluate the Estimator.
keras_estimator.train(input_fn=input_fn, steps=500)
eval_result = keras_estimator.evaluate(input_fn=input_fn, steps=10)
print('Eval result: {}'.format(eval_result))
site/zh-cn/tutorials/estimator/keras_model_to_estimator.ipynb
tensorflow/docs-l10n
apache-2.0
Load and prepare input data and exact real-axis data A few test models are available in the tests directory. Select one of them, then analytically continue the input data in pade.in to the real axis.
def load_and_prepare_input_data(folder, inputfile='pade.in', realaxisfile='exact.dat', delta=0.01):
    '''
    folder       - directory of the test model to analytically continue
    inputfile    - input data
    realaxisfile - exact values on the real axis
    delta        - distance above the real axis where to evaluate the continuations
    '''
    # Matsubara input data
    d = np.loadtxt(folder + inputfile)
    zin = d[:, 0] + d[:, 1]*1j
    fin = d[:, 2] + d[:, 3]*1j
    # (Fortran ordering)
    fin = np.atleast_2d(fin).T
    # exact function
    exact = np.loadtxt(folder + realaxisfile)
    # real-axis energies
    w = exact[:, 0]
    zout = w + delta*1j
    # the exact Green's function
    exact = exact[:, 1] + exact[:, 2]*1j
    return zin, fin, w, zout, exact
python/Pade.ipynb
JohanSchott/Pade_approximants
mit
Green's function for the Hubbard model on the Bethe lattice with no on-site Coulomb interaction (U=0): $G(z) = \frac{8 z}{W^2}\left(1-\sqrt{1-\left(\frac{2z}{W}\right)^{-2}}\right)$, where $W = 2$ is the bandwidth.
zin,fin,w,zout,exact = load_and_prepare_input_data('../tests/betheU0/')
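The analytic formula above can be checked directly: for U = 0 the Bethe-lattice spectral function is the semicircle of half-bandwidth W/2. A standalone sketch, independent of the loaded test data:

```python
import numpy as np

W = 2.0        # bandwidth, as in the formula above
delta = 1e-6   # small shift above the real axis

def bethe_g(z):
    # G(z) = (8z/W^2) * (1 - sqrt(1 - (2z/W)**-2)), evaluated just above the real axis
    return 8 * z / W**2 * (1 - np.sqrt(1 - (2 * z / W)**-2))

w = 0.5                          # a frequency inside the band (|w| < W/2)
g = bethe_g(w + 1j * delta)
rho = -g.imag / np.pi            # spectral function from the retarded G

# Expected semicircular density of states with half-bandwidth D = W/2:
D = W / 2
expected = 2 / (np.pi * D) * np.sqrt(1 - (w / D)**2)
assert g.imag < 0                # causal: Im G < 0 above the real axis
assert abs(rho - expected) < 1e-4
```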
python/Pade.ipynb
JohanSchott/Pade_approximants
mit
Non-interacting Haldane model with Van Hove singularities
zin,fin,w,zout,exact = load_and_prepare_input_data('../tests/haldane_model/')
python/Pade.ipynb
JohanSchott/Pade_approximants
mit
Sm7 self-energy Calculated from DFT+DMFT with the HIA as an impurity solver.
zin,fin,w,zout,exact = load_and_prepare_input_data('../tests/Sm7/')
python/Pade.ipynb
JohanSchott/Pade_approximants
mit
two-pole function: $G(z) = \frac{1}{2}(1/(z-0.75)+1/(z-1.25))$
zin,fin,w,zout,exact = load_and_prepare_input_data('../tests/two-poles_wc1_dw0.5/')
python/Pade.ipynb
JohanSchott/Pade_approximants
mit
Call the main analytical continuation function
fmeans, fs, masks = pade(zin, fin, zout)
print('----------- After pade routine ----------')
print('(#physical continuations, #E, #orbitals) = ', np.shape(fs))
print('#picked continuations = ', np.sum(masks))
print('(#E, #orbitals) = ', np.shape(fmeans))
python/Pade.ipynb
JohanSchott/Pade_approximants
mit
Profile the main analytical continuation function
%timeit pade(zin,fin,zout)
python/Pade.ipynb
JohanSchott/Pade_approximants
mit
Plot spectral function $\rho(\omega) = -\frac{1}{\pi}\mathrm{Im}f(\omega+i \delta)$
plt.figure()
# pick one orbital to plot
col = 0
# plot all physical continuations
plt.plot(w, -1/np.pi*np.imag(fs[:, :, col].T), '-g')
plt.plot(w, -1/np.pi*np.imag(fs[0, :, col].T), '-g', label='all physical continuations')
# plot all picked continuations
plt.plot(w, -1/np.pi*np.imag(fs[masks[:, col], :, col].T), '-b')
plt.plot(w, -1/np.pi*np.imag((fs[masks[:, col], :, col].T)[:, 0]), '-b', label='all picked continuations')
# plot the average of the picked continuations
plt.plot(w, -1/np.pi*np.imag(fmeans[:, col]), '-r', linewidth=2, label='average')
# plot the exact spectral function
plt.plot(w, -1/np.pi*np.imag(exact), '-k', linewidth=1.5, label='exact')
plt.xlabel(r'$\omega$')
plt.ylabel(r'$\rho$')
plt.legend()
plt.tight_layout()
plt.show()
python/Pade.ipynb
JohanSchott/Pade_approximants
mit
Save $f(z_\mathrm{out})$ to file
tmp = np.vstack([w,(fmeans.real).T,(fmeans.imag).T]).T np.savetxt('out.dat',tmp)
python/Pade.ipynb
JohanSchott/Pade_approximants
mit
Using the UncertaintyAnalysis class Methods for visualising the lithological variability produced by perturbation of the input datasets, which can be considered a proxy for lithological uncertainty, are implemented in the UncertaintyAnalysis class. Initialising a new UncertaintyAnalysis experiment is no different from initialising a MonteCarlo experiment (as UncertaintyAnalysis uses this class extensively) - we load a history file and an associated csv file defining the PDFs from which to sample the input data.
reload(pynoddy.history)
reload(pynoddy.output)
reload(pynoddy.experiment.uncertainty_analysis)
reload(pynoddy)
from pynoddy.experiment.uncertainty_analysis import UncertaintyAnalysis

# the model itself is now part of the repository, in the examples directory:
history_file = os.path.join(repo_path, "examples/fold_dyke_fault.his")
# this file defines the statistical distributions to sample from
params = os.path.join(repo_path, "examples/fold_dyke_fault.csv")

uc_experiment = UncertaintyAnalysis(history_file, params)

# plot the initial model
uc_experiment.change_cube_size(55)
uc_experiment.plot_section(direction='y', position='center')
docs/notebooks/Test-Uncertainty-Analysis.ipynb
Leguark/pynoddy
gpl-2.0
The next step is to perform the Monte Carlo perturbation of this initial model and use it to estimate uncertainty. The sampling is wrapped in the estimate_uncertainty function - all we need to supply is the number of trials to produce. Realistically, several thousand samples are typically necessary before the sampling can be considered representative; however, to speed things up a bit we'll produce only 10 model samples.
uc_experiment.estimate_uncertainty(10,verbose=False)
docs/notebooks/Test-Uncertainty-Analysis.ipynb
Leguark/pynoddy
gpl-2.0
Now, a quick description of what we have done... the estimate_uncertainty function generates the specified number (10) of randomly varying models using the MonteCarlo class. It then loads the output and calculates the lithology present at each voxel of each model. This information is used to compute probability maps for each lithology at each point in the model. This can be seen if we plot the probability of observing lithology 3:
uc_experiment.plot_probability(4, direction='y',position='center')
docs/notebooks/Test-Uncertainty-Analysis.ipynb
Leguark/pynoddy
gpl-2.0
These probability maps can then be used to calculate the information entropy of each cell, which can be plotted as follows:
uc_experiment.plot_entropy(direction='y',position='center')
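Under the hood, the entropy of a cell is the Shannon information entropy H = -Σᵢ pᵢ log₂ pᵢ of its per-lithology probabilities: zero where one lithology is certain, maximal where all are equally likely. A minimal standalone sketch (the probability array below is made up, not pynoddy output):

```python
import numpy as np

def cell_entropy(probs, eps=1e-12):
    """Shannon entropy (in bits) of per-lithology probabilities,
    taken along the last axis; eps guards against log(0)."""
    p = np.clip(probs, eps, 1.0)
    return -(p * np.log2(p)).sum(axis=-1)

# Two voxels: one certain, one maximally uncertain over 2 lithologies.
probs = np.array([[1.0, 0.0],
                  [0.5, 0.5]])
H = cell_entropy(probs)
assert abs(H[0]) < 1e-6         # no uncertainty -> zero entropy
assert abs(H[1] - 1.0) < 1e-6   # 50/50 over two outcomes -> 1 bit
```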
docs/notebooks/Test-Uncertainty-Analysis.ipynb
Leguark/pynoddy
gpl-2.0
Scripting As opposed to R, with Python it is extremely easy to create handy scripts. Those are very useful when working from the command line and/or in HPC (high performance computing) environments. A word on the Unix philosophy When writing a script, it's always a good idea to follow the Unix philosophy, which emphasizes simplicity, interoperability and modularity instead of overengineering. In short: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface. If you have even a basic knowledge of the bash (or bash-like) command line, you are probably already familiar with these concepts. Consider the following example: > curl --silent "http://wodaklab.org/cyc2008/resources/CYC2008_complex.tab" | head -n 5 ORF Name Complex PubMed_id Method GO_id GO_term Jaccard_Index YKR068C BET3 TRAPP complex 10727015 "Affinity Capture-Western,Affinity Capture-MS" GO:0030008 TRAPP complex 1 YML077W BET5 TRAPP complex YDR108W GSG1 TRAPP complex YGR166W KRE11 TRAPP complex Here we have chained two command line tools: curl to stream a text file from the internet, piped into head to show only the first 5 rows. An ideal Python script should follow the same principles. Imagine we wanted to substitute head with a little script that transforms the text file so that for each complex name (Name column) we report all the genes belonging to that complex. For instance: > curl --silent "http://wodaklab.org/cyc2008/resources/CYC2008_complex.tab" | ./cyc2txt | head -n 5 SIR YLR442C,YDL042C,YDR227W SIP YGL208W,YDR422C PAC1 YGR078C,YDR488C SIT YDL047W CPA YJR109C,YOR303W Parsing the command line As shown in the example above, command line tools often accept options and even input files (i.e. head -n 5). Parsing these arguments with the necessary flexibility is not trivial.
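A minimal sketch of what such a cyc2txt filter could look like (the column layout is taken from the header shown above; the function name and grouping key are illustrative, not the actual script):

```python
import collections

def group_genes(lines):
    """Group ORF identifiers by complex from a tab-separated stream
    with columns ORF, Name, Complex, ... (as in CYC2008_complex.tab)."""
    genes = collections.OrderedDict()
    for line in lines:
        fields = line.rstrip('\n').split('\t')
        if len(fields) < 3 or fields[0] == 'ORF':
            continue                      # skip the header and short rows
        genes.setdefault(fields[2], []).append(fields[0])
    return genes

# In the real script the input would come from sys.stdin, e.g.:
#   for name, orfs in group_genes(sys.stdin).items():
#       print('%s\t%s' % (name, ','.join(orfs)))
rows = ['ORF\tName\tComplex\n',
        'YKR068C\tBET3\tTRAPP complex\n',
        'YML077W\tBET5\tTRAPP complex\n']
assert group_genes(rows) == {'TRAPP complex': ['YKR068C', 'YML077W']}
```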
Writing a command line argument parser that handles positional and optional arguments, potentially with some checks on their types, is not trivial.
import collections

def parse_args(cmd_line):
    Args = collections.namedtuple('Args', ['n', 'in_file'])
    n_trigger = False
    # default value for "n"
    n = 1
    for arg in cmd_line:
        if n_trigger:
            n = int(arg)
            n_trigger = False
            continue
        if arg == '-n':
            # next argument belongs to "-n"
            n_trigger = True
            continue
        else:
            # it must be the positional argument
            in_file = arg
    return Args(n=n, in_file=in_file)

# imaginary command line
cmd_line = '-n 5 myfile.txt'
parse_args(cmd_line.split())

# imaginary command line with multiple input files
cmd_line = '-n 5 myfile.txt another_one.txt'
parse_args(cmd_line.split())
notebooks/5-Scripting.ipynb
mgalardini/2017_python_course
gpl-2.0
Note: in real life we would read the arguments from the command line as follows: import sys cmd_line = ' '.join(sys.argv[1:]) sys.argv[0] is the name of the script, as called from the command line. We need to extend our original function to account for additional positional arguments. We'll also add an extra boolean option.
def parse_args(cmd_line):
    Args = collections.namedtuple('Args', ['n', 'verbose', 'in_file', 'another_file'])
    n_trigger = False
    # default value for "n"
    n = 1
    # default value for "verbose"
    verbose = False
    # list to hold the positional arguments
    positional = []
    for arg in cmd_line:
        if n_trigger:
            n = int(arg)
            n_trigger = False
            continue
        if arg == '-n':
            # next argument belongs to "-n"
            n_trigger = True
        elif arg == '--verbose' or arg == '-v':
            verbose = True
        else:
            # it must be a positional argument
            positional.append(arg)
    return Args(n=n, verbose=verbose,
                in_file=positional[0],
                another_file=positional[1])

# imaginary command line with multiple input files
cmd_line = '-n 5 myfile.txt another_one.txt'
parse_args(cmd_line.split())
notebooks/5-Scripting.ipynb
mgalardini/2017_python_course
gpl-2.0
What if the --verbose option can be called multiple times to modulate the amount of verbosity of our script?
def parse_args(cmd_line):
    Args = collections.namedtuple('Args', ['n', 'verbose', 'in_file', 'another_file'])
    n_trigger = False
    # default value for "n"
    n = 1
    # default value for "verbose"
    verbose = 0
    # list to hold the positional arguments
    positional = []
    for arg in cmd_line:
        if n_trigger:
            n = int(arg)
            n_trigger = False
            continue
        if arg == '-n':
            # next argument belongs to "-n"
            n_trigger = True
        elif arg == '--verbose' or arg == '-v':
            verbose += 1
        else:
            # it must be a positional argument
            positional.append(arg)
    return Args(n=n, verbose=verbose,
                in_file=positional[0],
                another_file=positional[1])

# imaginary command line with increased verbosity
cmd_line = '-n 5 -v -v myfile.txt another_one.txt'
parse_args(cmd_line.split())

# by convention we can also increase verbosity in the following manner
cmd_line = '-n 5 -vvv myfile.txt another_one.txt'
parse_args(cmd_line.split())
Let's add this additional functionality; hopefully you are starting to see how complicated and bug-prone writing your own command-line parser is!
def parse_args(cmd_line): Args = collections.namedtuple('Args', ['n', 'verbose', 'in_file', 'another_file']) n_trigger = False # default value for "n" n = 1 # default value for "verbose" verbose = 0 # list to hold the positional arguments positional = [] for arg in cmd_line: if n_trigger: n = int(arg) n_trigger = False continue if arg == '-n': # next argument belongs to "-n" n_trigger = True elif arg == '--verbose' or arg.startswith('-v'): # '-vvv' counts as three increments, '-v' and '--verbose' as one if len(arg) > 2 and set(arg[1:]) == {'v'}: verbose += len(arg[1:]) else: verbose += 1 else: # it must be the positional argument positional.append(arg) return Args(n=n, verbose=verbose, in_file=positional[0], another_file=positional[1]) # by convention we can also increase verbosity in the following manner cmd_line = '-n 5 -vvv myfile.txt another_one.txt' parse_args(cmd_line.split())
The argparse module Python has a very useful module to create scripts, and it is included in the standard library: argparse. It allows you to create command-line parsers that are concise yet very flexible and powerful. Let's rewrite our last example using argparse.
import argparse def parse_args(cmd_line): parser = argparse.ArgumentParser(prog='fake_script', description='An argparse test') # positional arguments parser.add_argument('my_file', help='My input file') parser.add_argument('another_file', help='Another input file') # optional arguments parser.add_argument('-n', type=int, default=1, help='Number of Ns [Default: 1]') parser.add_argument('-v', '--verbose', action='count', default=0, help='Increase verbosity level') return parser.parse_args(cmd_line) # by convention we can also increase verbosity in the following manner cmd_line = '-n 5 -vvv myfile.txt another_one.txt' parse_args(cmd_line.split())
By indicating the type of the -n option, argparse checks its type for us.
# by convention we can also increase verbosity in the following manner cmd_line = '-n not_an_integer -vvv myfile.txt another_one.txt' parse_args(cmd_line.split())
...and we also get an -h (help) option for free, already formatted!
# by convention we can also increase verbosity in the following manner cmd_line = '-h' parse_args(cmd_line.split())
More argparse examples Boolean arguments Sometimes you want to add a parameter to your script that is a simple trigger and doesn't receive any value. The action keyword argument in argparse allows us to implement such behavior. argparse's documentation has more examples on how to use the action argument.
def parse_args(cmd_line): parser = argparse.ArgumentParser(prog='fake_script', description='An argparse example') # boolean option parser.add_argument('-f', '--force', action='store_true', default=False, help='Force file creation') return parser.parse_args(cmd_line) cmd_line = '-f' parse_args(cmd_line.split()) cmd_line = '' parse_args(cmd_line.split())
Multiple choices Sometimes you would like to not only define a type for an option, but also allow only certain values from a list.
def parse_args(cmd_line): parser = argparse.ArgumentParser(prog='fake_script', description='An argparse example') # multiple choices optional arguments parser.add_argument('-m', '--metric', choices=['jaccard', 'hamming'], default='jaccard', help='Distance metric [Default: jaccard]') parser.add_argument('-b', '--bootstraps', type=int, choices=range(10, 21), default=10, help='Bootstraps [Default: 10]') return parser.parse_args(cmd_line) cmd_line = '-m euclidean' parse_args(cmd_line.split()) cmd_line = '-m hamming -b 15' parse_args(cmd_line.split())
Flexible number of arguments: nargs In some cases you might want to have multiple values assigned to an option: for that the nargs keyword argument is a flexible option.
def parse_args(cmd_line): parser = argparse.ArgumentParser(prog='fake_script', description='An argparse example') parser.add_argument('fastq', nargs='+', help='Input fastq files') parser.add_argument('-m', '--mate-pairs', nargs='*', help='Mate pairs fastq files') return parser.parse_args(cmd_line) cmd_line = 'r1.fq.gz r2.fq.gz' parse_args(cmd_line.split()) cmd_line = 'r1.fq.gz r2.fq.gz -m m1.fq.gz m2.fq.gz' parse_args(cmd_line.split()) cmd_line = '-m m1.fq.gz m2.fq.gz' parse_args(cmd_line.split()) cmd_line = '-h' parse_args(cmd_line.split())
One script to rule them all: subcommands Some software contains more than one utility at a time, which is handy if you don't want to remember many separate commands and their options. Common command line examples are git (e.g. git commit and git push) and in bioinformatics there are many more examples (bwa, bedtools, samtools, ...). If you are developing a program that performs many related tasks, it might be a good idea to have them as functions/classes in a module, and call them through a single script with many subcommands. Here's an example:
def init(options): print('Init the project') print(options.name, options.description) def add(options): print('Add an entry') print(options.ID, options.name, options.description, options.color) def parse_args(cmd_line): parser = argparse.ArgumentParser(prog='fake_script', description='An argparse example') subparsers = parser.add_subparsers() parser_init = subparsers.add_parser('init', help='Initialize the project') parser_init.add_argument('-n', '--name', default='Project', help='Project name') parser_init.add_argument('-d', '--description', default='My project', help='Project description') parser_init.set_defaults(func=init) parser_add = subparsers.add_parser('add', help='Add an entry') parser_add.add_argument('ID', help='Entry ID') parser_add.add_argument('-n', '--name', default='', help='Entry name') parser_add.add_argument('-d', '--description', default = '', help='Entry description') parser_add.add_argument('-c', '--color', default='red', help='Entry color') parser_add.set_defaults(func=add) return parser.parse_args(cmd_line) cmd_line = '-h' parse_args(cmd_line.split()) cmd_line = 'init -h' parse_args(cmd_line.split()) cmd_line = 'add -h' parse_args(cmd_line.split()) cmd_line = 'init -n my_project -d awesome' options = parse_args(cmd_line.split()) options.func(options) cmd_line = 'add test -n entry1 -d my_entry' options = parse_args(cmd_line.split()) options.func(options)
Logging A good script is able to keep the user informed about "what's going on" during its execution. Given that a script might use the standard output as a way to output the results of the script, using the print function might not always be an option. In fact it is good practice to at least output the script execution messages to stderr, using the sys module. This allows you to redirect the stdout to a file or another program/script, while being able to monitor the execution messages or to redirect them to a different file.
import sys sys.stderr.write('Running an imaginary analysis on the input genes\n') # the result of our imaginary analysis value = 400 # regular output of our imaginary script print('\t'.join(['gene1', 'gene2', str(value)]))
It is also a good idea to return a non-zero exit code when the script is encountering an error. By default, Python will return a non-zero exit code when the script ends because of an uncaught exception. If you are catching it and want to exit in a slightly more graceful way you can use the sys.exit function.
user_provided_value = 'a' try: # impossible parameter = int(user_provided_value) except ValueError: sys.stderr.write('Invalid type provided\n') sys.exit(1)
If you want to be more flexible with your logging, you can use the logging module, present in python's standard library. It allows the user to: redirect the logs to file and standard error at the same time modulate the verbosity of the output add custom formatters (including color with minimal tweaking)
import logging # create the logger logger = logging.getLogger('fake_script') # set the verbosity level logger.setLevel(logging.DEBUG) # we want the log to be redirected # to std. err. ch = logging.StreamHandler() # we want a rich output with additional information formatter = logging.Formatter('%(asctime)s - %(name)s - [%(levelname)s] - %(message)s', '%Y-%m-%d %H:%M:%S') ch.setFormatter(formatter) logger.addHandler(ch) # debug message, will be shown given the level we have set logger.debug('test') logger.setLevel(logging.WARNING) # debug message, will not be shown given the level we have set logger.debug('not-so-interesting debugging information') # warning message, will be shown logger.warning('this might break our script, but i\'m not sure')
The logging levels available are (in order of severity): DEBUG INFO WARNING ERROR CRITICAL And as you might have imagined there are corresponding functions to log messages with those levels of severity. Have a look at python's documentation for a more in-depth description of the module and its capabilities. Script template Find below a minimal script template, including a utility function to parse arguments and minimal logging (can also be found here).
#!/usr/bin/env python '''Description here''' import logging import argparse def get_options(): description = '' parser = argparse.ArgumentParser(description=description) parser.add_argument('name', help='Name') return parser.parse_args() def set_logging(level=logging.INFO): logger = logging.getLogger() logger.setLevel(level) ch = logging.StreamHandler() logger.addHandler(ch) return logger if __name__ == "__main__": options = get_options() logger = set_logging()
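The severity levels listed above map directly to logger methods (debug, info, warning, error, critical). A self-contained sketch, using an in-memory stream as the handler target so the effect of the level threshold is visible (the logger name is arbitrary):

```python
import io
import logging

stream = io.StringIO()
logger = logging.getLogger('levels_demo')
logger.setLevel(logging.WARNING)       # messages below WARNING are dropped
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter('[%(levelname)s] %(message)s'))
logger.addHandler(handler)

logger.debug('hidden')                 # below the threshold, not emitted
logger.error('something went wrong')   # at or above the threshold, emitted
print(stream.getvalue())
```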
Numpy operations If we apply a NumPy function on a Pandas dataframe, the result will be another Pandas dataframe with the indices preserved.
import numpy as np import pandas as pd df = pd.DataFrame(np.random.randint(0, 10, (3, 4)), columns=['A', 'B', 'C', 'D']) df np.cos(df * np.pi/2) - 1
3-2_Dataframes.ipynb
iutzeler/Introduction-to-Python-for-Data-Sciences
mit
Arithmetic operations Arithmetic operations can also be performed either with <tt>+ - / *</tt> or with dedicated <tt>add</tt>, <tt>multiply</tt>, etc. methods
A = pd.DataFrame(np.random.randint(0, 20, (2, 2)), columns=list('AB')) A B = pd.DataFrame(np.random.randint(0, 10, (3, 3)), columns=list('BAC')) B A+B
The pandas arithmetic functions also have an option to fill missing values by replacing the missing one in either of the dataframes by some value.
A.add(B, fill_value=0.0)
Appending, Concatenating, and Merging Thanks to naming, dataframes can be easily added, merged, etc. However, if some entries are missing (columns or indices), the operations may get complicated. Here the most standard situations are covered; take a look at the documentation (notably this one on merging, appending, and concatenating). Appending is for adding the lines of one dataframe to those of another one with the same columns.
A = pd.DataFrame(np.random.randint(0, 20, (2, 2)), columns=list('AB')) A2 = pd.DataFrame(np.random.randint(0, 20, (3, 2)), columns=list('AB')) print("A:\n",A,"\nA2:\n",A2) A.append(A2) # this does not "append to A" but creates a new dataframe # note: DataFrame.append was removed in pandas 2.0; pd.concat([A, A2]) is the equivalent there
Sometimes, indexes do not matter; they can be reset using <tt>ignore_index=True</tt>.
A.append(A2,ignore_index=True)
Concatenating is for adding lines and/or columns of multiple datasets (it is a generalization of appending)
A = pd.DataFrame(np.random.randint(0, 20, (2, 2)), columns=list('AB')) A2 = pd.DataFrame(np.random.randint(0, 20, (3, 2)), columns=list('AB')) A3 = pd.DataFrame(np.random.randint(0, 20, (1, 3)), columns=list('CAD')) print("A:\n",A,"\nA2:\n",A2,"\nA3:\n",A3)
The most important settings of the <tt>concat</tt> function are <tt>pd.concat(objs, axis=0, join='outer', ignore_index=False)</tt> where <br/> . objs is the list of dataframes to concatenate <br/> . axis is the axis along which to concatenate: 0 (default) for the lines, 1 for the columns <br/> . join decides whether we keep all columns/indices on the other axis ('outer', default) or only their intersection ('inner') <br/> . ignore_index decides whether we keep the previous names (False, default) or assign new ones (True) For a detailed view see this doc on merging, appending, and concatenating
pd.concat([A,A2,A3],ignore_index=True) pd.concat([A,A2,A3],axis=1) pd.concat([A,A2,A3],axis=1,ignore_index=True,join='inner')
Merging is for putting together two dataframes with hopefully common data For a detailed view see this doc on merging, appending, and concatenating
df1 = pd.DataFrame({'employee': ['Bob', 'Jake', 'Lisa', 'Sue'], 'group': ['Accounting', 'Engineering', 'Engineering', 'HR']}) df1 df2 = pd.DataFrame({'employee': ['Lisa', 'Bob', 'Jake', 'Sue'], 'hire_date': [2004, 2008, 2012, 2014]}) df2 df3 = pd.merge(df1,df2) df3 df4 = pd.DataFrame({'group': ['Accounting', 'Engineering', 'HR'], 'supervisor': ['Carly', 'Guido', 'Steve']}) df4 pd.merge(df3,df4)
Preparing the Data Before exploring the data, it is essential to verify its soundness; indeed, if it has missing or replicated data, the results of our tests may not be accurate. Pandas provides a collection of methods to verify the sanity of the data (recall that when data is missing for an entry, it is noted as NaN, and thus any further operation involving it will be NaN). To explore some typical problems in a dataset, I messed with a small part of the MovieLens dataset. The ratings_mess.csv file contains 4 columns: * userId id of the user, integer greater than 1 * movieId id of the movie, integer greater than 1 * rating rating of the user to the movie, float between 0.0 and 5.0 * timestamp timestamp, integer and features (man-made!) errors, some of them minor some of them major.
ratings = pd.read_csv('data/ml-small/ratings_mess.csv') ratings.head(7) # displays the top n lines of a dataframe, 5 by default
Missing values Pandas provides functions that check if the values are missing: isnull(): Generate a boolean mask indicating missing values notnull(): Opposite of isnull()
ratings.isnull().head(5)
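A common soundness check chains isnull() with sum() to count the missing values per column; a self-contained sketch on a toy frame (the values here are made up, not from the ratings file):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({'userId': [1, 2, np.nan],
                    'rating': [4.0, np.nan, np.nan]})
# isnull() gives a boolean mask; summing it counts True values per column
missing_per_column = toy.isnull().sum()
print(missing_per_column)
```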
Carefully pruning data Now that we have to prune lines of our data, this will be done using dropna() through dataframe.dropna(subset=["col_1","col_2"],inplace=True) which drops all rows with at least one missing value in the columns col1, col2 of dataframe in place, that is without copy. Warning: this function deletes any line with at least one missing value in those columns, which is not always desirable. Also, with inplace=True, it is applied in place, meaning that it modifies the dataframe it is applied to; it is thus an irreversible operation; drop inplace=True to create a copy or see the result before applying it. For instance here, userId,movieId,rating are essential whereas the timestamp is not (it can be dropped for the prediction process). Thus, we will delete the lines where one of userId,movieId,rating is missing and fill the timestamp with 0 when it is missing.
ratings.dropna(subset=["userId","movieId","rating"],inplace=True) ratings.head(5)
To fill missing data (from a certain column), the recommended way is to use fillna() through dataframe["col"].fillna(value,inplace=True) which replaces all missing values in the column col of dataframe by value in place, that is without copy (again this is irreversible; to use the copy version use inplace=False).
ratings["timestamp"].fillna(0,inplace=True) ratings.head(7)
This indeed gives the correct result; however, the line indexing now has gaps. The indexes can be reset with reset_index(inplace=True,drop=True)
ratings.reset_index(inplace=True,drop=True) ratings.head(7)
Improper values Even without the missing values, some lines are problematic as they feature values outside of the prescribed range (userId id of the user, integer greater than 1; movieId id of the movie, integer greater than 1; rating rating of the user to the movie, float between 0.0 and 5.0; timestamp timestamp, integer)
ratings[ratings["userId"]<1] # Identifying a problem
Now, we drop the corresponding line, with drop by drop(problematic_row.index, inplace=True). Warning: Do not forget .index and inplace=True
ratings.drop(ratings[ratings["userId"]<1].index, inplace=True) ratings.head(7) pb_rows = ratings[ratings["movieId"]<1] pb_rows ratings.drop(pb_rows.index, inplace=True)
And finally the ratings.
pb_rows = ratings[ratings["rating"]<0] pb_rows2 = ratings[ratings["rating"]>5] tot_pb_rows = pb_rows.append(pb_rows2) tot_pb_rows ratings.drop(tot_pb_rows.index, inplace=True) ratings.reset_index(inplace=True,drop=True)
We finally have our dataset cured! Let us save it for further use. to_csv saves as CSV into some file; index=False drops the index names as we did not specify them.
ratings.to_csv("data/ml-small/ratings_cured.csv",index=False)
Basic Statistics With our cured dataset, we can begin exploring.
ratings = pd.read_csv('data/ml-small/ratings_cured.csv') ratings.head()
The following table summarizes some other built-in Pandas aggregations: | Aggregation | Description | |--------------------------|---------------------------------| | count() | Total number of items | | first(), last() | First and last item | | mean(), median() | Mean and median | | min(), max() | Minimum and maximum | | std(), var() | Standard deviation and variance | | mad() | Mean absolute deviation | | prod() | Product of all items | | sum() | Sum of all items | These are all methods of DataFrame and Series objects, and description also provides a quick overview.
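A quick self-contained check of a few of these aggregations on a toy Series (independent of the ratings data):

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])
# count, sum, mean, median, prod from the table above
print(s.count(), s.sum(), s.mean(), s.median(), s.prod())
```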
ratings.describe()
We see that these statistics do not make sense for all columns. Let us drop the timestamp and examine the ratings.
ratings.drop("timestamp",axis=1,inplace=True) ratings.head() ratings["rating"].describe()
GroupBy These ratings are linked to users and movies, in order to have a separate view per user/movie, grouping has to be used. The GroupBy operation (that comes from SQL) accomplishes: The split step involves breaking up and grouping a DataFrame depending on the value of the specified key. The apply step involves computing some function, usually a sum, median, mean, etc. within the individual groups. The combine step merges the results of these operations into an output array. <img src="img/GroupBy.png"> <p style="text-align: right">Source: [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas</p>
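The three steps can be traced on a toy frame (hypothetical key/data columns): groupby splits on the key, mean is applied within each group, and the results are combined into a Series indexed by the key:

```python
import pandas as pd

df = pd.DataFrame({'key':  ['A', 'B', 'A', 'B'],
                   'data': [1, 2, 3, 4]})
# split on 'key', apply mean within each group, combine into one Series
combined = df.groupby('key')['data'].mean()
print(combined)  # A -> (1 + 3) / 2 = 2.0, B -> (2 + 4) / 2 = 3.0
```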
ratings.head()
So to get the mean of the ratings per user, the command is
ratings.groupby("userId")["rating"].mean()
Filtering Filtering is the action of deleting rows depending on a boolean function. For instance, the following removes users who rated only one movie.
ratings.groupby("userId")["rating"].count() def filter_func(x): return x["rating"].count() >= 2 filtered = ratings.groupby("userId").filter(filter_func) filtered filtered.groupby("userId")["rating"].count()
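Since ratings is loaded from a file, here is the same filtering pattern on a self-contained toy frame; groups failing the predicate are dropped wholesale:

```python
import pandas as pd

r = pd.DataFrame({'userId': [1, 1, 2],
                  'rating': [3.0, 4.0, 5.0]})
# keep only users with at least two ratings; user 2 has one and is dropped
kept = r.groupby('userId').filter(lambda g: g['rating'].count() >= 2)
print(kept)
```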
Transformations Transforming is the actions of applying a transformation (sic). For instance, let us normalize the ratings so that they have zero mean for each user.
ratings.groupby("userId")["rating"].mean() def center_ratings(x): x["rating"] = x["rating"] - x["rating"].mean() return x centered = ratings.groupby("userId").apply(center_ratings) centered.groupby("userId")["rating"].mean()
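pandas also provides a dedicated groupby transform method, which returns a result aligned with the original index and is often a more direct way to center per group than apply; a self-contained sketch on toy data:

```python
import pandas as pd

r = pd.DataFrame({'userId': [1, 1, 2, 2],
                  'rating': [3.0, 5.0, 2.0, 4.0]})
# subtract each user's mean rating from their own ratings
centered = r['rating'] - r.groupby('userId')['rating'].transform('mean')
print(centered.tolist())  # [-1.0, 1.0, -1.0, 1.0]
```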
Aggregations [*] Aggregations let you apply several aggregation operations at once.
ratings.groupby("userId")["rating"].aggregate([min,max,np.mean,np.median,len])
Exercises Exercise: Bots Discovery In the dataset ratings_bots.csv, some users may be bots. To help a movie succeed they add ratings (often favorable ones). To get a better recommendation, we try to remove them. Count the users with a mean rating above 4.7/5 and delete them hint: the nunique function may be helpful to count Delete multiple reviews of a movie by a single user by replacing them with only the first one. What is the proportion of potential bots among the users? hint: the groupby function can be applied to several columns, also reset_index(drop=True) removes the grouby indexing. hint: remember the loc function, e.g. df.loc[df['userId'] == 128] returns a dataframe of the rows where the userId is 128; and df.loc[df['userId'] == 128].loc[samerev['movieId'] == 3825] returns a dataframe of the rows where the userId is 128 and the movieID is 3825. In total, 17 ratings have to be removed. For instance, user 128 has 3 ratings of the movie 3825 This dataset has around 100 000 ratings so hand picking won't do!
import pandas as pd import numpy as np ratings_bots = pd.read_csv('data/ml-small/ratings_bots.csv')
Exercise: Planets discovery We will use the Planets dataset, available via the Seaborn package. It provides information on how astronomers found new planets around stars, exoplanets. Display median, mean and quantile information for these planets' orbital periods, masses, and distances. For each method, display statistics on the years planets were discovered using this technique.
import pandas as pd import numpy as np planets = pd.read_csv('data/planets.csv') print(planets.shape) planets.head()
Option to save plots upon regeneration (makes a new file since a date string is appended):
import datetime import os import matplotlib as mpl import pandas as pd mpl.rcParams.update({ 'font.size': 16, 'axes.titlesize': 17, 'axes.labelsize': 15, 'xtick.labelsize': 10, 'ytick.labelsize': 13, #'font.family': 'Lato', 'font.weight': 600, 'axes.labelweight': 600, 'axes.titleweight': 600 #'figure.autolayout': True }) regen_plots = False if regen_plots: date_string = datetime.date.today().strftime("%y%m%d") plotpath = "./plots/" + date_string + "_read_alignments_to_bins"+ "/" print(plotpath) if not os.path.exists(plotpath): os.makedirs(plotpath) # Note: we are going to change the names of these files. # I had originally used summary.dat but learned these are RPKM values even though # they are integers. read_counts_path = \ "/gscratch/lidstrom/meta4_bins/analysis/assemble_summaries/summary_counts.xls" gene_data = pd.read_csv(read_counts_path, sep = '\t') gene_data.head(2) gene_data.set_index(['genome', 'locus_tag', 'product'], inplace=True) gene_data.head() sample_sums = pd.DataFrame({'read counts':gene_data.sum(axis=0)}) sample_sums['reads/10^6'] = sample_sums['read counts']/(10**6) sample_sums.head() ax = sample_sums['reads/10^6'].plot.hist(bins = 60) #range=[6.5, 12.5]) fig = ax.get_figure() ax.set_xlabel("million reads mapped to metagenome bin") ax.set_ylabel("# of metagenome bins \n with that many million reads") ax.set_title("The number of reads per bin varies more than an order of magnitude across samples") if regen_plots: fig.savefig(plotpath + 'reads_per_sample--unnormalized.pdf')
ipython_notebooks/160226_reads_mapped_to_genome_bins.ipynb
JanetMatsen/meta4_bins_janalysis
bsd-2-clause
What's the one with fewer than a million?
sample_sums.sort_values(by='reads/10^6', ascending=True) gene_sums = pd.DataFrame({'gene sum':gene_data.sum(axis=1)}) gene_sums['gene sum'] = gene_sums['gene sum'].astype('int') gene_sums = gene_sums[gene_sums['gene sum'] > 0] gene_sums.head() gene_sums.shape fig, ax = plt.subplots() gene_sums['gene sum'].hist(ax=ax, bins=100) ax.set_yscale('log') ax.set_xlabel('number of reads mapped') ax.set_ylabel('number of genes with that many reads') ax.set_title("Most genes have almost no reads") if regen_plots: fig.savefig(plotpath + 'most_genes_have_no_reads--zoom0.pdf') fig, ax = plt.subplots() #ax.set_xlim(0, 10**5) gene_sums[gene_sums['gene sum'] < 10**7]['gene sum'].hist(ax=ax, bins=100) ax.set_yscale('log') ax.set_xlabel('number of reads mapped') ax.set_ylabel('number of genes with that many reads') ax.set_title("Most genes have almost no reads") if regen_plots: fig.savefig(plotpath + 'most_genes_have_no_reads--zoom1.pdf') fig, ax = plt.subplots() #ax.set_xlim(0, 10**5) gene_sums[gene_sums['gene sum'] > 10**3]['gene sum'].hist(ax=ax, bins=100) ax.set_yscale('log') ax.set_xlabel('number of reads mapped') ax.set_ylabel('number of genes with that many reads') ax.set_title("Most genes have almost no reads") if regen_plots: fig.savefig(plotpath + 'most_genes_have_no_reads--zoom_high.pdf') gene_sums[gene_sums['gene sum'] > 5].head() gene_sums[gene_sums['gene sum'] > 10**6].head() fig, ax = plt.subplots() gene_sums[gene_sums['gene sum'] > 0]['gene sum'].hist(ax=ax, bins=100) ax.set_yscale('log') ax.set_xlabel('number of genes with ') fig, ax = plt.subplots() gene_sums[gene_sums['gene sum'] > 10**4]['gene sum'].hist(ax=ax, bins=100) ax.set_yscale('log') ax.set_xlabel('number of genes with ')
4. Enter Line Item To BigQuery Via Values Parameters Move line items using hard-coded Id values. 1. Provide a comma delimited list of line item ids. 1. Specify the dataset and table where the lineitems will be written. 1. The schema will match <a href='https://developers.google.com/bid-manager/guides/entity-write/format' target='_blank'>Entity Write Format</a>. Modify the values below for your use case; this can be done multiple times, then click play.
FIELDS = { 'auth_read': 'user', # Credentials used for reading data. 'ids': [], 'destination_dataset': '', 'destination_table': '', } print("Parameters Set To: %s" % FIELDS)
colabs/lineitem_read_to_bigquery_via_value.ipynb
google/starthinker
apache-2.0
5. Execute Line Item To BigQuery Via Values This does NOT need to be modified unless you are changing the recipe, click play.
from starthinker.util.project import project from starthinker.script.parse import json_set_fields USER_CREDENTIALS = '/content/user.json' TASKS = [ { 'lineitem': { 'auth': 'user', 'read': { 'line_items': { 'single_cell': True, 'values': {'field': {'name': 'ids','kind': 'integer_list','order': 1,'default': []}} }, 'out': { 'bigquery': { 'dataset': {'field': {'name': 'destination_dataset','kind': 'string','order': 2,'default': ''}}, 'table': {'field': {'name': 'destination_table','kind': 'string','order': 3,'default': ''}} } } } } } ] json_set_fields(TASKS, FIELDS) project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True) project.execute(_force=True)
Expected Output <table> <center> Total Params: 3743280 </center> </table> By using a 128-neuron fully connected layer as its last layer, the model ensures that the output is an encoding vector of size 128. You then use the encodings to compare two face images as follows: <img src="images/distance_kiank.png" style="width:680px;height:250px;"> <caption><center> <u> <font color='purple'> Figure 2: <br> </u> <font color='purple'> By computing a distance between two encodings and thresholding, you can determine if the two pictures represent the same person</center></caption> So, an encoding is a good one if: - The encodings of two images of the same person are quite similar to each other - The encodings of two images of different persons are very different The triplet loss function formalizes this, and tries to "push" the encodings of two images of the same person (Anchor and Positive) closer together, while "pulling" the encodings of two images of different persons (Anchor, Negative) further apart. <img src="images/triplet_comparison.png" style="width:280px;height:150px;"> <br> <caption><center> <u> <font color='purple'> Figure 3: <br> </u> <font color='purple'> In the next part, we will call the pictures from left to right: Anchor (A), Positive (P), Negative (N) </center></caption> 1.2 - The Triplet Loss For an image $x$, we denote its encoding $f(x)$, where $f$ is the function computed by the neural network. <img src="images/f_x.png" style="width:380px;height:150px;"> <!-- We will also add a normalization step at the end of our model so that $\mid \mid f(x) \mid \mid_2 = 1$ (means the vector of encoding should be of norm 1). !--> Training will use triplets of images $(A, P, N)$: A is an "Anchor" image--a picture of a person. P is a "Positive" image--a picture of the same person as the Anchor image. N is a "Negative" image--a picture of a different person than the Anchor image. These triplets are picked from our training dataset.
We will write $(A^{(i)}, P^{(i)}, N^{(i)})$ to denote the $i$-th training example. You'd like to make sure that an image $A^{(i)}$ of an individual is closer to the Positive $P^{(i)}$ than to the Negative image $N^{(i)}$ by at least a margin $\alpha$: $$\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 + \alpha < \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$$ You would thus like to minimize the following "triplet cost": $$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \underbrace{\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2}_\text{(1)} - \underbrace{\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2}_\text{(2)} + \alpha \large ]_+ \tag{3}$$ Here, we are using the notation "$[z]_+$" to denote $\max(z,0)$. Notes: - The term (1) is the squared distance between the anchor "A" and the positive "P" for a given triplet; you want this to be small. - The term (2) is the squared distance between the anchor "A" and the negative "N" for a given triplet; you want this to be relatively large, so it thus makes sense to have a minus sign preceding it. - $\alpha$ is called the margin. It is a hyperparameter that you should pick manually. We will use $\alpha = 0.2$. Most implementations also normalize the encoding vectors to have norm equal to one (i.e., $\mid \mid f(img)\mid \mid_2 = 1$); you won't have to worry about that here. Exercise: Implement the triplet loss as defined by formula (3). Here are the 4 steps: 1. Compute the distance between the encodings of "anchor" and "positive": $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$ 2. Compute the distance between the encodings of "anchor" and "negative": $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$ 3. Compute the formula per training example: $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha$ 4. Compute the full formula by taking the max with zero and summing over the training examples: $$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha \large ]_+ \tag{3}$$ Useful functions: tf.reduce_sum(), tf.square(), tf.subtract(), tf.add(), tf.maximum(). For steps 1 and 2, you will need to sum over the entries of $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$ and $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$, while for step 4 you will need to sum over the training examples.
# GRADED FUNCTION: triplet_loss def triplet_loss(y_true, y_pred, alpha = 0.2): """ Implementation of the triplet loss as defined by formula (3) Arguments: y_true -- true labels, required when you define a loss in Keras, you don't need it in this function. y_pred -- python list containing three objects: anchor -- the encodings for the anchor images, of shape (None, 128) positive -- the encodings for the positive images, of shape (None, 128) negative -- the encodings for the negative images, of shape (None, 128) Returns: loss -- real number, value of the loss """ anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2] ### START CODE HERE ### (≈ 4 lines) # Step 1: Compute the (encoding) distance between the anchor and the positive, you will need to sum over axis=-1 pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)),axis=-1) # Step 2: Compute the (encoding) distance between the anchor and the negative, you will need to sum over axis=-1 neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)),axis=-1) # Step 3: subtract the two previous distances and add alpha. basic_loss = pos_dist - neg_dist + alpha # Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples. loss = tf.maximum(basic_loss, 0) loss = tf.reduce_sum(loss) ### END CODE HERE ### return loss with tf.Session() as test: tf.set_random_seed(1) y_true = (None, None, None) y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed = 1), tf.random_normal([3, 128], mean=1, stddev=1, seed = 1), tf.random_normal([3, 128], mean=3, stddev=4, seed = 1)) loss = triplet_loss(y_true, y_pred) print("loss = " + str(loss.eval()))
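As a sanity check on formula (3), here is a plain-NumPy version of the same computation. It is not part of the graded exercise, just a reference you can compare intermediate values against:

```python
import numpy as np

def triplet_loss_np(A, P, N, alpha=0.2):
    """NumPy reference for formula (3):
    sum_i [ ||A_i - P_i||^2 - ||A_i - N_i||^2 + alpha ]_+
    A, P, N have shape (m, 128): one encoding per triplet."""
    pos_dist = np.sum((A - P) ** 2, axis=-1)  # term (1), one value per triplet
    neg_dist = np.sum((A - N) ** 2, axis=-1)  # term (2), one value per triplet
    return np.sum(np.maximum(pos_dist - neg_dist + alpha, 0.0))

# When the anchor equals the positive and the negative is far away,
# each per-triplet term is 0 - ||A - N||^2 + alpha < 0, so the loss is zero:
A = P = np.zeros((2, 128))
N = np.ones((2, 128))
print(triplet_loss_np(A, P, N))  # 0.0
```

The `[z]_+ = max(z, 0)` clipping is what makes the loss ignore triplets that already satisfy the margin.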
course-deeplearning.ai/course4-cnn/week4-facenet-nstyle/FaceRecognition/Face+Recognition+for+the+Happy+House+-+v3.ipynb
liufuyang/deep_learning_tutorial
mit
https://wwp.shizuoka.ac.jp/philosophy/%E5%93%B2%E5%AD%A6%E5%AF%BE%E8%A9%B1%E5%A1%BE/
reading_plan("『哲学は何を問うてきたか』", 38, (2020, 6, 15))
reading_plan("『死す哲』", 261, (2020, 6, 15))
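`reading_plan` itself is defined elsewhere in this notebook. A minimal sketch of what it might do, assuming it reports the pages per day needed to finish a book by a `(year, month, day)` deadline:

```python
from datetime import date

def reading_plan(title, total_pages, deadline):
    """Hypothetical sketch: pages/day needed to finish `title` by `deadline`.

    `deadline` is a (year, month, day) tuple, matching the calls above.
    """
    days_left = (date(*deadline) - date.today()).days
    if days_left <= 0:
        print(title, ": deadline has already passed")
        return None
    pages_per_day = total_pages / days_left
    print(title, ":", round(pages_per_day, 1), "pages/day for", days_left, "days")
    return pages_per_day
```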
reading_plan.ipynb
hidenori-t/snippet
mit
$$I_{xivia} \approx (twitter + instagram + \Delta facebook) \bmod 2$$
$$I_{xivia} \approx (note + mathtodon) \bmod 2$$
reading_plan("『メノン』", 263, (2020, 10, 17))
date(2020, 6, 20) - date.today()
reading_plan("『饗宴 解説』", 383, (2020, 6, 20))
reading_plan.ipynb
hidenori-t/snippet
mit
### Train / Test Set

This creates the train and test sets.
# Imports for this cell (getBoW is defined in an earlier cell of the notebook)
import random
import numpy as np
from nltk.corpus import brown

# Map each Brown corpus category to an integer label
category2Idx = {}
idx = 0
for cat in brown.categories():
    category2Idx[cat] = idx
    idx += 1

file_ids = sorted(brown.fileids())
print("File IDs:", ",".join(file_ids[0:10]))

# Shuffle deterministically, then split into 300 train / remaining test files
random.seed(4)
random.shuffle(file_ids)
train_file_ids, test_file_ids = file_ids[0:300], file_ids[300:]
print("Train File IDs:", ",".join(train_file_ids[0:10]))
print("Test File IDs:", ",".join(test_file_ids[0:10]))

train_x = []
train_y = []
test_x = []
test_y = []

# Build bag-of-words features and category labels per document
for fileid in train_file_ids:
    category = brown.categories(fileid)[0]
    all_words = brown.words(fileid)
    bow = getBoW(all_words)
    train_x.append(bow)
    train_y.append(category2Idx[category])

for fileid in test_file_ids:
    category = brown.categories(fileid)[0]
    all_words = brown.words(fileid)
    bow = getBoW(all_words)
    test_x.append(bow)
    test_y.append(category2Idx[category])

train_x = np.asarray(train_x, dtype='int32')
train_y = np.asarray(train_y, dtype='int32')
test_x = np.asarray(test_x, dtype='int32')
test_y = np.asarray(test_y, dtype='int32')
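The `getBoW` helper used above is defined in an earlier cell of this notebook. A plausible reconstruction, assuming a precomputed vocabulary mapping the `num_max_words` most frequent words to indices (both the helper shape and the vocabulary handling here are assumptions, not the notebook's actual code):

```python
import numpy as np

def make_bow_fn(word2idx, num_max_words):
    """Build a getBoW-style helper over a fixed word->index vocabulary.

    Hypothetical sketch: words outside the vocabulary are simply ignored.
    """
    def getBoW(words):
        bow = np.zeros(num_max_words, dtype='int32')
        for word in words:
            idx = word2idx.get(word.lower())
            if idx is not None:
                bow[idx] += 1
        return bow
    return getBoW
```

Usage would look like `getBoW = make_bow_fn(word2idx, num_max_words)` before the loops above, with `word2idx` built from word frequencies over the corpus.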
2015-10_Lecture/Lecture4/code/BrownCorpus/GenreClassification.ipynb
nreimers/deeplearning4nlp-tutorial
apache-2.0
### Neural Network

Given the train and test sets, we now define a feed-forward network. We use a 500-dimensional hidden layer with dropout of 0.5. Feel free to try different hidden layer sizes and numbers of hidden layers.
from keras.models import Sequential
from keras.layers.core import Dense, Dropout
from keras.utils import np_utils

batch_size = 30
nb_epoch = 50
nb_classes = len(category2Idx)

# Feed-forward network: BoW input -> 500 tanh units -> dropout -> softmax
model = Sequential()
model.add(Dense(500, input_dim=num_max_words, activation='tanh'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes, activation='softmax'))

# One-hot encode the labels
train_y_cat = np_utils.to_categorical(train_y, nb_classes)
test_y_cat = np_utils.to_categorical(test_y, nb_classes)

model.compile(loss='categorical_crossentropy', optimizer='Adam')

# Baseline performance of the untrained network
score = model.evaluate(test_x, test_y_cat, show_accuracy=True, verbose=0)
print('Test score before training:', score[0])
print('Test accuracy before training:', score[1])

model.fit(train_x, train_y_cat, batch_size=batch_size, nb_epoch=nb_epoch,
          show_accuracy=True, validation_data=(test_x, test_y_cat))

score = model.evaluate(test_x, test_y_cat, show_accuracy=True, verbose=0)
print('Test score after training:', score[0])
print('Test accuracy after training:', score[1])
2015-10_Lecture/Lecture4/code/BrownCorpus/GenreClassification.ipynb
nreimers/deeplearning4nlp-tutorial
apache-2.0