MTF plot of the first lens
|
l1.ipzCaptureWindow('Mtf', percent=15, gamma=0.5)
|
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
|
indranilsinharoy/PyZDDE
|
mit
|
Executing ZPL macro
Lastly, here is an example of how to execute a ZPL macro using PyZDDE.
Since ZEMAX can only execute ZPL macros present in a designated folder (generally the default macro folder in the data folder), the macro folder path needs to be set if it differs from the default.
|
l1.zSetMacroPath(r"C:\PROGRAMSANDEXPERIMENTS\ZEMAX\Macros")
|
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
|
indranilsinharoy/PyZDDE
|
mit
|
The following command executes the ZPL macro 'GLOBAL' provided by ZEMAX. The macro computes the global vertex coordinates or orientations surface by surface, and outputs the result in a text window within the ZEMAX environment. Maximize the ZEMAX application window (if required) to see the output after executing the following command.
|
l1.zExecuteZPLMacro('GLO')
|
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
|
indranilsinharoy/PyZDDE
|
mit
|
Close the DDE links
|
pyz.closeLink() # Also, l1.close(); l2.close()
|
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
|
indranilsinharoy/PyZDDE
|
mit
|
Berry phase calculation for graphene
This tutorial is a complete walk-through of calculating the Berry phase for graphene.
Creating the geometry to investigate
Our system of interest will be the pristine graphene system with the on-site terms shifted by $\pm\delta$.
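For orientation (a standard result for the gapped Dirac model, stated here as context rather than derived from the sisl output), shifting the two sublattice on-site energies by $\pm\delta$ turns the linear Dirac bands into

$$E_\pm(k) = \pm\sqrt{(\hbar v_F |k|)^2 + \delta^2},$$

so a gap of $2\delta$ opens at the Dirac point.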
|
graphene = geom.graphene()
H = Hamiltonian(graphene)
H.construct([(0.1, 1.44), (0, -2.7)])
|
docs/tutorials/tutorial_es_2.ipynb
|
zerothi/sids
|
lgpl-3.0
|
H now contains the pristine graphene tight-binding model. The anti-symmetric Hamiltonian is constructed like this:
|
H_bp = H.copy() # an exact copy
H_bp[0, 0] = 0.1
H_bp[1, 1] = -0.1
|
docs/tutorials/tutorial_es_2.ipynb
|
zerothi/sids
|
lgpl-3.0
|
Comparing electronic structures
Before proceeding to the Berry phase calculation, let's compare the band structures and DOS of the two models. The anti-symmetric Hamiltonian opens a gap around the Dirac cone; a zoom on the Dirac cone shows this.
|
band = BandStructure(H, [[0, 0.5, 0], [1/3, 2/3, 0], [0.5, 0.5, 0]], 400, [r"$M$", r"$K$", r"$M'$"])
band.set_parent(H)
band_array = band.apply.array
bs = band_array.eigh()
band.set_parent(H_bp)
bp_bs = band_array.eigh()
lk, kt, kl = band.lineark(True)
plt.xticks(kt, kl)
plt.xlim(0, lk[-1])
plt.ylim([-.3, .3])
plt.ylabel('$E-E_F$ [eV]')
for bk in bs.T:
    plt.plot(lk, bk)
for bk in bp_bs.T:
    plt.plot(lk, bk)
|
docs/tutorials/tutorial_es_2.ipynb
|
zerothi/sids
|
lgpl-3.0
|
The gap opened is equal to the difference between the two on-site terms, in this case $0.2\mathrm{eV}$. Let's, for completeness' sake, calculate the DOS close to the Dirac point for the two systems. To resolve the gap, the distribution function (in this case a Gaussian) needs a small smearing value, so that the states are not spread too much and the gap smeared out.
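As a standalone illustration in plain numpy (a toy pair of eigenvalues, not the sisl API), this shows how the Gaussian smearing width either resolves or washes out a 0.2 eV gap:

```python
import numpy as np

def gaussian_dos(E, eigs, sigma):
    # sum a normalized Gaussian centered on each eigenvalue
    w = E[:, None] - np.asarray(eigs, dtype=float)[None, :]
    return np.exp(-0.5 * (w / sigma) ** 2).sum(axis=1) / (sigma * np.sqrt(2 * np.pi))

E = np.linspace(-0.5, 0.5, 1001)
eigs = [-0.1, 0.1]                    # toy band edges: gap of 0.2 eV
narrow = gaussian_dos(E, eigs, 0.03)  # small smearing: gap clearly resolved
wide = gaussian_dos(E, eigs, 0.2)     # large smearing: gap washed out
```

With the small smearing the DOS at mid-gap is essentially zero; with the large smearing it is comparable to the DOS at the band edges.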
|
bz = MonkhorstPack(H, [41, 41, 1], displacement=[1/3, 2/3, 0], size=[.125, .125, 1])
bz_average = bz.apply.average # specify the Brillouin zone to perform an average
|
docs/tutorials/tutorial_es_2.ipynb
|
zerothi/sids
|
lgpl-3.0
|
The above MonkhorstPack grid initialization creates a Monkhorst-Pack grid centered on the $K$ point with a reduced Brillouin zone size of $1/8$th of the full Brillouin zone. Essentially this only calculates the DOS in a small $k$-region around the $K$-point. Since we know the electronic structure of our system, we can neglect all contributions from $k$-space away from the $K$-point, because we are only interested in energies close to the Dirac point.
Here the sampled $k$-points are plotted. Note how they are concentrated around $[1/3, -1/3]$, which is the $K$-point.
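The displaced, scaled grid can be mimicked in plain numpy (a sketch under the assumption of a simple shifted, scaled regular grid; the sisl implementation additionally handles weights):

```python
import numpy as np

def mp_grid(n, displacement, size):
    # regular Monkhorst-Pack-style grid per axis, centered at 0 ...
    lin = [(np.arange(ni) + 0.5) / ni - 0.5 for ni in n]
    mesh = np.meshgrid(*lin, indexing='ij')
    k = np.stack([m.ravel() for m in mesh], axis=1)
    # ... then scaled to a fraction of the BZ and shifted to the K-point
    return k * np.asarray(size, dtype=float) + np.asarray(displacement, dtype=float)

k = mp_grid([41, 41, 1], [1/3, 2/3, 0], [0.125, 0.125, 1])
```

All $41 \times 41$ points fall within a $0.125 \times 0.125$ patch of the Brillouin zone centered on the displacement.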
|
plt.scatter(bz.k[:, 0], bz.k[:, 1], 2);
plt.xlabel(r'$k_x$ [$b_x$]');
plt.ylabel(r'$k_y$ [$b_y$]');
|
docs/tutorials/tutorial_es_2.ipynb
|
zerothi/sids
|
lgpl-3.0
|
Before proceeding to the Berry phase calculation we calculate the DOS in an energy region around the Dirac-point to confirm the band-gap.
|
E = np.linspace(-0.5, 0.5, 1000)
dist = get_distribution('gaussian', 0.03)
bz.set_parent(H)
plt.plot(E, bz_average.DOS(E, distribution=dist), label='Graphene');
bz.set_parent(H_bp)
plt.plot(E, bz_average.DOS(E, distribution=dist), label='Graphene anti');
plt.legend()
plt.ylim([0, None])
plt.xlabel('$E - E_F$ [eV]');
plt.ylabel('DOS [1/eV]');
|
docs/tutorials/tutorial_es_2.ipynb
|
zerothi/sids
|
lgpl-3.0
|
Berry phase calculation
To calculate the Berry phase we perform a discretized integration of the Bloch states on a closed loop around the $K$-point with a given radius. After creating the circle of $k$-points we plot it to inspect the integration path.
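The discretized loop integral can be sketched in plain numpy (a standalone toy Dirac model, not the sisl API used below): the Berry phase is $\phi = -\mathrm{Im}\,\ln \prod_i \langle u_i | u_{i+1} \rangle$ over a closed loop of states.

```python
import numpy as np

def berry_phase_loop(states):
    # discretized Berry phase: phi = -Im ln prod_i <u_i|u_{i+1}>,
    # closing the loop by reusing the first state as the last one
    closed = np.vstack([states, states[:1]])
    prod = 1.0 + 0j
    for u1, u2 in zip(closed[:-1], closed[1:]):
        prod *= np.vdot(u1, u2)
    return -np.angle(prod)

# Toy two-band Dirac model H(k) = kx*sigma_x + ky*sigma_y; the lower band
# picks up a Berry phase of +-pi on any loop enclosing the Dirac point.
N, kR = 50, 0.01                       # discretization and loop radius
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
states = []
for t in theta:
    kx, ky = kR * np.cos(t), kR * np.sin(t)
    H = np.array([[0, kx - 1j * ky], [kx + 1j * ky, 0]])
    eigvals, eigvecs = np.linalg.eigh(H)
    states.append(eigvecs[:, 0])       # lowest band
phi = berry_phase_loop(np.array(states))
```

Because each state enters once as a bra and once as a ket, the closed-loop product is gauge invariant, so the arbitrary phases returned by `eigh` cancel.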
|
# Number of discretizations
N = 50
# Circle radius in 1/Ang
kR = 0.01
# Normal vector (in units of reciprocal lattice vectors)
normal = [0, 0, 1]
# Origin (in units of reciprocal lattice vectors)
origin = [1/3, 2/3, 0]
circle = BrillouinZone.param_circle(H, N, kR, normal, origin)
plt.plot(circle.k[:, 0], circle.k[:, 1]);
plt.xlabel(r'$k_x$ [$b_x$]')
plt.ylabel(r'$k_y$ [$b_y$]')
plt.gca().set_aspect('equal');
|
docs/tutorials/tutorial_es_2.ipynb
|
zerothi/sids
|
lgpl-3.0
|
The above plot shows a skewed circle because the $k$-points in the Brillouin zone object are stored in units of the reciprocal lattice vectors, i.e. the circle is perfect in reciprocal space. Note that the Berry phase calculation below ensures the loop is closed by also taking into account the first and last point.
To confirm that the circle is perfect in reciprocal space, we convert the $k$-points to Cartesian coordinates and plot again. Note also that the radius of the circle is $0.01\mathrm{Ang}^{-1}$.
|
k = circle.tocartesian(circle.k)
plt.plot(k[:, 0], k[:, 1]);
plt.xlabel(r'$k_x$ [1/Ang]')
plt.ylabel(r'$k_y$ [1/Ang]')
plt.gca().set_aspect('equal');
|
docs/tutorials/tutorial_es_2.ipynb
|
zerothi/sids
|
lgpl-3.0
|
Now we are ready to calculate the Berry phase. We calculate it for both pristine graphene and the anti-symmetric graphene, using the first band, the second band, and both bands:
|
circle.set_parent(H)
print('Pristine graphene (0): {:.5f} rad'.format(electron.berry_phase(circle, sub=0)))
print('Pristine graphene (1): {:.5f} rad'.format(electron.berry_phase(circle, sub=1)))
print('Pristine graphene (:): {:.5f} rad'.format(electron.berry_phase(circle)))
circle.set_parent(H_bp)
print('Anti-symmetric graphene (0): {:.5f} rad'.format(electron.berry_phase(circle, sub=0)))
print('Anti-symmetric graphene (1): {:.5f} rad'.format(electron.berry_phase(circle, sub=1)))
print('Anti-symmetric graphene (:): {:.5f} rad'.format(electron.berry_phase(circle)))
|
docs/tutorials/tutorial_es_2.ipynb
|
zerothi/sids
|
lgpl-3.0
|
We now plot the Berry phase as a function of integration radius at a roughly constant discretization. In addition we calculate the Berry phase along a loop that is skewed in reciprocal space but perfectly circular in units of the reciprocal lattice vectors. This enables a comparison of the two integration paths.
|
kRs = np.linspace(0.001, 0.2, 70)
dk = 0.0001
bp = np.empty([2, len(kRs)])
for i, kR in enumerate(kRs):
    circle = BrillouinZone.param_circle(H_bp, dk, kR, normal, origin)
    bp[0, i] = electron.berry_phase(circle, sub=0)
    circle_other = BrillouinZone.param_circle(utils.mathematics.fnorm(H_bp.rcell), dk, kR, normal, origin)
    circle.k[:, :] = circle_other.k[:, :]
    bp[1, i] = -electron.berry_phase(circle, sub=0)
plt.plot(kRs, bp[0, :], label=r'1/Ang');
plt.plot(kRs, bp[1, :], label=r'$b_i$');
plt.legend()
plt.xlabel(r'Integration radius [1/Ang]');
plt.ylabel(r'Berry phase [$\phi$]');
|
docs/tutorials/tutorial_es_2.ipynb
|
zerothi/sids
|
lgpl-3.0
|
Setup ids
|
def get_gold_ids(person):
    """Get gold data.

    Parameters
    ----------
    person : {"GP", "MES", "KMA", "common_gold_data"}

    Returns
    -------
    pd.Series
    """
    path = Path("/Users/klay6683/Dropbox/Documents/latex_docs/p4_paper1/gold_data")
    return pd.read_csv(path / f"{person}.txt", header=None, squeeze=True)
ids = get_gold_ids('common_gold_data')
ids = 'br5 bu5 ek1 pbr 1dt 1dr 1fe dch bvc 1c5 1ab 1dk 18s 1b0 1cl 1ct 1at 1al 1aa 10p 185 139 13t 15k 17a'.split()
def create_and_save_randoms():
    myids = np.random.choice(ids, 100)
    np.save('myids.npy', myids)
myids = np.load('myids.npy')
len(myids)
combined = list(ids) + list(myids)
%store combined
db = DBScanner(savedir='gold_with_angle_std', do_large_run=True)
for id_ in ids:
    print(id_)
    db.cluster_image_id(id_)
bucket = []
for img_id in ids:
    p4id = markings.ImageID(img_id, scope='planet4', data=db.data)
    db.pm.obsid = p4id.image_name
    db.pm.id = img_id
    try:
        bucket.extend(db.pm.fandf.angle_std.values)
    except FileNotFoundError:
        continue
len(bucket)
bucket = np.array(bucket)
import seaborn as sns
sns.set_context('paper')
bins = np.arange(0, 22, 1)
pd.Series(bucket).to_csv("angle_std_bucket.csv", index=False)
fig, ax = plt.subplots(constrained_layout=True)
sns.distplot(bucket, kde=False, bins=bins)
ax.set_title("Histogram of angular STD for merged fan clusters")
ax.set_xlabel("Fan angle standard deviation per cluster [deg]")
ax.set_ylabel("Histogram Counts")
db.pm.fanfile
db.pm.fandf.angle_std
np.save('combined_ids_to_check.npy', np.array(combined))
from nbtools import execute_in_parallel
def process_id(id_):
    from planet4.dbscan import DBScanner
    db = DBScanner(savedir='newest_clustering_review', do_large_run=True)
    for kind in ['fan', 'blotch']:
        db.parameter_scan(id_, kind,
                          msf_vals_to_scan=[0.1, 0.13],
                          eps_vals_to_scan=[20, 25, 30],
                          size_to_scan='large')
|
notebooks/clustering development.ipynb
|
michaelaye/planet4
|
isc
|
Here are my comments from the review:
APF0000br5 - seems like the big blotch should have been seen
APF0000bu5 - seems like the middle fan should be there - seems like too strict a cut, not a clustering issue?
APF0000ek1 - yellow final blotch comes out of nowhere
APF0000pbr - bottom right blotch seems like it should have survived
APF00001dt - cyan fan seems bigger than it should be
|
results = execute_in_parallel(process_id, combined)
for id_ in ids:
    print(id_)
    for kind in ['blotch']:
        print(kind)
        dbscanner = DBScanner(savedir='do_cluster_on_large', do_large_run=True)
        # dbscanner.parameter_scan(kind, [0.1, 0.13], [30, 50, 70])
        # for blotch:
        dbscanner.cluster_and_plot(id_, kind, saveplot=True)
plt.close('all')
for id_ in ithaca_sample:
    print(id_)
    for kind in ['blotch']:
        print(kind)
        dbscanner = DBScanner(id_)
        # dbscanner.parameter_scan(kind, [0.1, 0.13], [30, 50, 70])
        # for blotch:
        dbscanner.parameter_scan(kind, [0.1, 0.13], [15, 22, 30])
plt.close('all')
for id_ in ithaca_sample:
    print(id_)
    for kind in ['fan']:
        print(kind)
        dbscanner = DBScanner(id_)
        dbscanner.parameter_scan(kind, [0.1, 0.13], [30, 50, 70])
        # for blotch:
        # dbscanner.parameter_scan(kind, [0.1, 0.13], [15, 22, 30])
plt.close('all')
from shapely.geometry import Point
p1 = Point(266.4, 470.56)
p2 = Point(262.072, 469.679)
p1.distance(p2)
|
notebooks/clustering development.ipynb
|
michaelaye/planet4
|
isc
|
single item checking
|
%matplotlib ipympl
from planet4.catalog_production import ReleaseManager
rm = ReleaseManager('v1.0')
rm.savefolder
db = DBScanner(savedir='examples_for_paper', do_large_run=True)
db.eps_values
db.cluster_and_plot('arp', 'fan')
plotting.plot_image_id_pipeline('gr0', datapath='gold_per_obsid', via_obsid=True)
plt.close('all')
id_ = ids[14]
db.parameter_scan(id_, 'fan', msf_vals_to_scan=(0.1, 0.13),
eps_vals_to_scan=(10, 20, 30), size_to_scan='small')
plotting.plot_image_id_pipeline(id_, datapath=rm.savefolder, save=True, saveroot='./plots')
data = io.DBManager().get_image_id_markings('arp')
data.classification_id.nunique()
data.groupby(['classification_id', 'user_name']).marking.value_counts()
data[data.marking=='blotch'].shape
db.parameter_scan('bsn', 'blotch', [0.10, 0.13], [10, 12, 14], size_to_scan='small', )
v1 = (8.9, 87.3)
v2 = (19.8, 79.8)
v1 = np.array(v1)
v2 = np.array(v2)
from numpy.linalg import norm
norm(v1 - v2)
norm(np.array(v1) - np.array(v2))  # norm's second argument is `ord`, not a vector
db.save_results
db.final_clusters['blotch']
import seaborn as sns
sns.set_context('notebook')
import itertools
palette = itertools.cycle(sns.color_palette('bright'))
fig, ax = plt.subplots()
for b in db.final_clusters['blotch'][1]:
    db.p4id.plot_blotches(data=b, user_color=next(palette), ax=ax)
ax.set_title('second round')
fig.savefig('second_round.png', dpi=150)
db.parameter_scan('1wg', 'fan',
msf_vals_to_scan=[0.1, 0.13],
eps_vals_to_scan=[20, 25, 30],
size_to_scan='large')
db.parameter_scan('15k', 'blotch',
msf_vals_to_scan=[0.1, 0.13],
eps_vals_to_scan=[10, 12, 15],
size_to_scan='small')
fig, ax = plt.subplots()
db.p4id.plot_blotches(ax=ax)
ax.set_title('input data')
fig.savefig('input_data.png', dpi=150)
blotches = db.p4id.filter_data('blotch').dropna(how='all', axis=1)
blotches['x y radius_1 radius_2 angle'.split()].sort_values(by='radius_1')
fans = db.p4id.filter_data('fan')
xyclusters = pd.concat(db.cluster_xy(blotches, 15)).dropna(how='all', axis=1)
blotches.shape
xyclusters.shape
blotches[~blotches.isin(xyclusters).all(1)].shape
db.eps_values['blotch']['angle']= None
db.eps_values['blotch']['angle']= 20
db.eps_values['blotch']['radius']['small']=30
db.eps_values
db.parameter_scan('bp7', 'blotch', [0.1, 0.13], [15,22,30], 'small')
db.cluster_image_id('bz7')
db.cluster_and_plot('bz7', 'blotch')
db.min_samples
db.cluster_image_id('bb6')
db.final_clusters['blotch'][0][4][markings.Blotch.to_average+['user_name']]
db.final_clusters['blotch'][0][2][markings.Blotch.to_average+['user_name']]
%debug
db.parameter_scan('blotch', [0.1, 0.13], [15, 22, 30])
db.parameter_scan('fan', [0.1,0.15], [30, 50,70])
db.pipeline(10, 3, 50)
db.store_folder
sizes = []
for _, b in blotches.iterrows():
    B = markings.Blotch(b, scope='planet4')
    sizes.append(B.area)
%matplotlib nbagg
plt.figure()
plt.hist(sizes, bins=50);
db.parameter_scan('fan', [0.1,0.15], [10, 15, 20])
db.cluster_and_plot('blotch', 20, 3)
ax = plt.gca()
ax.get_title()
db.parameter_scan('fan', [0.07, 0.1, 0.15], [15,20])
db.parameter_scan('blotch', [0.07, 0.1, 0.15], [15,20])
ek1.cluster_and_plot('blotch', 20, 3)
ek1.p4id.plot_blotches(data=ek1.finalclusters[5])
ek1.p4id.plot_blotches(data=ek1.averaged[5])
p4id = markings.ImageID('1fe', scope='planet4')
blotches = p4id.get_blotches()
X = blotches['x y'.split()]
dbscanner = DBScanner(X, min_samples=5, eps=20)
clusters = [blotches.loc[idx] for idx in dbscanner.clustered_indices]
from planet4.clustering import cluster_angles
bucket = []
for cluster in clusters:
    print(cluster.shape)
    bucket.append([cluster.loc[idx] for idx in cluster_angles(cluster, 'blotch', 5)])
for item in bucket:
    for subitem in item:
        print(subitem.shape)
cluster_and_plot('1dr', production=True, dynamic=True,
msf=msf, eps=eps, radii=False, dbscan=True,
figtitle=figtitle)
cm = cluster_and_plot('1dt', production=False, msf=0.1, dynamic=True,
radii=False, dbscan=False)
df = pd.read_csv('fuckdf.csv')
(df - df.mean(axis=0))/df.std(axis=0)
df[df.apply(lambda x: np.abs(x - x.mean()) / x.std() < 1).all(axis=0)]
from scipy.stats import zscore
zscore??
pd.DataFrame(zscore(df,ddof=1))
def highlight_bigger_std(x):
    '''Highlight values more than 2 standard deviations from the mean in yellow.'''
    is_true = (np.abs(x - x.mean()) / x.std() > 2)
    return ['background-color: yellow' if v else '' for v in is_true]
    # return is_true
df.style.apply(highlight_bigger_std)
cm = cluster_and_plot('pbr', production=False, msf=0.1, dynamic=True,
radii=False)
cm = cluster_and_plot('pbr',eps=20, production=False, msf=0.1, dynamic=True,
radii=True)
cm.db
imgid = '1at'
imgid = 'dch'
imgid = 'bvc'
imgid = '1dr'
imgid = '1fe'
imgid = 'br5'
imgid = 'ek1'
p4id = markings.ImageID(imgid, scope='planet4')
data = p4id.get_blotches()
from planet4.dbscan import DBScanner
current_X = data[['x','y']].values
clusterer = DBScanner(current_X, eps=15, min_samples=3)
clusterer.n_clusters_
cluster = data.loc[clusterer.clustered_indices[0]]
p4id.plot_blotches(blotches=cluster,with_center=True)
cluster[blotchcols]
indices = clustering.cluster_angles(cluster, 'blotch', eps_blotchangle=10)
indices
angle_cluster_data = cluster.loc[indices[0], blotchcols +['user_name']]
angle_cluster_data
df = angle_cluster_data[blotchcols]
df[df.apply(lambda x: np.abs(x - x.mean()) / x.std() < 1).all(axis=1)]
clustering.get_average_object(angle_cluster_data[blotchcols], 'blotch')
p4id.plot_blotches(blotches=cluster.loc[indices[0]], with_center=True)
df = cluster.loc[indices[0]][blotchcols]
df['area'] = df.apply(lambda x: np.pi*x.radius_1*x.radius_2, axis=1)
df
col='radius_1'
df.radius_1.std()
df[np.abs(df[col]-df[col].mean())<=(1*df[col].std())]
df[df.apply(lambda x: np.abs(x - x.mean()) / x.std() < 1).all(axis=1)]
subclus
testblotch = markings.Blotch?
testblotchdata = dict(x=340, y=340, angle=127, radius_1=250, radius_2=186)
testblotch = markings.Blotch(
pd.DataFrame(
testblotchdata, index=[0]), scope='planet4')
fig, ax = plt.subplots()
ax.add_artist(testblotch)
ax.set_xlim(0, 800)
ax.set_ylim(0, 600)
testblotch = markings.Blotch(
pd.DataFrame(testblotchdata, index=[0]),
scope='planet4')
p4id.plot_blotches(blotches=[testblotch])
from sklearn.cluster import DBSCAN
class DBScanner(object):
    """Execute clustering and create mean cluster markings.

    The instantiated object will execute:

    * _run_DBSCAN() to perform the clustering itself
    * _post_analysis() to create mean markings from the clustering results

    Parameters
    ----------
    X : numpy.array
        Array holding the data to be clustered, preprocessed in ClusterManager.
    eps : int, optional
        Distance criterion for the DBSCAN algorithm. Samples further away than
        this value do not become members of the currently considered cluster.
        Default: 15
    min_samples : int, optional
        Minimum number of samples required for a cluster to be created.
        Default: 3
    """
    def __init__(self, X, eps=15, min_samples=3, only_core=False):
        self.X = X
        self.eps = eps
        self.min_samples = min_samples
        self.only_core = only_core
        # this call executes the clustering
        self._run_DBSCAN()

    def _run_DBSCAN(self):
        """Perform the DBSCAN clustering."""
        db = DBSCAN(self.eps, self.min_samples).fit(self.X)
        core_samples_mask = np.zeros_like(db.labels_, dtype=bool)
        core_samples_mask[db.core_sample_indices_] = True
        labels = db.labels_
        unique_labels = set(labels)
        colors = plt.cm.Spectral(np.linspace(0, 1, len(unique_labels)))
        self.n_clusters_ = len(unique_labels) - (1 if -1 in labels else 0)
        self.clustered_indices = []  # list of boolean masks, one per cluster
        self.n_rejected = 0
        # loop over unique labels
        for k, col in zip(unique_labels, colors):
            # boolean mask for members of this cluster
            class_member_mask = (labels == k)
            if self.only_core:
                cluster_members = (class_member_mask & core_samples_mask)
            else:
                cluster_members = class_member_mask
            if k == -1:
                col = 'black'
                self.n_rejected = cluster_members.sum()  # count of noise samples
            else:
                xy = self.X[cluster_members]
                if xy.shape[1] > 1:
                    y = xy[:, 1]
                else:
                    y = [0] * xy.shape[0]
                plt.plot(xy[:, 0], y, 'o', markerfacecolor=col,
                         markeredgecolor='black', markersize=14)
                # non-core members of this cluster, plotted smaller
                xy = self.X[class_member_mask & ~core_samples_mask]
                if xy.shape[1] > 1:
                    y = xy[:, 1]
                else:
                    y = [0] * xy.shape[0]
                plt.plot(xy[:, 0], y, 'o', markerfacecolor=col,
                         markeredgecolor='black', markersize=6)
                self.clustered_indices.append(cluster_members)
        plt.gca().invert_yaxis()
        plt.title('Estimated number of clusters: %d' % self.n_clusters_)
        self.db = db
cluster[blotchcols]
xy_angles = clustering.angle_to_xy(cluster.angle, 'blotch')
xy_angles
xy_angles.shape
plt.figure(figsize=(5*1.3,5))
clusterer = DBScanner(xy_angles, eps=20*np.pi/360, min_samples=3)
data.loc[clusterer.clustered_indices[1]]
for cluster_members in clusterer.clustered_indices:
    clusterdata = data.loc[cluster_members, blotchcols + ['user_name']]
    print(len(clusterdata))
    angle_clustered = clustering.cluster_angles(clusterdata, 'blotch')
    for indices in angle_clustered:
        angle_clusterdata = clusterdata.loc[indices, blotchcols + ['user_name']]
        filtered = angle_clusterdata.groupby('user_name').first()
        print(len(filtered))
cm.min_samples
30* cm.min_samples_factor
cm.reduced_data['blotch']
cm.cluster_angles
db = clustering.cluster_angles(cluster, 'blotch')
len(db[0])
len(cluster)
filtered = cluster.groupby('user_name').first()
plt.figure()
filtered.angle.hist()
toprint = cluster2[markings.Fan.to_average + ['user_name', 'marking', 'classification_id']]
toprint.to_clipboard(index=False)
def add_angle_vector(df):
    new = df.copy()
    new['xang'] = np.cos(np.deg2rad(df.angle))
    new['yang'] = np.sin(np.deg2rad(df.angle))
    return new
cluster2 = add_angle_vector(cluster2)
cluster2
|
notebooks/clustering development.ipynb
|
michaelaye/planet4
|
isc
|
testing angle deltas
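As a standalone check (plain numpy; `angle_to_xy` here is a local re-definition, not the planet4 version), mapping angles to unit-circle points turns the 359°/1° wraparound into a small chord distance, which is what makes a Euclidean `eps` meaningful for angles:

```python
import numpy as np

def angle_to_xy(angle_deg):
    rad = np.deg2rad(np.asarray(angle_deg, dtype=float))
    return np.column_stack([np.cos(rad), np.sin(rad)])

# 359 deg and 1 deg differ by 358 numerically, but by only 2 deg on the circle
a = angle_to_xy([359.0])[0]
b = angle_to_xy([1.0])[0]
chord = np.linalg.norm(a - b)          # 2*sin(1 deg), about 0.0349
naive = abs(359.0 - 1.0)               # the misleading raw difference
```

The chord length of one degree on the unit circle, 2·sin(0.5°) ≈ 0.0174531, is the conversion constant used below.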
|
def angle_to_xy(angle):
    x = np.cos(np.deg2rad(angle))
    y = np.sin(np.deg2rad(angle))
    return np.vstack([x, y]).T

def cluster_angles(angles, delta_angle):
    # chord length of 1 degree on the unit circle: 2*sin(0.5 deg)
    dist_per_degree = 0.017453070996747883
    X = angle_to_xy(angles)
    clusterer = DBScanner(X, eps=delta_angle*dist_per_degree, min_samples=3)
    return clusterer
clusterer = cluster_angles(cluster.angle, 10)
clusterer.db.core_sample_indices_
clusterer.db.labels_
cluster.shape
clusterer.clustered_indices
cluster2.iloc[clusterer.clustered_data[0]]
dbscanner.reduced_data[0]
|
notebooks/clustering development.ipynb
|
michaelaye/planet4
|
isc
|
This means all ellipses were clustered together; eps=10 picks 3 out of these 6.
|
clusterdata = data.iloc[dbscanner.reduced_data[0]]
|
notebooks/clustering development.ipynb
|
michaelaye/planet4
|
isc
|
So clusterdata is the same as the input data; I just repeat the exact same code steps here for consistency.
|
clusterdata[blotchcols]
meandata = clusterdata.mean()
meandata
from scipy.stats import circmean
meandata.angle = circmean(clusterdata.angle, high=180)
meandata
n_class_old = data.classification_id.nunique()
n_class_old
# number of classifications that include fan and blotches
f1 = data.marking == 'fan'
f2 = data.marking == 'blotch'
n_class_fb = data[f1 | f2].classification_id.nunique()
n_class_fb
data=data[data.marking=='blotch']
plotting.plot_raw_blotches('bvc')
fans.plot(kind='scatter', x='x',y='y')
plt.gca().invert_yaxis()
fx1 = data.x < 400
fx2 = data.x > 300
fy1 = data.y_R > 300
fy2 = data.y_R < 400
data = data.reset_index()
data[fx1 & fx2 & fy1 & fy2].angle
cm.dbscanner.reduced_data
|
notebooks/clustering development.ipynb
|
michaelaye/planet4
|
isc
|
testing cluster_image_name
|
dbscanner = dbscan.DBScanner()
db = io.DBManager()
data = db.get_obsid_markings('ESP_020568_0950')
image_ids = data.image_id.unique()
%matplotlib nbagg
import seaborn as sns
sns.set_context('notebook')
p4id = markings.ImageID(image_ids[0])
p4id.plot_fans()
p4id.plot_fans(data=p4id.data.query('angle>180'))
p4id.imgid
data[data.marking=='fan'].angle.describe()
dbscanner.cluster_image_name('PSP_002622_0945')
db = io.DBManager()
db.get_image_name_markings('PSP_002622_0945')
|
notebooks/clustering development.ipynb
|
michaelaye/planet4
|
isc
|
Cluster random samples of obsids
|
obsids = 'ESP_020476_0950, ESP_011931_0945, ESP_012643_0945, ESP_020783_0950'.split(', ')
obsids
def process_obsid(obsid):
    from planet4.catalog_production import do_cluster_obsids
    do_cluster_obsids(obsid, savedir=obsid)
    return obsid
from nbtools import execute_in_parallel
execute_in_parallel(process_obsid, obsids)
db = io.DBManager()
for obsid in obsids:
    data = db.get_image_name_markings(obsid)
    image_ids = data.image_id.drop_duplicates().sample(n=50)
    for id_ in image_ids:
        print(id_)
        plotting.plot_image_id_pipeline(id_, datapath=obsid, save=True,
                                        saveroot=f'plots/{obsid}',
                                        via_obsid=True)
        plt.close('all')
plotting.plot_finals('prv', datapath=obsids[0], via_obsid=True)
|
notebooks/clustering development.ipynb
|
michaelaye/planet4
|
isc
|
Counts
Make bar chart to see structure of the dataset.
|
counts = collections.Counter(map(lambda x: x['label'], labels))
print(counts)
for label, cnt in counts.items():
    percents = cnt / len(labels) * 100
    print('{} is {}%'.format(label, round(percents, 2)))
idx = np.arange(len(counts))
rects = plt.bar(idx, list(map(lambda x: x[1], sorted(counts.items()))))
plt.xticks(idx, ('emission', 'absorption', 'unknown', 'double-peak'))
plt.ylabel('number of spectra')
plt.xlabel('class')
plt.title('portion of each class in Ondřejov dataset');
|
notebooks/03-labeled-data.ipynb
|
podondra/bt-spectraldl
|
gpl-3.0
|
Classes Preview
|
f = h5py.File('data/data.hdf5')
spectra = f['spectra']
def plot_class(spectrum, ax, class_name):
    ax.plot(spectrum[0], spectrum[1])
    ax.set_title(class_name)
    ax.set_xlabel('wavelength (Angstrom)')
    ax.set_ylabel('flux')
    ax.axvline(x=6562.8, color='black', label='H-alpha', alpha=0.25)
    ax.legend()
fig, axs = plt.subplots(3, 1)
idents = ['lb160035', 'a201403300026', 'si220021']
classes = ['emission', 'absorption', 'double-peak']
for ident, ax, cl in zip(idents, axs, classes):
    plot_class(spectra[ident], ax, cl)
fig.tight_layout()
|
notebooks/03-labeled-data.ipynb
|
podondra/bt-spectraldl
|
gpl-3.0
|
Let's Add Labels
Add labels to the HDF5 file.
|
for spectrum in labels:
    ident = spectrum['id'].split('/')[-1]
    spectra[ident].attrs['label'] = int(spectrum['label'])
|
notebooks/03-labeled-data.ipynb
|
podondra/bt-spectraldl
|
gpl-3.0
|
Vizualize All Spectra in a Class
|
fig, (ax0, ax1, ax3) = plt.subplots(3, 1)
axs = [ax0, ax1, None, ax3]
for ident, data in spectra.items():
    label = spectra[ident].attrs['label']
    if label == 2:
        continue
    axs[label].plot(data[0], data[1], alpha=0.1, lw=0.5)
fig.tight_layout()
|
notebooks/03-labeled-data.ipynb
|
podondra/bt-spectraldl
|
gpl-3.0
|
Wavelength Ranges
Infimum
This analysis shows that the infimum of the starting wavelengths is 6518.4272.
That is pretty high, but H-alpha is at 6562.8 and H-alpha is the main feature.
Cropping may shorten the range of values and thus speed up training.
I also reviewed some spectra and it is far enough from H-alpha.
Therefore 6519 Angstrom should be chosen as the starting wavelength.
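A minimal sketch of cropping to the chosen common range (synthetic spectrum in the layout used in this notebook, row 0 wavelengths and row 1 fluxes; the end wavelength 6732 is an assumption for illustration only):

```python
import numpy as np

# synthetic spectrum: row 0 wavelengths, row 1 fluxes (toy H-alpha line)
wave = np.linspace(6400.0, 6800.0, 2000)
flux = np.exp(-0.5 * ((wave - 6562.8) / 5.0) ** 2)
spectrum = np.vstack([wave, flux])

lo, hi = 6519.0, 6732.0   # start from the analysis above; end is an assumption
mask = (spectrum[0] >= lo) & (spectrum[0] <= hi)
cropped = spectrum[:, mask]
```

Applying the same mask to every spectrum yields arrays covering a common wavelength range, ready for resampling onto a shared grid.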
|
# find the spectrum which starts at the highest wavelength
# x is a tuple; x[1] are the values, [0, 0] is the first wavelength
wave_starts = dict(map(lambda x: (x[0], x[1][0, 0]), spectra.items()))
starts_n, starts_bins, _ = plt.hist(list(wave_starts.values()))
plt.title('wavelength starts')
starts_n, starts_bins
infimum = list(reversed(sorted(wave_starts.items(), key=lambda x: x[1])))[0][1]
print('infimum:', math.ceil(infimum), 'Angstrom')
list(reversed(sorted(wave_starts.items(), key=lambda x: x[1])))[:10]
def plot_spectrum(ident):
    spectrum = spectra[ident]
    plt.plot(spectrum[0], spectrum[1], label=ident)
plot_spectrum('la220044')
plot_spectrum('a201504060008')
plot_spectrum('a201504060037')
plot_spectrum('td210007')
plot_spectrum('qd260023')
plt.legend();
|
notebooks/03-labeled-data.ipynb
|
podondra/bt-spectraldl
|
gpl-3.0
|
Supremum
At the ends there is no problem because most spectra are in the first bar.
|
# find the spectrum which ends at the lowest wavelength
# x is a tuple; x[1] are the values, [0, -1] is the last wavelength
wave_ends = dict(map(lambda x: (x[0], x[1][0, -1]), spectra.items()))
ends_n, ends_bins, _ = plt.hist(list(wave_ends.values()))
plt.title('wavelength ends')
ends_n, ends_bins
supremum = list(sorted(wave_ends.items(), key=lambda x: x[1]))[0][1]
print('supremum:', math.floor(supremum), 'Angstrom')
list(sorted(wave_ends.items(), key=lambda x: x[1]))[:10]
plot_spectrum('pb060015')
plot_spectrum('lb160035')
plt.legend();
f.close()
|
notebooks/03-labeled-data.ipynb
|
podondra/bt-spectraldl
|
gpl-3.0
|
Vertex AI: Vertex AI Migration: AutoML Text Entity Extraction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ7%20Vertex%20SDK%20AutoML%20Text%20Entity%20Extraction.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ7%20Vertex%20SDK%20AutoML%20Text%20Entity%20Extraction.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Dataset
The dataset used for this tutorial is the NCBI Disease Research Abstracts dataset from the National Center for Biotechnology Information. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python.
|
import os

# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
|
notebooks/official/migration/UJ7 Vertex SDK AutoML Text Entity Extraction.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the JSONL index file in Cloud Storage.
|
IMPORT_FILE = "gs://cloud-samples-data/language/ucaip_ten_dataset.jsonl"
|
notebooks/official/migration/UJ7 Vertex SDK AutoML Text Entity Extraction.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Create a dataset
Create the Dataset
Next, create the Dataset resource using the create method for the TextDataset class, which takes the following parameters:
display_name: The human readable name for the Dataset resource.
gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource.
import_schema_uri: The data labeling schema for the data items.
This operation may take several minutes.
|
dataset = aip.TextDataset.create(
display_name="NCBI Biomedical" + "_" + TIMESTAMP,
gcs_source=[IMPORT_FILE],
import_schema_uri=aip.schema.dataset.ioformat.text.extraction,
)
print(dataset.resource_name)
|
notebooks/official/migration/UJ7 Vertex SDK AutoML Text Entity Extraction.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Example Output:
INFO:google.cloud.aiplatform.datasets.dataset:Creating TextDataset
INFO:google.cloud.aiplatform.datasets.dataset:Create TextDataset backing LRO: projects/759209241365/locations/us-central1/datasets/3704325042721521664/operations/3193181053544038400
INFO:google.cloud.aiplatform.datasets.dataset:TextDataset created. Resource name: projects/759209241365/locations/us-central1/datasets/3704325042721521664
INFO:google.cloud.aiplatform.datasets.dataset:To use this TextDataset in another session:
INFO:google.cloud.aiplatform.datasets.dataset:ds = aiplatform.TextDataset('projects/759209241365/locations/us-central1/datasets/3704325042721521664')
INFO:google.cloud.aiplatform.datasets.dataset:Importing TextDataset data: projects/759209241365/locations/us-central1/datasets/3704325042721521664
INFO:google.cloud.aiplatform.datasets.dataset:Import TextDataset data backing LRO: projects/759209241365/locations/us-central1/datasets/3704325042721521664/operations/5152246891450204160
INFO:google.cloud.aiplatform.datasets.dataset:TextDataset data imported. Resource name: projects/759209241365/locations/us-central1/datasets/3704325042721521664
projects/759209241365/locations/us-central1/datasets/3704325042721521664
Train a model
training.automl-api
Create and run training pipeline
To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.
Create training pipeline
An AutoML training pipeline is created with the AutoMLTextTrainingJob class, with the following parameters:
display_name: The human readable name for the TrainingJob resource.
prediction_type: The type task to train the model for.
classification: A text classification model.
sentiment: A text sentiment analysis model.
extraction: A text entity extraction model.
multi_label: If a classification task, whether single (False) or multi-labeled (True).
sentiment_max: If a sentiment analysis task, the maximum sentiment value.
The instantiated object is the DAG (directed acyclic graph) for the training pipeline.
|
dag = aip.AutoMLTextTrainingJob(
    display_name="biomedical_" + TIMESTAMP, prediction_type="extraction"
)

print(dag)
|
notebooks/official/migration/UJ7 Vertex SDK AutoML Text Entity Extraction.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Example output:
<google.cloud.aiplatform.training_jobs.AutoMLTextTrainingJob object at 0x7fc3b6c90f10>
Run the training pipeline
Next, you run the DAG to start the training job by invoking the method run, with the following parameters:
dataset: The Dataset resource to train the model.
model_display_name: The human readable name for the trained model.
training_fraction_split: The percentage of the dataset to use for training.
test_fraction_split: The percentage of the dataset to use for test (holdout data).
validation_fraction_split: The percentage of the dataset to use for validation.
The run method when completed returns the Model resource.
The execution of the training pipeline will take up to 20 minutes.
|
model = dag.run(
    dataset=dataset,
    model_display_name="biomedical_" + TIMESTAMP,
    training_fraction_split=0.8,
    validation_fraction_split=0.1,
    test_fraction_split=0.1,
)
|
notebooks/official/migration/UJ7 Vertex SDK AutoML Text Entity Extraction.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Example output:
INFO:google.cloud.aiplatform.training_jobs:View Training:
https://console.cloud.google.com/ai/platform/locations/us-central1/training/8859754745456230400?project=759209241365
INFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/8859754745456230400 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/8859754745456230400 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/8859754745456230400 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/8859754745456230400 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/8859754745456230400 current state:
PipelineState.PIPELINE_STATE_RUNNING
...
INFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob run completed. Resource name: projects/759209241365/locations/us-central1/trainingPipelines/8859754745456230400
INFO:google.cloud.aiplatform.training_jobs:Model available at projects/759209241365/locations/us-central1/models/6389525951797002240
Evaluate the model
projects.locations.models.evaluations.list
Review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you trained the model, or you can list all of the models in your project.
|
# Get model resource ID
models = aip.Model.list(filter="display_name=biomedical_" + TIMESTAMP)

# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)

model_evaluations = model_service_client.list_model_evaluations(
    parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
|
notebooks/official/migration/UJ7 Vertex SDK AutoML Text Entity Extraction.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Example output:
name: "projects/759209241365/locations/us-central1/models/623915674158235648/evaluations/4280507618583117824"
metrics_schema_uri: "gs://google-cloud-aiplatform/schema/modelevaluation/classification_metrics_1.0.0.yaml"
metrics {
  struct_value {
    fields {
      key: "auPrc"
      value {
        number_value: 0.9891107
      }
    }
    fields {
      key: "confidenceMetrics"
      value {
        list_value {
          values {
            struct_value {
              fields {
                key: "precision"
                value {
                  number_value: 0.2
                }
              }
              fields {
                key: "recall"
                value {
                  number_value: 1.0
                }
              }
            }
          }
Make batch predictions
predictions.batch-prediction
Make test items
You will use synthetic data as test data items. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
|
test_item_1 = 'Molecular basis of hexosaminidase A deficiency and pseudodeficiency in the Berks County Pennsylvania Dutch.\tFollowing the birth of two infants with Tay-Sachs disease ( TSD ) , a non-Jewish , Pennsylvania Dutch kindred was screened for TSD carriers using the biochemical assay . A high frequency of individuals who appeared to be TSD heterozygotes was detected ( Kelly et al . , 1975 ) . Clinical and biochemical evidence suggested that the increased carrier frequency was due to at least two altered alleles for the hexosaminidase A alpha-subunit . We now report two mutant alleles in this Pennsylvania Dutch kindred , and one polymorphism . One allele , reported originally in a French TSD patient ( Akli et al . , 1991 ) , is a GT-- > AT transition at the donor splice-site of intron 9 . The second , a C-- > T transition at nucleotide 739 ( Arg247Trp ) , has been shown by Triggs-Raine et al . ( 1992 ) to be a clinically benign " pseudodeficient " allele associated with reduced enzyme activity against artificial substrate . Finally , a polymorphism [ G-- > A ( 759 ) ] , which leaves valine at codon 253 unchanged , is described'
test_item_2 = "Analysis of alkaptonuria (AKU) mutations and polymorphisms reveals that the CCC sequence motif is a mutational hot spot in the homogentisate 1,2 dioxygenase gene (HGO). We recently showed that alkaptonuria ( AKU ) is caused by loss-of-function mutations in the homogentisate 1 , 2 dioxygenase gene ( HGO ) . Herein we describe haplotype and mutational analyses of HGO in seven new AKU pedigrees . These analyses identified two novel single-nucleotide polymorphisms ( INV4 + 31A-- > G and INV11 + 18A-- > G ) and six novel AKU mutations ( INV1-1G-- > A , W60G , Y62C , A122D , P230T , and D291E ) , which further illustrates the remarkable allelic heterogeneity found in AKU . Reexamination of all 29 mutations and polymorphisms thus far described in HGO shows that these nucleotide changes are not randomly distributed ; the CCC sequence motif and its inverted complement , GGG , are preferentially mutated . These analyses also demonstrated that the nucleotide substitutions in HGO do not involve CpG dinucleotides , which illustrates important differences between HGO and other genes for the occurrence of mutation at specific short-sequence motifs . Because the CCC sequence motifs comprise a significant proportion ( 34 . 5 % ) of all mutated bases that have been observed in HGO , we conclude that the CCC triplet is a mutational hot spot in HGO ."
|
notebooks/official/migration/UJ7 Vertex SDK AutoML Text Entity Extraction.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
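The cell that actually launches the batch prediction job is not shown in this excerpt. A plausible reconstruction, hedged: each test item is written to its own text file in Cloud Storage, an input JSONL listing those files is built, and the job is started with the SDK's batch_predict method. The bucket name and the short stand-in texts are placeholders, and the commented call is a sketch rather than the notebook's exact code:

```python
import json

# Placeholder staging bucket -- replace with your own.
BUCKET_NAME = "gs://your-bucket-name"

# Stand-ins for test_item_1 / test_item_2 defined above.
test_items = ["first biomedical abstract ...", "second biomedical abstract ..."]

# Input index: one {"content": ..., "mimeType": ...} record per text file,
# matching the instance format visible in the batch results.
instances = [
    {"content": f"{BUCKET_NAME}/test{i + 1}.txt", "mimeType": "text/plain"}
    for i in range(len(test_items))
]
jsonl_payload = "\n".join(json.dumps(inst) for inst in instances)
print(jsonl_payload)

# Sketch of the request itself (needs GCP credentials, so commented out):
# batch_predict_job = model.batch_predict(
#     job_display_name="biomedical_" + TIMESTAMP,
#     gcs_source=BUCKET_NAME + "/test.jsonl",
#     gcs_destination_prefix=BUCKET_NAME,
#     sync=False,
# )
```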
Example Output:
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328
INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:
INFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328')
INFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:
https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/181835033978339328?project=759209241365
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_SUCCEEDED
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob run completed. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328
Get the predictions
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of the Cloud Storage files generated with the results. Each file contains one or more prediction requests in a JSON format:
content: The prediction request.
prediction: The prediction response.
ids: The internal assigned unique identifiers for each prediction request.
displayNames: The class names for each class label.
confidences: The predicted confidence, between 0 and 1, per class label.
textSegmentStartOffsets: The character offset in the text to the start of the entity.
textSegmentEndOffsets: The character offset in the text to the end of the entity.
|
import json

import tensorflow as tf

bp_iter_outputs = batch_predict_job.iter_outputs()

prediction_results = list()
for blob in bp_iter_outputs:
    if blob.name.split("/")[-1].startswith("prediction"):
        prediction_results.append(blob.name)

tags = list()
for prediction_result in prediction_results:
    gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
    with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
        for line in gfile.readlines():
            line = json.loads(line)
            print(line)
            break
|
notebooks/official/migration/UJ7 Vertex SDK AutoML Text Entity Extraction.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Example Output:
{'instance': {'content': 'gs://andy-1234-221921aip-20210811180202/test2.txt', 'mimeType': 'text/plain'}, 'prediction': {'ids': ['2208238262504390656', '2208238262504390656', '4827081445820334080', '4827081445820334080', '2208238262504390656', '4827081445820334080', '4827081445820334080'], 'displayNames': ['SpecificDisease', 'SpecificDisease', 'Modifier', 'Modifier', 'SpecificDisease', 'Modifier', 'Modifier'], 'textSegmentStartOffsets': ['208', '193', '381', '522', '670', '26', '12'], 'textSegmentEndOffsets': ['210', '204', '383', '524', '672', '28', '23'], 'confidences': [0.99951637, 0.9994987, 0.9994574, 0.9994488, 0.99924797, 0.9969406, 0.9692179]}}
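The offsets in each prediction index back into the original text. A small helper, exercised here on a synthetic prediction dict shaped like the output above — note that the batch response returns offsets as strings, and whether the end offset is inclusive or exclusive should be verified against the response documentation (this sketch treats it as exclusive):

```python
def extract_entities(text, prediction):
    """Pair each predicted label with the substring its offsets point at.

    Assumes offsets arrive as numeric strings and that the end offset is
    exclusive; verify both against the actual response format.
    """
    entities = []
    for label, start, end in zip(
        prediction["displayNames"],
        prediction["textSegmentStartOffsets"],
        prediction["textSegmentEndOffsets"],
    ):
        entities.append((label, text[int(float(start)):int(float(end))]))
    return entities

# Synthetic example shaped like the batch output above.
text = "Patients with Tay-Sachs disease were screened."
prediction = {
    "displayNames": ["SpecificDisease"],
    "textSegmentStartOffsets": ["14"],
    "textSegmentEndOffsets": ["31"],
}
print(extract_entities(text, prediction))
```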
Make online predictions
predictions.deploy-model-api
Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method.
|
endpoint = model.deploy()
|
notebooks/official/migration/UJ7 Vertex SDK AutoML Text Entity Extraction.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Example output:
INFO:google.cloud.aiplatform.models:Creating Endpoint
INFO:google.cloud.aiplatform.models:Create Endpoint backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/4087251132693348352
INFO:google.cloud.aiplatform.models:Endpoint created. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472
INFO:google.cloud.aiplatform.models:To use this Endpoint in another session:
INFO:google.cloud.aiplatform.models:endpoint = aiplatform.Endpoint('projects/759209241365/locations/us-central1/endpoints/4867177336350441472')
INFO:google.cloud.aiplatform.models:Deploying model to Endpoint : projects/759209241365/locations/us-central1/endpoints/4867177336350441472
INFO:google.cloud.aiplatform.models:Deploy Endpoint model backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/1691336130932244480
INFO:google.cloud.aiplatform.models:Endpoint model deployed. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472
predictions.online-prediction-automl
Make test item
You will use synthetic data as a test data item. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
|
test_item = 'Molecular basis of hexosaminidase A deficiency and pseudodeficiency in the Berks County Pennsylvania Dutch.\tFollowing the birth of two infants with Tay-Sachs disease ( TSD ) , a non-Jewish , Pennsylvania Dutch kindred was screened for TSD carriers using the biochemical assay . A high frequency of individuals who appeared to be TSD heterozygotes was detected ( Kelly et al . , 1975 ) . Clinical and biochemical evidence suggested that the increased carrier frequency was due to at least two altered alleles for the hexosaminidase A alpha-subunit . We now report two mutant alleles in this Pennsylvania Dutch kindred , and one polymorphism . One allele , reported originally in a French TSD patient ( Akli et al . , 1991 ) , is a GT-- > AT transition at the donor splice-site of intron 9 . The second , a C-- > T transition at nucleotide 739 ( Arg247Trp ) , has been shown by Triggs-Raine et al . ( 1992 ) to be a clinically benign " pseudodeficient " allele associated with reduced enzyme activity against artificial substrate . Finally , a polymorphism [ G-- > A ( 759 ) ] , which leaves valine at codon 253 unchanged , is described'
|
notebooks/official/migration/UJ7 Vertex SDK AutoML Text Entity Extraction.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Example output:
Prediction(predictions=[{'displayNames': ['SpecificDisease', 'SpecificDisease', 'SpecificDisease', 'Modifier', 'Modifier', 'Modifier'], 'confidences': [0.9995822906494141, 0.999564528465271, 0.9995641708374023, 0.9993661046028137, 0.9993420839309692, 0.9993830323219299], 'textSegmentStartOffsets': [19.0, 148.0, 168.0, 235.0, 329.0, 687.0], 'textSegmentEndOffsets': [46.0, 165.0, 171.0, 238.0, 332.0, 690.0], 'ids': ['1746900775675625472', '1746900775675625472', '1746900775675625472', '8664429803316707328', '8664429803316707328', '8664429803316707328']}], deployed_model_id='7103029833386426368', explanations=None)
Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
|
endpoint.undeploy_all()
|
notebooks/official/migration/UJ7 Vertex SDK AutoML Text Entity Extraction.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
|
## Your code here
import random

threshold = 1e-5

# Count occurrences of each word id
bins = np.bincount(int_words)
print(bins[:30])

# Frequency of each word id in the whole dataset
frequencies = np.zeros(len(bins), dtype=float)
for index, singlebin in enumerate(bins):
    frequencies[index] = singlebin / len(int_words)
print(frequencies[:30])

# Discard probability P(w_i) = 1 - sqrt(threshold / frequency)
probs = np.zeros(len(bins), dtype=float)
for index, singlefrequency in enumerate(frequencies):
    probs[index] = 1 - np.sqrt(threshold / singlefrequency)
print(probs[:30])

# Keep each occurrence with probability 1 - P(w_i)
train_words = []
for int_word in int_words:
    if probs[int_word] < random.random():
        train_words.append(int_word)
print(train_words[:30])
print(len(train_words))
#Solution (faster and cleaner)
from collections import Counter
import random
threshold_2 = 1e-5
word_counts = Counter(int_words)
total_count = len(int_words)
freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold_2/freqs[word]) for word in word_counts}
train_words_2 = [word for word in int_words if p_drop[word] < random.random()]
|
embeddings/Skip-Gram_word2vec.ipynb
|
SlipknotTN/udacity-deeplearning-nanodegree
|
mit
|
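As a quick numeric check of the discard formula: with $t = 10^{-5}$, a word making up 1% of the corpus is dropped about 97% of the time, while one at 0.002% is dropped less than a third of the time.

```python
import numpy as np

t = 1e-5  # threshold parameter
for f in (1e-2, 2e-5):  # word frequencies in the corpus
    p_drop = 1 - np.sqrt(t / f)
    print(f"f = {f:g}: P(discard) = {p_drop:.4f}")
```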
Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
|
def get_target(words, idx, window_size=5):
    ''' Get a list of words in a window around an index. '''
    # My wrong implementation
    #C = random.uniform(1,window_size,1)
    #return words[idx-C:idx-1] + words[idx+1:idx+C]
    #Solution
    R = np.random.randint(1, window_size+1)
    start = idx - R if (idx - R) > 0 else 0
    stop = idx + R
    target_words = set(words[start:idx] + words[idx+1:stop+1])
    return list(target_words)
|
embeddings/Skip-Gram_word2vec.ipynb
|
SlipknotTN/udacity-deeplearning-nanodegree
|
mit
|
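A quick sanity check of the solution above (the function is repeated here so the snippet stands alone): every target should lie within window_size of the center index and never include the center word itself.

```python
import numpy as np

def get_target(words, idx, window_size=5):
    ''' Get a list of words in a window around an index. '''
    R = np.random.randint(1, window_size + 1)
    start = idx - R if (idx - R) > 0 else 0
    stop = idx + R
    return list(set(words[start:idx] + words[idx + 1:stop + 1]))

np.random.seed(1)
targets = get_target(list(range(10)), idx=5, window_size=5)
print(sorted(targets))  # neighbors of position 5, excluding 5 itself
```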
Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.
|
n_vocab = len(int_to_vocab)
n_embedding = 200
with train_graph.as_default():
    embedding = tf.Variable(tf.random_uniform(shape=(n_vocab, n_embedding), minval=-1.0, maxval=1.0))
    embed = tf.nn.embedding_lookup(embedding, inputs)
|
embeddings/Skip-Gram_word2vec.ipynb
|
SlipknotTN/udacity-deeplearning-nanodegree
|
mit
|
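Under the hood, tf.nn.embedding_lookup is just row selection from the embedding matrix. The same operation in plain NumPy, on toy sizes:

```python
import numpy as np

n_vocab_toy, n_embedding_toy = 5, 3
rng = np.random.RandomState(0)
embedding = rng.uniform(-1, 1, size=(n_vocab_toy, n_embedding_toy))

inputs = np.array([2, 0, 2])  # a batch of word ids
embed = embedding[inputs]     # row selection, i.e. what embedding_lookup returns
print(embed.shape)            # one embedding row per input id
```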
Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
|
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
    # sampled_softmax_loss expects weights of shape (n_vocab, n_embedding), hence the transpose
    softmax_w = tf.Variable(tf.truncated_normal(shape=(n_embedding, n_vocab), mean=0.0, stddev=0.01))
    softmax_b = tf.Variable(tf.zeros(n_vocab))

    # Calculate the loss using negative sampling
    loss = tf.nn.sampled_softmax_loss(weights=tf.transpose(softmax_w), biases=softmax_b,
                                      labels=labels, inputs=embed,
                                      num_sampled=n_sampled, num_classes=n_vocab)

    cost = tf.reduce_mean(loss)
    optimizer = tf.train.AdamOptimizer().minimize(cost)
|
embeddings/Skip-Gram_word2vec.ipynb
|
SlipknotTN/udacity-deeplearning-nanodegree
|
mit
|
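To build intuition for what the sampled loss computes, here is one step in plain NumPy: the softmax is evaluated over the true class plus a handful of sampled negatives instead of the full vocabulary. This is a toy illustration only — TensorFlow's tf.nn.sampled_softmax_loss additionally corrects the logits for the sampling distribution:

```python
import numpy as np

rng = np.random.RandomState(0)
n_vocab_toy, n_embed_toy, n_neg = 1000, 16, 5

softmax_w = rng.randn(n_embed_toy, n_vocab_toy) * 0.01
softmax_b = np.zeros(n_vocab_toy)
embed_vec = rng.randn(n_embed_toy)  # embedding of one input word

true_class = 42
negatives = rng.choice(n_vocab_toy, size=n_neg, replace=False)
negatives = negatives[negatives != true_class]  # drop accidental hits
classes = np.concatenate(([true_class], negatives))

# Logits for the sampled subset only: 1 positive + a few negatives
logits = embed_vec @ softmax_w[:, classes] + softmax_b[classes]
probs = np.exp(logits - logits.max())
probs /= probs.sum()
loss = -np.log(probs[0])  # cross-entropy, true class sits in slot 0
print(round(float(loss), 4))
```

Only the weight columns in `classes` receive gradient updates, which is exactly the efficiency gain the text describes.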
Classic data processing
In this example we will see how to use the numpy library to read values from a csv file and start processing them.
We will use the csv module, which reads a csv file and extracts its values line by line.
We will work on the Titanic training data file. The goal is to predict the chances of survival aboard the ship. Retrieve the train.csv file (see the first lesson or download it from https://www.kaggle.com/c/titanic-gettingStarted/data ) and save it in the directory where the notebook runs. You can use the pwd command to find that directory, or move to where you saved your file with the cd command.
|
import csv
import numpy as np

fichier_csv = csv.reader(open('train.csv', 'r'))
entetes = next(fichier_csv)  # read the first line, which contains the headers
donnees = list()  # list used to collect the data
for ligne in fichier_csv:  # for each line read from the csv file
    donnees.append(ligne)  # append the values read to the donnees list
donnees = np.array(donnees)  # turn donnees into a numpy array
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
Let's look at how the data is stored in memory:
|
print (donnees)
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
Now let's look at the age column, displaying only the first 15 values:
|
print (donnees[1:15, 5])
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
We can see that the ages are stored as character strings. Let's convert them to floats:
|
donnees[1:15, 5].astype(float)
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
NumPy cannot convert the empty string '' (in the 6th position of our list) to a float. To handle these values we would have to write a small algorithm by hand. Let's now see how pandas makes this kind of processing much easier.
Processing and manipulating data with pandas
|
import pandas as pd
import numpy as np
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
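Before moving fully to pandas, the "small algorithm" alluded to above might look like this in pure NumPy — sketched on a synthetic age column standing in for the real data, since train.csv is not bundled with this notebook:

```python
import numpy as np

# Synthetic age column: strings, with one missing value as in the real file
ages_brutes = np.array(['22', '38', '26', '', '35'])

# Replace empty strings with 'nan', then convert everything to float
ages = np.where(ages_brutes == '', 'nan', ages_brutes).astype(float)
print(ages)              # nan marks the missing value
print(np.nanmean(ages))  # mean over the known ages only
```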
To read the csv file we use the read_csv function
|
df = pd.read_csv('train.csv')
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
To check that it worked, let's display the first values. We see the passenger id, whether they survived, their class, name, sex, age, the number of siblings/spouses aboard, the number of parents or children, the ticket number, the fare, the cabin number and the port of embarkation.
|
df.head(6)
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
Let's compare with the type of donnees obtained earlier: it is a numpy array, whereas df has a type specific to pandas.
|
print(type(donnees))
print(type(df))
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
With numpy, all the imported values were character strings. Let's check what pandas did
|
df.dtypes
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
We can see that pandas automatically detected the types of the data in our csv file: integers, floats, or objects (character strings). Two important commands to know are df.info() and df.describe()
|
df.info()
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
The age is only given for 714 of the 891 passengers; the same goes for the cabin number and the port of embarkation. We can also use describe() to compute several useful statistical indicators.
|
df.describe()
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
We can see that pandas computed the statistical indicators using only the available values. For example, it computed the mean age using only the 714 known values, and it skipped the non-numeric columns (name, sex, ticket, cabin, port of embarkation).
Going a bit further with pandas
Indexing and filtering
To display only the first 15 values of the age column:
|
df['Age'][0:15]
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
We can also use the syntax
|
df.Age[0:15]
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
We can compute statistics directly on the columns
|
df.Age.mean()
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
This is the same value as the one shown by describe. This syntax makes it easy to use the mean in calculations or algorithms.
To filter the data, we pass the list of desired columns:
|
colonnes_interessantes = ['Sex', 'Pclass', 'Age']
df[ colonnes_interessantes ]
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
In an analysis we often want to filter the data on certain criteria. For example, the maximum age is 80; let's examine the information about the older passengers:
|
df[df['Age'] > 60]
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
Since that is too much information, we can filter it:
|
df[df['Age'] > 60][['Pclass', 'Sex', 'Age', 'Survived']]
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
We can see that the elderly passengers are mostly men, and that the survivors among them are mostly women.
We will now handle the missing age values. Let's filter the data to display only the missing ones
|
df[df.Age.isnull()][['Sex', 'Pclass', 'Age']]
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
Filters can be combined with '&'. Let's display the number of men and women in each class
|
for i in range(1, 4):
    print("In class", i, "there are", len(df[ (df['Sex'] == 'male') & (df['Pclass'] == i) ]), "men")
    print("In class", i, "there are", len(df[ (df['Sex'] == 'female') & (df['Pclass'] == i) ]), "women")
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
Now let's plot the histogram of the age distribution.
|
df.Age.hist(bins=20, range=(0,80))
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
Creating and modifying columns
To exploit the information about the passengers' sex, we add a new column, called Gender, which is 1 for men and 0 for women.
|
df['Gender'] = 4  # add a new column with every value set to 4
df.head()
df['Gender'] = df['Sex'].map( {'female': 0, 'male': 1} )  # Gender is 0 for women and 1 for men
df.head()
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
New columns can also aggregate information from several existing columns. For example, let's create a column storing the number of relatives each passenger had aboard the Titanic.
|
df['FamilySize'] = df.SibSp + df.Parch
df.head()
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
We will fill in the missing age values with the median age for each class and sex.
|
ages_medians = np.zeros((2, 3))
ages_medians
for i in range(0, 2):
    for j in range(0, 3):
        ages_medians[i, j] = df[ (df['Gender'] == i) & (df['Pclass'] == j+1) ]['Age'].median()
ages_medians
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
Let's create a new AgeFill column that uses these median ages
|
for i in range(0, 2):
    for j in range(0, 3):
        df.loc[ (df.Age.isnull()) & (df.Gender == i) & (df.Pclass == j+1), 'AgeFill'] = ages_medians[i, j]

# display the first 10 rows whose age was filled in
df[df.Age.isnull()][['Gender', 'Pclass', 'Age', 'AgeFill']].head(10)
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
To save your work, you can use the pickle module, which serializes your data to a file:
|
import pickle

with open('masauvegarde.pck', 'wb') as f:
    pickle.dump(df, f)
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
To recover your work, use the inverse operation, again with pickle
|
with open('masauvegarde.pck', 'rb') as f:
    dff = pickle.load(f)
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
Back to numpy for machine learning
To do machine learning and predict the survival of the Titanic passengers, we can use scikit-learn. It takes its input data as numpy arrays, and the conversion is straightforward:
|
ex = df[ ['Gender', 'Pclass'] ] # keep only a few features
X = ex.to_numpy() # convert to a numpy array (as_matrix() was removed from recent pandas)
print(ex.head(5))
print(X[:5,:])
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
We want to predict survival, so we extract the corresponding target:
|
y = df['Survived'].to_numpy()
print (y[:5])
from sklearn import svm
clf = svm.SVC()
clf.fit(X,y)
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
The classifier has now been trained: we fitted an SVM on our data $X$ so that it can predict survival $y$. To check that our SVM has indeed learned to predict passenger survival, we can use the predict() method and visually compare, for the first ten passengers, the values predicted by the SVM against their actual survival.
|
print(clf.predict(X[:10,:]))
print (y[:10])
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
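Instead of eyeballing ten values, the agreement can be quantified as the fraction of matching labels, which is the same number clf.score(X, y) reports for a classifier. A tiny sketch with made-up labels:

```python
import numpy as np

# Made-up predictions vs. ground truth, standing in for clf.predict(X) and y
pred = np.array([0, 1, 1, 0, 1])
y    = np.array([0, 1, 0, 0, 1])

# Fraction of matching labels -- the same number clf.score(X, y) reports
train_accuracy = np.mean(pred == y)
print(train_accuracy)  # 0.8
```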
The SVM has learned to reproduce what we showed it. However, this does not measure its ability to generalize to cases it has not seen. A classic approach for that is cross-validation: the classifier is trained on one part of the data and tested on another. Scikit-learn provides a very easy-to-use implementation.
|
from sklearn.model_selection import cross_val_score
scores = cross_val_score(clf, X, y, cv=7)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
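cross_val_score hides the mechanics: the data are split into cv folds, the model is trained on cv-1 folds and scored on the held-out fold, and the per-fold scores are averaged. A self-contained NumPy sketch of the same procedure, with a toy threshold classifier instead of the notebook's SVM (illustrative only):

```python
import numpy as np

X = np.linspace(0.0, 1.0, 40)
y = (X > 0.5).astype(int)  # a trivially learnable target

# Manual 4-fold cross-validation: train on 3 folds, score on the held-out one
folds = np.array_split(np.arange(40), 4)
scores = []
for k in range(4):
    test_idx = folds[k]
    train_idx = np.concatenate([folds[j] for j in range(4) if j != k])
    # "Training": threshold at the midpoint between the two class means
    thr = (X[train_idx][y[train_idx] == 0].mean() +
           X[train_idx][y[train_idx] == 1].mean()) / 2
    pred = (X[test_idx] > thr).astype(int)
    scores.append(np.mean(pred == y[test_idx]))
print(np.mean(scores), np.std(scores))
```

Averaging fold scores is exactly what the printed mean/std line in the cell above summarizes.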
Over the 7 partitions of our data, the SVM predicts passenger survival correctly in 77% of cases, with a standard deviation of 0.04.
To improve the results, we can add age to the features. We must however be careful with the missing NaN values, so we will use a new column AgeFilled containing the age, or the median age when the age is missing.
|
df['AgeFilled'] = df.Age # copy the Age column
df.loc[df.AgeFilled.isnull(), 'AgeFilled'] = df[df.Age.isnull()]['AgeFill'] # use the median age where the age is missing
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
We can now build a new $X$ that includes age in addition to sex and class, and check whether this improves the SVM's performance.
|
from sklearn.model_selection import cross_val_score
X = df[['Gender', 'Pclass', 'AgeFilled']].to_numpy()
scores = cross_val_score(svm.SVC(), X, y, cv=7)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
|
2-AnalyseDuTitanic.ipynb
|
sylvchev/coursMLpython
|
unlicense
|
Now we need to generate the regressors (X) and target variable (y) for train and validation. A 2-D array of regressors and a 1-D array of targets are created from the original 1-D series of scaled air pressure (scaled_PRES) in the DataFrames. For this time series forecasting model, the past seven days of observations are used to predict the next day. This is equivalent to an AR(7) model. We define a function which takes the original time series and the number of timesteps in the regressors as input and generates the arrays X and y.
|
def makeXy(ts, nb_timesteps):
"""
Input:
ts: original time series
nb_timesteps: number of time steps in the regressors
Output:
X: 2-D array of regressors
y: 1-D array of target
"""
X = []
y = []
for i in range(nb_timesteps, ts.shape[0]):
X.append(list(ts.loc[i-nb_timesteps:i-1]))
y.append(ts.loc[i])
X, y = np.array(X), np.array(y)
return X, y
X_train, y_train = makeXy(df_train['scaled_PRES'], 7)
print('Shape of train arrays:', X_train.shape, y_train.shape)
X_val, y_val = makeXy(df_val['scaled_PRES'], 7)
print('Shape of validation arrays:', X_val.shape, y_val.shape)
|
time series regression/DL aproach for timeseries/Air Pressure LSTM.ipynb
|
Diyago/Machine-Learning-scripts
|
apache-2.0
|
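The makeXy loop builds each window with Python-level appends; the same X, y framing can be produced in one vectorized step. A sketch on a toy series (make_xy_vectorized is a hypothetical helper name; it assumes the same AR(7)-style framing as makeXy):

```python
import numpy as np

def make_xy_vectorized(ts, nb_timesteps):
    """Same framing as makeXy: each row of X holds the previous
    nb_timesteps values and y holds the value that follows them."""
    ts = np.asarray(ts)
    # Index matrix: row i selects ts[i : i + nb_timesteps]
    idx = np.arange(nb_timesteps)[None, :] + np.arange(len(ts) - nb_timesteps)[:, None]
    return ts[idx], ts[nb_timesteps:]

X, y = make_xy_vectorized(np.arange(10.0), 7)
print(X.shape, y.shape)  # (3, 7) (3,)
print(X[0], y[0])        # [0. 1. 2. 3. 4. 5. 6.] 7.0
```

Fancy indexing with the 2-D idx array materializes every window at once, which is noticeably faster than the loop on long series.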
The input to RNN layers must be of shape (number of samples, number of timesteps, number of features per timestep). In this case we are modeling only the air pressure, hence the number of features per timestep is one. The number of timesteps is seven and the number of samples is the same as the number of samples in X_train and X_val, which are reshaped to 3-D arrays.
|
X_train, X_val = X_train.reshape((X_train.shape[0], X_train.shape[1], 1)),\
X_val.reshape((X_val.shape[0], X_val.shape[1], 1))
print('Shape of 3D arrays:', X_train.shape, X_val.shape)
|
time series regression/DL aproach for timeseries/Air Pressure LSTM.ipynb
|
Diyago/Machine-Learning-scripts
|
apache-2.0
|
Now we define the LSTM network using the Keras functional API. In this approach a layer is declared as the input of the following layer at the time the next layer is defined.
|
from keras.layers import Dense, Input, Dropout
from keras.layers.recurrent import LSTM
from keras.optimizers import SGD
from keras.models import Model
from keras.models import load_model
from keras.callbacks import ModelCheckpoint
#Define the input layer with shape (7, 1) and type float32; the batch dimension (None) is added automatically
input_layer = Input(shape=(7,1), dtype='float32')
#LSTM layer is defined for seven timesteps
lstm_layer = LSTM(64, input_shape=(7,1), return_sequences=False)(input_layer)
dropout_layer = Dropout(0.2)(lstm_layer)
#Finally the output layer gives prediction for the next day's air pressure.
output_layer = Dense(1, activation='linear')(dropout_layer)
|
time series regression/DL aproach for timeseries/Air Pressure LSTM.ipynb
|
Diyago/Machine-Learning-scripts
|
apache-2.0
|
The input, LSTM, and output layers will now be packed inside a Model, which is a wrapper class for training and making
predictions. Mean absolute error (MAE) is used as the loss function, as in the compile call below.
The network's weights are optimized by the Adam algorithm. Adam stands for adaptive moment estimation
and has been a popular choice for training deep neural networks. Unlike stochastic gradient descent, Adam uses
a different learning rate for each weight and updates each one separately as training progresses. The learning rate of a weight is updated based on exponentially weighted moving averages of the weight's gradients and the squared gradients.
|
ts_model = Model(inputs=input_layer, outputs=output_layer)
ts_model.compile(loss='mae', optimizer='adam')
ts_model.summary()
"""
The model is trained by calling the fit function on the model object and passing the X_train and y_train. The training
is done for a predefined number of epochs. Additionally, batch_size defines the number of samples of train set to be
used for a instance of back propagation.The validation dataset is also passed to evaluate the model after every epoch
completes. A ModelCheckpoint object tracks the loss function on the validation set and saves the model for the epoch,
at which the loss function has been minimum.
"""
save_weights_at = os.path.join('keras_models', 'PRSA_data_Air_Pressure_LSTM_weights.{epoch:02d}-{val_loss:.4f}.hdf5')
save_best = ModelCheckpoint(save_weights_at, monitor='val_loss', verbose=0,
save_best_only=True, save_weights_only=False, mode='min',
period=1)
ts_model.fit(x=X_train, y=y_train, batch_size=16, epochs=20,
verbose=1, callbacks=[save_best], validation_data=(X_val, y_val),
shuffle=True)
|
time series regression/DL aproach for timeseries/Air Pressure LSTM.ipynb
|
Diyago/Machine-Learning-scripts
|
apache-2.0
|
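The per-weight update rule described above can be written out explicitly. A minimal NumPy sketch of a single Adam step (default hyperparameters from the Adam paper; illustrative, not Keras's internal implementation):

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for weights w given gradient g; m and v are the
    running (biased) estimates of the first and second moments."""
    m = beta1 * m + (1 - beta1) * g           # EWMA of gradients
    v = beta2 * v + (1 - beta2) * g ** 2      # EWMA of squared gradients
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-weight step size
    return w, m, v

w = np.zeros(3)
m = np.zeros(3)
v = np.zeros(3)
g = np.array([1.0, -1.0, 0.5])
w, m, v = adam_step(w, g, m, v, t=1)
print(w)  # on the first step each weight moves by ~lr against its gradient
```

The division by sqrt(v_hat) is what gives each weight its own effective learning rate.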
Predictions are made for the air pressure from the best saved model. The model's predictions, which are on the scaled air pressure, are inverse-transformed to get predictions on the original air pressure. The goodness-of-fit measure R-squared is also calculated for the predictions on the original variable.
|
best_model = load_model(os.path.join('keras_models', 'PRSA_data_Air_Pressure_LSTM_weights.06-0.0087.hdf5'))
preds = best_model.predict(X_val)
pred_PRES = scaler.inverse_transform(preds)
pred_PRES = np.squeeze(pred_PRES)
from sklearn.metrics import r2_score
r2 = r2_score(df_val['PRES'].loc[7:], pred_PRES)
print('R-squared on validation set of the original air pressure:', r2)
#Let's plot the first 50 actual and predicted values of air pressure.
plt.figure(figsize=(5.5, 5.5))
plt.plot(range(50), df_val['PRES'].loc[7:56], linestyle='-', marker='*', color='r')
plt.plot(range(50), pred_PRES[:50], linestyle='-', marker='.', color='b')
plt.legend(['Actual','Predicted'], loc=2)
plt.title('Actual vs Predicted Air Pressure')
plt.ylabel('Air Pressure')
plt.xlabel('Index')
plt.savefig('plots/ch5/B07887_05_11.png', format='png', dpi=300)
|
time series regression/DL aproach for timeseries/Air Pressure LSTM.ipynb
|
Diyago/Machine-Learning-scripts
|
apache-2.0
|
1. Load DataBunch
You have to redefine or import any custom functions you defined in the last step, because the data bunch is going to look for them.
|
def pass_through(x):
return x
data_lm = load_data(path, bs=120)
|
Issue_Embeddings/notebooks/03_Create_Model.ipynb
|
kubeflow/code-intelligence
|
mit
|
2. Instantiate Language Model
We are going to use the AWD_LSTM architecture with its default parameters:
|
learn = language_model_learner(data=data_lm,
arch=AWD_LSTM,
pretrained=False)
|
Issue_Embeddings/notebooks/03_Create_Model.ipynb
|
kubeflow/code-intelligence
|
mit
|
3. Train Language Model
Find the best learning rate
|
learn.lr_find()
learn.recorder.plot()
best_lr = 1e-2 * 2
|
Issue_Embeddings/notebooks/03_Create_Model.ipynb
|
kubeflow/code-intelligence
|
mit
|
Define callbacks
|
escb = EarlyStoppingCallback(learn=learn, patience=5)
smcb = SaveModelCallback(learn=learn)
rpcb = ReduceLROnPlateauCallback(learn=learn, patience=3)
sgcb = ShowGraph(learn=learn)
csvcb = CSVLogger(learn=learn)
callbacks = [escb, smcb, rpcb, sgcb, csvcb]
|
Issue_Embeddings/notebooks/03_Create_Model.ipynb
|
kubeflow/code-intelligence
|
mit
|
Train Model
Note: I don't actually do this in a notebook. I execute training from a shell script, run_train.sh, at the root of this repository.
|
learn.fit_one_cycle(cyc_len=1,
max_lr=1e-3,
tot_epochs=10,
callbacks=callbacks)
|
Issue_Embeddings/notebooks/03_Create_Model.ipynb
|
kubeflow/code-intelligence
|
mit
|
The researchers sequenced biological material from the study participants to find out which of these genes are most active in the cells of sick people.
Sequencing here means measuring the activity of the genes in a sample by counting the amount of RNA corresponding to each gene.
The data for this assignment contain exactly this quantitative measure of activity for each of the 15748 genes in each of the 72 people who took part in the experiment.
You will need to identify the genes whose activity differs statistically significantly between people at different stages of the disease.
In addition, you will need to assess not only the statistical but also the practical significance of these results, a notion often used in studies of this kind.
Each person's diagnosis is given in the column named "Diagnosis".
Practical significance of a change
The goal of the study is to find genes whose mean expression differs not only statistically significantly but also substantially. In expression studies, a metric called fold change is commonly used for this. It is defined as follows:
Fc(C,T) = T/C when T > C and -C/T when T < C,
where C, T are the mean expression values of the gene in the control and treatment groups respectively. In essence, fold change shows by what factor the means of the two samples differ.
Part 1: applying Student's t-test
In the first part you need to apply Student's t-test to check the hypothesis of equal means in two independent samples. The test must be applied to each gene twice:
for the groups normal (control) and early neoplasia (treatment)
for the groups early neoplasia (control) and cancer (treatment)
As the answer for this part, report the number of statistically significant differences found with Student's t-test, i.e. the number of genes whose p-value for this test is below the significance level.
|
#Diagnosis types
types
#Split data by groups
gen_normal = gen.loc[gen.Diagnosis == 'normal']
gen_neoplasia = gen.loc[gen.Diagnosis == 'early neoplasia']
gen_cancer = gen.loc[gen.Diagnosis == 'cancer']
|
4 Stats for data analysis/Homework/15 project genom cancer/Genom cancer.ipynb
|
maxis42/ML-DA-Coursera-Yandex-MIPT
|
mit
|
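As a quick check of the definition above, a direct translation into code (illustrative only; a fold_change helper with the same logic is defined in a later cell):

```python
def fc(C, T):
    """Fold change as defined above: T/C when T > C, -C/T when T < C."""
    return T / C if T >= C else -C / T

print(fc(2.0, 6.0))   # 3.0  -> mean expression tripled in treatment
print(fc(6.0, 2.0))   # -3.0 -> mean expression three times lower in treatment
```

The sign encodes the direction of the change while the absolute value gives its magnitude, which is why the assignment thresholds abs(fold change) at 1.5.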
Before using the two-sample Student's t-test, let us check that the distributions in the samples do not deviate substantially from normal by applying the Shapiro-Wilk test.
|
#Shapiro-Wilk test for samples
print('Shapiro-Wilk test for samples')
sw_normal = gen_normal.iloc[:,2:].apply(stats.shapiro, axis=0)
sw_normal_p = [p for _, p in sw_normal]
_, sw_normal_p_corr, _, _ = multipletests(sw_normal_p, method='fdr_bh')
sw_neoplasia = gen_neoplasia.iloc[:,2:].apply(stats.shapiro, axis=0)
sw_neoplasia_p = [p for _, p in sw_neoplasia]
_, sw_neoplasia_p_corr, _, _ = multipletests(sw_neoplasia_p, method='fdr_bh')
sw_cancer = gen_cancer.iloc[:,2:].apply(stats.shapiro, axis=0)
sw_cancer_p = [p for _, p in sw_cancer]
_, sw_cancer_p_corr, _, _ = multipletests(sw_cancer_p, method='fdr_bh')
print('Mean corrected p-value for "normal": %.4f' % sw_normal_p_corr.mean())
print('Mean corrected p-value for "early neoplasia": %.4f' % sw_neoplasia_p_corr.mean())
print('Mean corrected p-value for "cancer": %.4f' % sw_cancer_p_corr.mean())
|
4 Stats for data analysis/Homework/15 project genom cancer/Genom cancer.ipynb
|
maxis42/ML-DA-Coursera-Yandex-MIPT
|
mit
|
Since the mean p-value is >> 0.05, we will apply Student's t-test.
|
tt_ind_normal_neoplasia = stats.ttest_ind(gen_normal.iloc[:,2:], gen_neoplasia.iloc[:,2:], equal_var = False)
tt_ind_normal_neoplasia_p = tt_ind_normal_neoplasia[1]
tt_ind_neoplasia_cancer = stats.ttest_ind(gen_neoplasia.iloc[:,2:], gen_cancer.iloc[:,2:], equal_var = False)
tt_ind_neoplasia_cancer_p = tt_ind_neoplasia_cancer[1]
tt_ind_normal_neoplasia_p_5 = tt_ind_normal_neoplasia_p[np.where(tt_ind_normal_neoplasia_p < 0.05)].shape[0]
tt_ind_neoplasia_cancer_p_5 = tt_ind_neoplasia_cancer_p[np.where(tt_ind_neoplasia_cancer_p < 0.05)].shape[0]
print('Normal vs neoplasia samples p-values number below 0.05: %d' % tt_ind_normal_neoplasia_p_5)
print('Neoplasia vs cancer samples p-values number below 0.05: %d' % tt_ind_neoplasia_cancer_p_5)
with open('answer1.txt', 'w') as fout:
fout.write(str(tt_ind_normal_neoplasia_p_5))
with open('answer2.txt', 'w') as fout:
fout.write(str(tt_ind_neoplasia_cancer_p_5))
|
4 Stats for data analysis/Homework/15 project genom cancer/Genom cancer.ipynb
|
maxis42/ML-DA-Coursera-Yandex-MIPT
|
mit
|
Part 2: Holm correction
For this part of the assignment we will need the multitest module from statsmodels.
Here you need to apply the Holm correction to the two sets of p-values obtained in the previous part. Note that since the correction is applied to each of the two sets of p-values separately, the multiple-testing problem across the sets remains.
To eliminate it, it is enough to apply the Bonferroni correction, i.e. to use the significance level 0.05 / 2 instead of 0.05 when further adjusting the p-values with the Holm method.
As the answer to this task, report the number of significant differences in each group after the Holm-Bonferroni correction. This number must take practical significance into account: compute the fold change for each significant change and report the number of significant changes whose absolute fold change is greater than 1.5.
Note that
the multiple-testing correction must be applied to all p-values, not only to those below the confidence level;
when the correction is used at significance level 0.025, the adjusted p-values change but the confidence level does not (that is, to select significant changes the corrected p-values must be compared with the threshold 0.025, not 0.05)!
|
#Holm correction
_, tt_ind_normal_neoplasia_p_corr, _, _ = multipletests(tt_ind_normal_neoplasia_p, method='holm')
_, tt_ind_neoplasia_cancer_p_corr, _, _ = multipletests(tt_ind_neoplasia_cancer_p, method='holm')
#Bonferroni correction
p_corr = np.array([tt_ind_normal_neoplasia_p_corr, tt_ind_neoplasia_cancer_p_corr])
_, p_corr_bonf, _, _ = multipletests(p_corr, is_sorted=True, method='bonferroni')
p_corr_bonf_normal_neoplasia_p_5 = p_corr_bonf[0][np.where(p_corr_bonf[0] < 0.05)].shape[0]
p_corr_bonf_neoplasia_cancer_p_5 = p_corr_bonf[1][np.where(p_corr_bonf[1] < 0.05)].shape[0]
print('Normal vs neoplasia samples p-values number below 0.05: %d' % p_corr_bonf_normal_neoplasia_p_5)
print('Neoplasia vs cancer samples p-values number below 0.05: %d' % p_corr_bonf_neoplasia_cancer_p_5)
def fold_change(C, T, limit=1.5):
'''
C - control sample
T - treatment sample
'''
if T >= C:
fc_stat = T / C
else:
fc_stat = -C / T
return (np.abs(fc_stat) > limit), fc_stat
#Normal vs neoplasia samples
gen_p_corr_bonf_normal_p_5 = gen_normal.iloc[:,2:].iloc[:, np.where(p_corr_bonf[0] < 0.05)[0]]
gen_p_corr_bonf_neoplasia0_p_5 = gen_neoplasia.iloc[:,2:].iloc[:, np.where(p_corr_bonf[0] < 0.05)[0]]
fc_corr_bonf_normal_neoplasia_p_5 = 0
for norm, neopl in zip(gen_p_corr_bonf_normal_p_5.mean(), gen_p_corr_bonf_neoplasia0_p_5.mean()):
accept, _ = fold_change(norm, neopl)
if accept: fc_corr_bonf_normal_neoplasia_p_5 += 1
#Neoplasia vs cancer samples
gen_p_corr_bonf_neoplasia1_p_5 = gen_neoplasia.iloc[:,2:].iloc[:, np.where(p_corr_bonf[1] < 0.05)[0]]
gen_p_corr_bonf_cancer_p_5 = gen_cancer.iloc[:,2:].iloc[:, np.where(p_corr_bonf[1] < 0.05)[0]]
fc_corr_bonf_neoplasia_cancer_p_5 = 0
for neopl, canc in zip(gen_p_corr_bonf_neoplasia1_p_5.mean(), gen_p_corr_bonf_cancer_p_5.mean()):
accept, _ = fold_change(neopl, canc)
if accept: fc_corr_bonf_neoplasia_cancer_p_5 += 1
print('Normal vs neoplasia samples fold change above 1.5: %d' % fc_corr_bonf_normal_neoplasia_p_5)
print('Neoplasia vs cancer samples fold change above 1.5: %d' % fc_corr_bonf_neoplasia_cancer_p_5)
with open('answer3.txt', 'w') as fout:
fout.write(str(fc_corr_bonf_normal_neoplasia_p_5))
with open('answer4.txt', 'w') as fout:
fout.write(str(fc_corr_bonf_neoplasia_cancer_p_5))
|
4 Stats for data analysis/Homework/15 project genom cancer/Genom cancer.ipynb
|
maxis42/ML-DA-Coursera-Yandex-MIPT
|
mit
|
Part 3: Benjamini-Hochberg correction
This part of the assignment is analogous to the second part, except that the Benjamini-Hochberg method must be used.
Note that correction methods controlling the FDR allow more type I errors and have greater power than methods controlling the FWER. Greater power means these methods make fewer type II errors (that is, they are better at detecting deviations from H0 when they exist, and they also reject H0 more often when there are no differences).
As the answer to this task, report the number of significant differences in each group after the Benjamini-Hochberg correction; as in part two, count only differences with abs(fold change) > 1.5.
|
#Benjamini-Hochberg correction
_, tt_ind_normal_neoplasia_p_corr, _, _ = multipletests(tt_ind_normal_neoplasia_p, method='fdr_bh')
_, tt_ind_neoplasia_cancer_p_corr, _, _ = multipletests(tt_ind_neoplasia_cancer_p, method='fdr_bh')
#Bonferroni correction
p_corr = np.array([tt_ind_normal_neoplasia_p_corr, tt_ind_neoplasia_cancer_p_corr])
_, p_corr_bonf, _, _ = multipletests(p_corr, is_sorted=True, method='bonferroni')
p_corr_bonf_normal_neoplasia_p_5 = p_corr_bonf[0][np.where(p_corr_bonf[0] < 0.05)].shape[0]
p_corr_bonf_neoplasia_cancer_p_5 = p_corr_bonf[1][np.where(p_corr_bonf[1] < 0.05)].shape[0]
print('Normal vs neoplasia samples p-values number below 0.05: %d' % p_corr_bonf_normal_neoplasia_p_5)
print('Neoplasia vs cancer samples p-values number below 0.05: %d' % p_corr_bonf_neoplasia_cancer_p_5)
#Normal vs neoplasia samples
gen_p_corr_bonf_normal_p_5 = gen_normal.iloc[:,2:].iloc[:, np.where(p_corr_bonf[0] < 0.05)[0]]
gen_p_corr_bonf_neoplasia0_p_5 = gen_neoplasia.iloc[:,2:].iloc[:, np.where(p_corr_bonf[0] < 0.05)[0]]
fc_corr_bonf_normal_neoplasia_p_5 = 0
for norm, neopl in zip(gen_p_corr_bonf_normal_p_5.mean(), gen_p_corr_bonf_neoplasia0_p_5.mean()):
accept, _ = fold_change(norm, neopl)
if accept: fc_corr_bonf_normal_neoplasia_p_5 += 1
#Neoplasia vs cancer samples
gen_p_corr_bonf_neoplasia1_p_5 = gen_neoplasia.iloc[:,2:].iloc[:, np.where(p_corr_bonf[1] < 0.05)[0]]
gen_p_corr_bonf_cancer_p_5 = gen_cancer.iloc[:,2:].iloc[:, np.where(p_corr_bonf[1] < 0.05)[0]]
fc_corr_bonf_neoplasia_cancer_p_5 = 0
for neopl, canc in zip(gen_p_corr_bonf_neoplasia1_p_5.mean(), gen_p_corr_bonf_cancer_p_5.mean()):
accept, _ = fold_change(neopl, canc)
if accept: fc_corr_bonf_neoplasia_cancer_p_5 += 1
print('Normal vs neoplasia samples fold change above 1.5: %d' % fc_corr_bonf_normal_neoplasia_p_5)
print('Neoplasia vs cancer samples fold change above 1.5: %d' % fc_corr_bonf_neoplasia_cancer_p_5)
with open('answer5.txt', 'w') as fout:
fout.write(str(fc_corr_bonf_normal_neoplasia_p_5))
with open('answer6.txt', 'w') as fout:
fout.write(str(fc_corr_bonf_neoplasia_cancer_p_5))
|
4 Stats for data analysis/Homework/15 project genom cancer/Genom cancer.ipynb
|
maxis42/ML-DA-Coursera-Yandex-MIPT
|
mit
|
Custom layers
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/tutorials/customization/custom_layers"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/customization/custom_layers.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/customization/custom_layers.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/customization/custom_layers.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
We recommend using tf.keras, a high-level API, for building neural networks. Most TensorFlow APIs can be used with eager execution.
|
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))
|
site/ko/tutorials/customization/custom_layers.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Layers: common sets of useful operations
Most of the time when writing code for machine learning, you want to work at a higher level of abstraction than individual operations and the manipulation of individual variables.
Many machine learning models are expressible as the composition and stacking of relatively simple layers. TensorFlow also provides a set of standard layers, so you can easily write your own application-specific layers from scratch or as compositions of existing layers.
TensorFlow includes the full Keras API in the tf.keras package, and Keras layers are very useful when building your own models.
|
# In the tf.keras.layers package, layers are objects. To construct a layer,
# simply construct the object. Most layers take as a first argument the number
# of output dimensions / channels.
layer = tf.keras.layers.Dense(100)
# The number of input dimensions is often unnecessary, as it can be inferred
# the first time the layer is used, but it can be provided if you want to
# specify it manually, which is useful in some complex models.
layer = tf.keras.layers.Dense(10, input_shape=(None, 5))
|
site/ko/tutorials/customization/custom_layers.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
The full list of pre-existing layers can be found in the documentation. It includes Dense (a fully connected layer), Conv2D, LSTM, BatchNormalization, Dropout, and many others.
|
# To use a layer, simply call it.
layer(tf.zeros([10, 5]))
# Layers have many useful methods. For example, you can inspect all variables
# in a layer using `layer.variables` and trainable variables using
# `layer.trainable_variables`. In this case a fully-connected layer
# will have variables for weights and biases.
layer.variables
# The variables are also accessible through nice accessors
layer.kernel, layer.bias
|
site/ko/tutorials/customization/custom_layers.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Implementing custom layers
The best way to implement your own layer is to extend the tf.keras.layers.Layer class and implement:
__init__: where you can do all input-independent initialization.
build: where you know the shapes of the input tensors and can do the rest of the initialization.
call: where you do the forward computation.
Note that you don't have to wait until build is called to create your variables; you can also create them in __init__. However, the advantage of creating them in build is that it enables late variable creation based on the shape of the inputs the layer will operate on. Creating variables in __init__, on the other hand, means that the shapes required to create the variables must be specified explicitly.
|
class MyDenseLayer(tf.keras.layers.Layer):
def __init__(self, num_outputs):
super(MyDenseLayer, self).__init__()
self.num_outputs = num_outputs
def build(self, input_shape):
self.kernel = self.add_weight("kernel",
shape=[int(input_shape[-1]),
self.num_outputs])
def call(self, inputs):
return tf.matmul(inputs, self.kernel)
layer = MyDenseLayer(10)
_ = layer(tf.zeros([10, 5])) # Calling the layer `.builds` it.
print([var.name for var in layer.trainable_variables])
|
site/ko/tutorials/customization/custom_layers.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Since readers of your code will be familiar with the behavior of standard layers, your overall code will be easier to read and maintain if you use standard layers whenever possible. If you want a layer that is not present in tf.keras.layers, consider filing a GitHub issue or, better yet, sending a pull request.
Models: composing layers
Many interesting layer-like things in machine learning models are implemented by composing existing layers. For example, each residual block in a ResNet is a composition of convolutions, batch normalizations, and a shortcut.
You typically inherit from keras.Model when you need the model methods such as Model.fit, Model.evaluate, and Model.save (see Custom Keras layers and models for details).
Another feature provided by keras.Model (instead of keras.layers.Layer) is that, in addition to tracking variables, a keras.Model also tracks its internal layers, making them easier to inspect.
For example, here is a ResNet block:
|
class ResnetIdentityBlock(tf.keras.Model):
def __init__(self, kernel_size, filters):
super(ResnetIdentityBlock, self).__init__(name='')
filters1, filters2, filters3 = filters
self.conv2a = tf.keras.layers.Conv2D(filters1, (1, 1))
self.bn2a = tf.keras.layers.BatchNormalization()
self.conv2b = tf.keras.layers.Conv2D(filters2, kernel_size, padding='same')
self.bn2b = tf.keras.layers.BatchNormalization()
self.conv2c = tf.keras.layers.Conv2D(filters3, (1, 1))
self.bn2c = tf.keras.layers.BatchNormalization()
def call(self, input_tensor, training=False):
x = self.conv2a(input_tensor)
x = self.bn2a(x, training=training)
x = tf.nn.relu(x)
x = self.conv2b(x)
x = self.bn2b(x, training=training)
x = tf.nn.relu(x)
x = self.conv2c(x)
x = self.bn2c(x, training=training)
x += input_tensor
return tf.nn.relu(x)
block = ResnetIdentityBlock(1, [1, 2, 3])
_ = block(tf.zeros([1, 2, 3, 3]))
block.layers
len(block.variables)
block.summary()
|
site/ko/tutorials/customization/custom_layers.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Much of the time, however, models that compose many layers simply call one layer after another. This can be done in very little code using tf.keras.Sequential.
|
my_seq = tf.keras.Sequential([tf.keras.layers.Conv2D(1, (1, 1),
input_shape=(
None, None, 3)),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Conv2D(2, 1,
padding='same'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Conv2D(3, (1, 1)),
tf.keras.layers.BatchNormalization()])
my_seq(tf.zeros([1, 2, 3, 3]))
my_seq.summary()
|
site/ko/tutorials/customization/custom_layers.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
|
def fully_connected(prev_layer, num_units, is_training):
    """
    Create a fully connected layer with the given layer as input and the given number of neurons.
    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param num_units: int
        The size of the layer. That is, the number of units, nodes, or neurons.
    :param is_training: bool or Tensor
        Indicates whether or not the network is currently training, which tells the batch
        normalization layer whether it should update or use its population statistics.
    :returns Tensor
        A new fully connected layer
    """
    layer = tf.layers.dense(prev_layer, num_units, use_bias=False)
    layer = tf.layers.batch_normalization(layer, training=is_training)
    return tf.nn.relu(layer)
|
batch-norm/Batch_Normalization_Exercises_MySol.ipynb
|
guyk1971/deep-learning
|
mit
|
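The math that tf.layers.batch_normalization handles during training can be sketched in a few lines: normalize each feature over the batch, then apply a learned scale (gamma) and shift (beta). A NumPy sketch of the training-mode forward pass (inference instead uses running population statistics, which is exactly what the is_training flag controls):

```python
import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-5):
    """Batch normalization forward pass in training mode.
    x has shape (batch, features); gamma and beta are per-feature."""
    mu = x.mean(axis=0)                     # per-feature batch mean
    var = x.var(axis=0)                     # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)   # zero mean, unit variance
    return gamma * x_hat + beta             # learned scale and shift

x = np.array([[1.0, 2.0], [3.0, 6.0]])
out = batch_norm_train(x, gamma=np.ones(2), beta=np.zeros(2))
print(out.mean(axis=0), out.var(axis=0))  # ~[0 0] and ~[1 1]
```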
TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
|
def conv_layer(prev_layer, layer_depth, is_training):
    """
    Create a convolutional layer with the given layer as input.
    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param layer_depth: int
        We'll set the strides and number of feature maps based on the layer's depth in the network.
        This is *not* a good way to make a CNN, but it helps us create this example with very little code.
    :param is_training: bool or Tensor
        Indicates whether or not the network is currently training, which tells the batch
        normalization layer whether it should update or use its population statistics.
    :returns Tensor
        A new convolutional layer
    """
    strides = 2 if layer_depth % 3 == 0 else 1
    conv_layer = tf.layers.conv2d(prev_layer, layer_depth * 4, 3, strides, 'same', use_bias=False)
    conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)
    return tf.nn.relu(conv_layer)
|
batch-norm/Batch_Normalization_Exercises_MySol.ipynb
|
guyk1971/deep-learning
|
mit
|
TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.
|
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
is_training = tf.placeholder(tf.bool,name='is_training')
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i,is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100,is_training)
# Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
# Tell TensorFlow to update the population statistics while training
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training:False})
print(
'Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training:False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy, feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training: False})
print("Accuracy on 100 samples:", correct / 100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
|
batch-norm/Batch_Normalization_Exercises_MySol.ipynb
|
guyk1971/deep-learning
|
mit
|
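The reason `is_training` must be threaded through every layer is that batch normalization behaves differently in the two modes: during training it normalizes with the current batch's statistics and accumulates running averages; at inference it normalizes with those accumulated population statistics. A minimal NumPy sketch of that split (a toy illustration, not the notebook's TensorFlow code — the class name and momentum value are made up here):

```python
import numpy as np

class SimpleBatchNorm:
    """Toy batch-norm layer: batch statistics in training, population statistics at inference."""
    def __init__(self, dim, momentum=0.99, eps=1e-5):
        self.gamma, self.beta = np.ones(dim), np.zeros(dim)
        self.pop_mean, self.pop_var = np.zeros(dim), np.ones(dim)
        self.momentum, self.eps = momentum, eps

    def __call__(self, x, is_training):
        if is_training:
            mean, var = x.mean(axis=0), x.var(axis=0)
            # This running update is what TensorFlow queues in GraphKeys.UPDATE_OPS,
            # which is why the optimizer is wrapped in tf.control_dependencies above.
            self.pop_mean = self.momentum * self.pop_mean + (1 - self.momentum) * mean
            self.pop_var = self.momentum * self.pop_var + (1 - self.momentum) * var
        else:
            mean, var = self.pop_mean, self.pop_var
        return self.gamma * (x - mean) / np.sqrt(var + self.eps) + self.beta
```

In training mode each output batch is normalized to roughly zero mean and unit variance; in inference mode the output is deterministic for a given input, which is what makes the single-image scoring loop at the end of the cell work.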
Load into individual parcel tables in chunks
|
def chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in range(0, len(l), n):
        yield l[i:i + n]

with conn.cursor() as cur:
    cur.execute("select table_name from information_schema.tables where table_schema = 'core_logic_2018'")
    res = cur.fetchall()
tables = [x[0] for x in res]

res = []
with conn.cursor() as cur:
    for t in tables:
        cur.execute("select count(1) from core_logic_2018.{}".format(t))
        res.append({'table': t, 'count': cur.fetchone()[0]})

to_do = filter(lambda x: x['count'] == 0, res)

for table_list in chunks(map(lambda x: x['table'], to_do), 50):
    tables = ' '.join(table_list)
    print "Loading {}".format(tables)
    !GDAL_MAX_DATASET_POOL_SIZE=100 ogr2ogr -f "PostgreSQL" PG:"$conn_str" ut_parcel_premium.gdb/ $tables -progress -lco SCHEMA=core_logic_2018 -lco OVERWRITE=yes --config PG_USE_COPY YES
|
sources/parcels/notebooks/parcel-loading.ipynb
|
FireCARES/data
|
mit
|
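The `chunks` generator in the cell above limits each `ogr2ogr` invocation to at most 50 layer names, so a partial failure only affects one batch. A standalone illustration (Python 3, with hypothetical table names — the real names come from `information_schema.tables`):

```python
def chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in range(0, len(l), n):
        yield l[i:i + n]

# Hypothetical example: 126 empty tables split into ogr2ogr batches of at most 50
tables = ['parcel_table_{}'.format(i) for i in range(126)]
batches = list(chunks(tables, 50))  # three batches: 50, 50, and 26 names
```

The final chunk is simply whatever remains, so no padding or special-casing is needed before joining each batch into the space-separated layer list passed on the command line.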