MTF plot of the first lens
l1.ipzCaptureWindow('Mtf', percent=15, gamma=0.5)
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
Executing a ZPL macro. Lastly, here is an example of how to execute a ZPL macro using PyZDDE. Since ZEMAX can only execute ZPL macros present in a designated folder (generally the default macro folder in the data folder), the appropriate macro folder path needs to be set if it is not the default one.
l1.zSetMacroPath(r"C:\PROGRAMSANDEXPERIMENTS\ZEMAX\Macros")
The following command executes the ZPL macro 'GLOBAL' provided by ZEMAX. The macro computes the global vertex coordinates and orientations surface by surface and outputs a text window within the ZEMAX environment. Maximize (if required) the ZEMAX application window to see the output after executing the follo...
l1.zExecuteZPLMacro('GLO')
Close the DDE links
pyz.closeLink() # Also, l1.close(); l2.close()
Berry phase calculation for graphene This tutorial will describe a complete walk-through of how to calculate the Berry phase for graphene. Creating the geometry to investigate Our system of interest will be the pristine graphene system with the on-site terms shifted by $\pm\delta$.
graphene = geom.graphene()
H = Hamiltonian(graphene)
H.construct([(0.1, 1.44), (0, -2.7)])
docs/tutorials/tutorial_es_2.ipynb
zerothi/sids
lgpl-3.0
H now contains the pristine graphene tight-binding model. The anti-symmetric Hamiltonian is constructed like this:
H_bp = H.copy()  # an exact copy
H_bp[0, 0] = 0.1
H_bp[1, 1] = -0.1
Comparing electronic structures. Before proceeding to the Berry phase calculation, let's compare the band structure and DOS of the two models. The anti-symmetric Hamiltonian opens a gap around the Dirac cone; a zoom on the Dirac cone shows this.
band = BandStructure(H, [[0, 0.5, 0], [1/3, 2/3, 0], [0.5, 0.5, 0]], 400, [r"$M$", r"$K$", r"$M'$"])
band.set_parent(H)
band_array = band.apply.array
bs = band_array.eigh()
band.set_parent(H_bp)
bp_bs = band_array.eigh()
lk, kt, kl = band.lineark(True)
plt.xticks(kt, kl)
plt.xlim(0, lk[-1])
plt.ylim([-.3, .3])
plt.yl...
The gap opened equals the difference between the two on-site terms, in this case $0.2\,\mathrm{eV}$. Let's, for completeness' sake, calculate the DOS close to the Dirac point for the two systems. To resolve the gap, the distribution function (in this case the Gaussian) needs to have a small smearing value to e...
bz = MonkhorstPack(H, [41, 41, 1], displacement=[1/3, 2/3, 0], size=[.125, .125, 1])
bz_average = bz.apply.average  # specify the Brillouin zone to perform an average
The above MonkhorstPack initialization creates a Monkhorst-Pack grid centered on the $K$ point with a reduced Brillouin zone size of $1/8$th of the entire Brillouin zone. Essentially this only calculates the DOS in a small $k$-region around the $K$-point. Since in this case we know the electronic structure of ...
plt.scatter(bz.k[:, 0], bz.k[:, 1], 2)
plt.xlabel(r'$k_x$ [$b_x$]')
plt.ylabel(r'$k_y$ [$b_y$]')
Before proceeding to the Berry phase calculation we calculate the DOS in an energy region around the Dirac-point to confirm the band-gap.
E = np.linspace(-0.5, 0.5, 1000)
dist = get_distribution('gaussian', 0.03)
bz.set_parent(H)
plt.plot(E, bz_average.DOS(E, distribution=dist), label='Graphene')
bz.set_parent(H_bp)
plt.plot(E, bz_average.DOS(E, distribution=dist), label='Graphene anti')
plt.legend()
plt.ylim([0, None])
plt.xlabel('$E - E_F$ [eV]')
pl...
Berry phase calculation To calculate the Berry phase we are going to perform a discretized integration of the Bloch states on a closed loop. We are going to calculate it around the $K$-point with a given radius. After having created the
# Number of discretizations
N = 50
# Circle radius in 1/Ang
kR = 0.01
# Normal vector (in units of reciprocal lattice vectors)
normal = [0, 0, 1]
# Origin (in units of reciprocal lattice vectors)
origin = [1/3, 2/3, 0]
circle = BrillouinZone.param_circle(H, N, kR, normal, origin)
plt.plot(circle.k[:, 0], circle.k[:, 1])...
The above plot shows a skewed circle because the $k$-points in the Brillouin zone object are stored in units of reciprocal lattice vectors, i.e. the circle is perfect in reciprocal space. Note that the Berry phase calculation below ensures the loop is closed by also taking into account the first and last point. T...
k = circle.tocartesian(circle.k)
plt.plot(k[:, 0], k[:, 1])
plt.xlabel(r'$k_x$ [1/Ang]')
plt.ylabel(r'$k_y$ [1/Ang]')
plt.gca().set_aspect('equal')
Now we are ready to calculate the Berry phase. We calculate it for both graphene and the anti-symmetric graphene, using the first band only, the second band only, and both bands:
circle.set_parent(H)
print('Pristine graphene (0): {:.5f} rad'.format(electron.berry_phase(circle, sub=0)))
print('Pristine graphene (1): {:.5f} rad'.format(electron.berry_phase(circle, sub=1)))
print('Pristine graphene (:): {:.5f} rad'.format(electron.berry_phase(circle)))
circle.set_parent(H_bp)
print('Anti-symmetric...
We now plot the Berry phase as a function of integration radius at a roughly constant discretization. In addition we calculate the Berry phase both on the circle that is perfect in reciprocal space (skewed in reduced coordinates) and on the one that is perfectly circular in units of the reciprocal lattice vectors. This enables a comparison of the integration path.
kRs = np.linspace(0.001, 0.2, 70)
dk = 0.0001
bp = np.empty([2, len(kRs)])
for i, kR in enumerate(kRs):
    circle = BrillouinZone.param_circle(H_bp, dk, kR, normal, origin)
    bp[0, i] = electron.berry_phase(circle, sub=0)
    circle_other = BrillouinZone.param_circle(utils.mathematics.fnorm(H_bp.rcell), dk, kR, norm...
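The sub-band Berry phases above can be reproduced without sisl: the discretized Berry phase is just the phase of the product of overlaps between one band's eigenstates at consecutive $k$-points on the closed loop. A minimal numpy sketch for an effective two-band Dirac Hamiltonian (a stand-in for graphene near $K$; the model and function names here are mine, not sisl's):

```python
import numpy as np

def berry_phase_loop(hk, kpts, band=0):
    """Discretized Berry phase: -Im log of the product of overlaps
    <u_i|u_{i+1}> of one band's eigenvector around a closed k-loop."""
    states = [np.linalg.eigh(hk(k))[1][:, band] for k in kpts]
    states.append(states[0])      # close the loop
    prod = 1.0 + 0.0j
    for u, v in zip(states[:-1], states[1:]):
        prod *= np.vdot(u, v)     # overlap between consecutive points
    return -np.angle(prod)        # gauge-invariant for a closed loop

# Effective two-band Dirac Hamiltonian near K (stand-in for graphene)
def h_dirac(k):
    kx, ky = k
    return np.array([[0.0, kx - 1j * ky],
                     [kx + 1j * ky, 0.0]])

# 50-point circle of radius 0.01 1/Ang around the Dirac point
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
loop = np.stack([0.01 * np.cos(theta), 0.01 * np.sin(theta)], axis=1)
phase = berry_phase_loop(h_dirac, loop)
print(abs(phase))  # pi for the gapless cone
```

For the gapless cone the loop encloses a Berry phase of $\pm\pi$ per band, consistent with the pristine-graphene sub-band values.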
Setup ids
def get_gold_ids(person):
    """Get gold data

    Parameters
    ----------
    person : {"GP", "MES", "KMA", "common_gold_data"}

    Returns
    -------
    pd.Series
    """
    path = Path("/Users/klay6683/Dropbox/Documents/latex_docs/p4_paper1/gold_data")
    return pd.read_csv(path / f"{person}.txt"...
notebooks/clustering development.ipynb
michaelaye/planet4
isc
Here are my comments from the review:
APF0000br5 - seems like the big blotch should have been seen
APF0000bu5 - seems like the middle fan should be there - maybe too strict a cut, not a clustering issue?
APF0000ek1 - the yellow final blotch comes out of nowhere
APF0000pbr - the bottom right blotch seems like it should have survived
APF...
results = execute_in_parallel(process_id, combined)
for id_ in ids:
    print(id_)
    for kind in ['blotch']:
        print(kind)
        dbscanner = DBScanner(savedir='do_cluster_on_large', do_large_run=True)
        # dbscanner.parameter_scan(kind, [0.1, 0.13], [30, 50, 70])  # for blotch
        dbscanner....
single item checking
%matplotlib ipympl
from planet4.catalog_production import ReleaseManager

rm = ReleaseManager('v1.0')
rm.savefolder
db = DBScanner(savedir='examples_for_paper', do_large_run=True)
db.eps_values
db.cluster_and_plot('arp', 'fan')
plotting.plot_image_id_pipeline('gr0', datapath='gold_per_obsid', via_obsid=True)
plt....
testing angle deltas
def angle_to_xy(angle):
    x = np.cos(np.deg2rad(angle))
    y = np.sin(np.deg2rad(angle))
    return np.vstack([x, y]).T

def cluster_angles(angles, delta_angle):
    dist_per_degree = 0.017453070996747883
    X = angle_to_xy(angles)
    clusterer = DBScanner(X, eps=delta_angle*dist_per_degree, min_samples=3)
    retu...
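The constant 0.017453070996747883 in cluster_angles is the chord length per degree on the unit circle, i.e. $2\sin(0.5°)$, which is the distance the Euclidean eps of DBSCAN actually sees once angles are mapped to unit vectors. A quick check in plain numpy, independent of DBScanner:

```python
import numpy as np

def angle_to_xy(angle_deg):
    """Map angles (in degrees) onto points on the unit circle."""
    rad = np.deg2rad(angle_deg)
    return np.stack([np.cos(rad), np.sin(rad)], axis=-1)

# Euclidean (chord) distance between two points 1 degree apart
p, q = angle_to_xy([10.0, 11.0])
chord = np.linalg.norm(p - q)
print(chord)  # 2*sin(0.5 degrees), the dist_per_degree constant
```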
This means all ellipses were clustered together; eps=10 picks 3 out of these 6.
clusterdata = data.iloc[dbscanner.reduced_data[0]]
So clusterdata is just the same as the input data; I just repeat the exact same code steps here for consistency.
clusterdata[blotchcols]
meandata = clusterdata.mean()
meandata

from scipy.stats import circmean

meandata.angle = circmean(clusterdata.angle, high=180)
meandata

n_class_old = data.classification_id.nunique()
n_class_old

# number of classifications that include fans and blotches
f1 = data.marking == 'fan'
f2 = data....
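circmean matters here because ellipse orientations wrap around at 180°, where an arithmetic mean gives nonsense. A small self-contained sketch, numpy only, mirroring what scipy.stats.circmean with high=180 computes:

```python
import numpy as np

def circ_mean_deg(angles, period=180.0):
    """Circular mean of angles with the given period (e.g. 180 for
    ellipse orientations): average the angles as unit vectors
    in the angle space rescaled to a full circle."""
    rad = np.deg2rad(np.asarray(angles) * (360.0 / period))
    mean = np.arctan2(np.sin(rad).sum(), np.cos(rad).sum())
    return (np.rad2deg(mean) % 360.0) * (period / 360.0)

angles = [170.0, 2.0]          # orientations close to the 0/180 wrap-around
print(np.mean(angles))         # 86.0 -- arithmetic mean, wrong for orientations
print(circ_mean_deg(angles))   # 176.0 -- respects the wrap-around
```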
testing cluster_image_name
dbscanner = dbscan.DBScanner()
db = io.DBManager()
data = db.get_obsid_markings('ESP_020568_0950')
image_ids = data.image_id.unique()

%matplotlib nbagg
import seaborn as sns
sns.set_context('notebook')

p4id = markings.ImageID(image_ids[0])
p4id.plot_fans()
p4id.plot_fans(data=p4id.data.query('angle>180'))
p4id.i...
Cluster random samples of obsids
obsids = 'ESP_020476_0950, ESP_011931_0945, ESP_012643_0945, ESP_020783_0950'.split(', ')
obsids

def process_obsid(obsid):
    from planet4.catalog_production import do_cluster_obsids
    do_cluster_obsids(obsid, savedir=obsid)
    return obsid

from nbtools import execute_in_parallel
execute_in_parallel(process_obs...
Counts. Make a bar chart to see the structure of the dataset.
counts = collections.Counter(map(lambda x: x['label'], labels))
print(counts)
for label, cnt in counts.items():
    percents = cnt / len(labels) * 100
    print('{} is {}%'.format(label, round(percents, 2)))

idx = np.arange(len(counts))
rects = plt.bar(idx, list(map(lambda x: x[1], sorted(counts.items()))))
plt.xtick...
notebooks/03-labeled-data.ipynb
podondra/bt-spectraldl
gpl-3.0
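The counting pattern above, stripped to its essentials on a toy list (the label names below are stand-ins, not the notebook's real classes):

```python
import collections

# toy stand-in for the labels list of dicts
labels = [{'label': 'emission'}, {'label': 'emission'},
          {'label': 'absorption'}, {'label': 'double-peak'}]

counts = collections.Counter(item['label'] for item in labels)
for label, cnt in sorted(counts.items()):
    print('{} is {}%'.format(label, round(cnt / len(labels) * 100, 2)))
```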
Classes Preview
f = h5py.File('data/data.hdf5')
spectra = f['spectra']

def plot_class(spectrum, ax, class_name):
    ax.plot(spectrum[0], spectrum[1])
    ax.set_title(class_name)
    ax.set_xlabel('wavelength (Angstrom)')
    ax.set_ylabel('flux')
    ax.axvline(x=6562.8, color='black', label='H-alpha', alpha=0.25)
    ax.legend()
...
Let's Add Labels Add labels to the HDF5 file.
for spectrum in labels:
    ident = spectrum['id'].split('/')[-1]
    spectra[ident].attrs['label'] = int(spectrum['label'])
Visualize All Spectra in a Class
fig, (ax0, ax1, ax3) = plt.subplots(3, 1)
axs = [ax0, ax1, None, ax3]
for ident, data in spectra.items():
    label = spectra[ident].attrs['label']
    if label == 2:
        continue
    axs[label].plot(data[0], data[1], alpha=0.1, lw=0.5)
fig.tight_layout()
Wavelength Ranges: Infimum. This analysis shows that the infimum of the starting wavelengths is 6518.4272. That is pretty high, but H-alpha is at 6562.8 and H-alpha is the main feature. It may shorten the range of values and thus speed up training. I also reviewed some spectra and it is far enough from H-alpha. Therefore 6519 ...
# find the spectrum which starts at the highest value
# x is a tuple, x[1] are the values, [0, 0] is the first wavelength
wave_starts = dict(map(lambda x: (x[0], x[1][0, 0]), spectra.items()))
starts_n, starts_bins, _ = plt.hist(list(wave_starts.values()))
plt.title('wavelength starts')
starts_n, starts_bins
infimum = list(reversed(so...
Supremum. At the ends there is no problem because most spectra are in the first bar.
# find the spectrum which ends at the lowest value
# x is a tuple, x[1] are the values, [0, -1] is the last wavelength
wave_ends = dict(map(lambda x: (x[0], x[1][0, -1]), spectra.items()))
ends_n, ends_bins, _ = plt.hist(list(wave_ends.values()))
plt.title('wavelength ends')
ends_n, ends_bins
supremum = list(sorted(wave_ends.items(), ...
Vertex AI Migration: AutoML Text Entity Extraction
import os

# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""

! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
notebooks/official/migration/UJ7 Vertex SDK AutoML Text Entity Extraction.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Location of Cloud Storage training data. Now set the variable IMPORT_FILE to the location of the JSONL index file in Cloud Storage.
IMPORT_FILE = "gs://cloud-samples-data/language/ucaip_ten_dataset.jsonl"
Create a dataset. Next, create the Dataset resource using the create method of the TextDataset class, which takes the following parameters: display_name: the human-readable name for the Dataset resource; gcs_source: a list of one or more dataset index files to import the ...
dataset = aip.TextDataset.create(
    display_name="NCBI Biomedical" + "_" + TIMESTAMP,
    gcs_source=[IMPORT_FILE],
    import_schema_uri=aip.schema.dataset.ioformat.text.extraction,
)
print(dataset.resource_name)
Example Output: INFO:google.cloud.aiplatform.datasets.dataset:Creating TextDataset INFO:google.cloud.aiplatform.datasets.dataset:Create TextDataset backing LRO: projects/759209241365/locations/us-central1/datasets/3704325042721521664/operations/3193181053544038400 INFO:google.cloud.aiplatform.datasets.dataset:TextDatas...
dag = aip.AutoMLTextTrainingJob(
    display_name="biomedical_" + TIMESTAMP,
    prediction_type="extraction",
)
print(dag)
Example output: <google.cloud.aiplatform.training_jobs.AutoMLTextTrainingJob object at 0x7fc3b6c90f10>. Run the training pipeline. Next, you run the DAG to start the training job by invoking the method run with the following parameters: dataset: the Dataset resource to train the model; model_display_name: the hu...
model = dag.run(
    dataset=dataset,
    model_display_name="biomedical_" + TIMESTAMP,
    training_fraction_split=0.8,
    validation_fraction_split=0.1,
    test_fraction_split=0.1,
)
Example output: INFO:google.cloud.aiplatform.training_jobs:View Training: https://console.cloud.google.com/ai/platform/locations/us-central1/training/8859754745456230400?project=759209241365 INFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/88...
# Get model resource ID
models = aip.Model.list(filter="display_name=biomedical_" + TIMESTAMP)

# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)

model_evaluations = m...
Example output: name: "projects/759209241365/locations/us-central1/models/623915674158235648/evaluations/4280507618583117824" metrics_schema_uri: "gs://google-cloud-aiplatform/schema/modelevaluation/classification_metrics_1.0.0.yaml" metrics { struct_value { fields { key: "auPrc" value { numbe...
test_item_1 = 'Molecular basis of hexosaminidase A deficiency and pseudodeficiency in the Berks County Pennsylvania Dutch.\tFollowing the birth of two infants with Tay-Sachs disease ( TSD ) , a non-Jewish , Pennsylvania Dutch kindred was screened for TSD carriers using the biochemical assay . A high frequency of indivi...
Example Output: INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session: INFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPre...
import json

import tensorflow as tf

bp_iter_outputs = batch_predict_job.iter_outputs()

prediction_results = list()
for blob in bp_iter_outputs:
    if blob.name.split("/")[-1].startswith("prediction"):
        prediction_results.append(blob.name)

tags = list()
for prediction_result in prediction_results:
    gfile_...
Example Output: {'instance': {'content': 'gs://andy-1234-221921aip-20210811180202/test2.txt', 'mimeType': 'text/plain'}, 'prediction': {'ids': ['2208238262504390656', '2208238262504390656', '4827081445820334080', '4827081445820334080', '2208238262504390656', '4827081445820334080', '4827081445820334080'], 'displayNames'...
endpoint = model.deploy()
Example output: INFO:google.cloud.aiplatform.models:Creating Endpoint INFO:google.cloud.aiplatform.models:Create Endpoint backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/4087251132693348352 INFO:google.cloud.aiplatform.models:Endpoint created. Resource name: projects/75...
test_item = 'Molecular basis of hexosaminidase A deficiency and pseudodeficiency in the Berks County Pennsylvania Dutch.\tFollowing the birth of two infants with Tay-Sachs disease ( TSD ) , a non-Jewish , Pennsylvania Dutch kindred was screened for TSD carriers using the biochemical assay . A high frequency of individu...
Example output: Prediction(predictions=[{'displayNames': ['SpecificDisease', 'SpecificDisease', 'SpecificDisease', 'Modifier', 'Modifier', 'Modifier'], 'confidences': [0.9995822906494141, 0.999564528465271, 0.9995641708374023, 0.9993661046028137, 0.9993420839309692, 0.9993830323219299], 'textSegmentStartOffsets': [19.0...
endpoint.undeploy_all()
Subsampling. Words that show up often, such as "the", "of", and "for", don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling (Mikolov et al.). For each word $w_i$ i...
## Your code here
import random

threshold = 1e-5
wordsInt = sorted(int_words)
print(wordsInt[:30])
bins = np.bincount(wordsInt)
print(bins[:30])
frequencies = np.zeros(len(words), dtype=float)
for index, singlebin in enumerate(bins):
    frequencies[index] = singlebin / len(int_words)
print(frequencies...
embeddings/Skip-Gram_word2vec.ipynb
SlipknotTN/udacity-deeplearning-nanodegree
mit
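The discard rule $P(w_i) = 1 - \sqrt{t / f(w_i)}$ can be sketched end-to-end on a toy corpus (the corpus below is illustrative only; in the notebook the frequencies come from the real int_words):

```python
import random

import numpy as np

random.seed(1)
threshold = 1e-5

# toy corpus of word ids; id 0 plays the role of a very frequent word like "the"
int_words = [0] * 1000 + list(range(1, 51)) * 2
counts = np.bincount(int_words)
freqs = counts / len(int_words)

# Mikolov-style discard probability: P(w) = 1 - sqrt(t / f(w))
p_drop = 1.0 - np.sqrt(threshold / freqs)

# keep each occurrence with probability 1 - P(w)
train_words = [w for w in int_words if random.random() > p_drop[w]]
print(len(int_words), len(train_words))
```

The frequent word id 0 gets a discard probability near 1, so almost all of its 1000 occurrences are dropped, while rarer ids are kept comparatively more often.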
Making batches Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. From Mikolov et al.: "Since the more distant words are usually l...
def get_target(words, idx, window_size=5):
    ''' Get a list of words in a window around an index. '''
    # My wrong implementation
    #C = random.uniform(1,window_size,1)
    #return words[idx-C:idx-1] + words[idx+1:idx+C]
    # Solution
    R = np.random.randint(1, window_size+1)
    start = idx - R if (i...
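A completed, self-contained version of the get_target idea (the max() guard at the start of the list is my assumption for the truncated branch):

```python
import numpy as np

def get_target(words, idx, window_size=5):
    """Words in a window of random radius R ~ U{1..window_size} around idx,
    excluding the word at idx itself."""
    R = np.random.randint(1, window_size + 1)
    start = max(idx - R, 0)   # don't run off the front of the list
    stop = idx + R
    return words[start:idx] + words[idx + 1:stop + 1]

np.random.seed(0)
words = list(range(10))
target = get_target(words, idx=5, window_size=3)
print(target)  # neighbours of index 5, never containing 5 itself
```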
Embedding The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the n...
n_vocab = len(int_to_vocab)
n_embedding = 200
with train_graph.as_default():
    embedding = tf.Variable(tf.random_uniform(shape=(n_vocab, n_embedding), minval=-1.0, maxval=1.0))
    embed = tf.nn.embedding_lookup(embedding, inputs)
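tf.nn.embedding_lookup is just row indexing into the weight matrix; a numpy sketch of the same operation:

```python
import numpy as np

n_vocab, n_embedding = 6, 4
rng = np.random.default_rng(0)
embedding = rng.uniform(-1.0, 1.0, size=(n_vocab, n_embedding))

inputs = np.array([2, 0, 2, 5])   # token ids
embed = embedding[inputs]         # row lookup, same result as embedding_lookup

print(embed.shape)  # (4, 4): one embedding row per input id
```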
Negative sampling For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from th...
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
    softmax_w = tf.Variable(tf.truncated_normal(shape=(n_embedding, n_vocab), mean=0.0, stddev=0.01))
    softmax_b = tf.Variable(tf.zeros(n_vocab))

    # Calculate the loss using negative sampling
    loss = tf.nn.sampled_softmax...
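The idea behind sampling the loss can be sketched in numpy: score the true output row against the input embedding, score a handful of sampled negative rows, and combine with a logistic loss. This is the negative-sampling objective from Mikolov et al., not the exact computation of tf.nn.sampled_softmax_loss; all sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vocab, n_embedding, n_sampled = 1000, 50, 10

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

v = rng.normal(0, 0.1, n_embedding)             # embedding of the centre word
W = rng.normal(0, 0.1, (n_vocab, n_embedding))  # output weight rows

target = 42                                     # the one true context word
negatives = rng.choice(n_vocab, size=n_sampled, replace=False)

# push the true pair together, push the sampled pairs apart
loss = -np.log(sigmoid(W[target] @ v)) - np.sum(np.log(sigmoid(-W[negatives] @ v)))
print(loss)
```

Only n_sampled + 1 rows of W contribute to the gradient, instead of all n_vocab rows of a full softmax.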
Classic data processing. In this example we will see how to use the numpy library to read values from a csv file and begin processing them. We will use the csv module, which reads a csv file and extracts its values line by line. We will work ...
import csv
import numpy as np

fichier_csv = csv.reader(open('train.csv', 'r'))
entetes = fichier_csv.__next__()  # read the first line, which contains the headers
donnees = list()  # create the list that will collect the data
for ligne in fichier_csv:  # for each line read in the...
2-AnalyseDuTitanic.ipynb
sylvchev/coursMLpython
unlicense
Let's look at how the data are stored in memory:
print (donnees)
Now let's look at the age column, displaying only the first 15 values:
print (donnees[1:15, 5])
We can see that the ages are stored as strings. Let's convert them to numbers:
donnees[1:15, 5].astype(np.int)
Numpy cannot convert the empty string '' (in 6th position in our list) to a number. To process these data we would have to write a small algorithm. We will now see how pandas makes this kind of processing much easier. Processing and manipulating data with pandas
import pandas as pd import numpy as np
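The "small algorithm" mentioned above could be a one-line converter that maps empty fields to NaN (a sketch; pandas below handles this automatically):

```python
import numpy as np

def to_float(values):
    """Convert strings to floats, mapping empty fields to NaN."""
    return np.array([float(v) if v != '' else np.nan for v in values])

ages = ['22', '38', '26', '35', '35', '']  # the sixth value is missing
converted = to_float(ages)
print(converted)  # [22. 38. 26. 35. 35. nan]
```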
To read the csv file we will use the read_csv function
df = pd.read_csv('train.csv')
To check that it worked, let's display the first few values. We can see the passenger id, whether they survived, their class, name, sex, age, the number of siblings/spouses aboard, the number of parents or children, the ticket number, the fare, the cabin number and the...
df.head(6)
Let's compare with the type of donnees obtained earlier, which is a numpy array. The type of df is a pandas-specific object.
type(donnees)
type(df)
We saw that with numpy all the imported values were strings. Let's check what happens with pandas
df.dtypes
We can see that pandas automatically detected the types of the data in our csv file: integers, floats, or objects (strings). There are two important commands to know: df.info() and df.describe()
df.info()
The age is not given for all passengers, only for 714 passengers out of 891. The same goes for the cabin number and the port of embarkation. We can also use describe() to compute several useful statistical indicators.
df.describe()
We can see that pandas computed the statistical indicators using only the filled-in values. For example, it computed the mean age using only the 714 known values. pandas left out the non-numeric values (name, sex, ticket, cabin, port of embarkation). To...
df['Age'][0:15]
We can also use the syntax
df.Age[0:15]
We can compute statistical measures directly on the columns
df.Age.mean()
We can see that it is the same value as the one shown by describe. This syntax makes it easy to use the mean value in computations or algorithms. To filter the data, we pass the list of desired columns:
colonnes_interessantes = ['Sex', 'Pclass', 'Age']
df[colonnes_interessantes]
In analysis, we are often interested in filtering the data according to certain criteria. For example, the maximum age is 80 years. We can examine the information about the elderly passengers:
df[df['Age'] > 60]
Since that is too much information, we can filter it:
df[df['Age'] > 60][['Pclass', 'Sex', 'Age', 'Survived']]
We can see that among the elderly there are mainly men, and the people who survived were mainly women. We will now see how to handle the missing age values. Let's filter the data to display only the missing values
df[df.Age.isnull()][['Sex', 'Pclass', 'Age']]
To combine filters, we can use '&'. Let's display the number of men and women in each class
for i in range(1, 4):
    print("In class", i, "there are", len(df[(df['Sex'] == 'male') & (df['Pclass'] == i)]), "men")
    print("In class", i, "there are", len(df[(df['Sex'] == 'female') & (df['Pclass'] == i)]), "women")
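The loop above can also be written as a single contingency table with pd.crosstab; a sketch on a toy stand-in for the Titanic data:

```python
import pandas as pd

# toy stand-in with the same two columns
df = pd.DataFrame({'Sex': ['male', 'female', 'male', 'male', 'female'],
                   'Pclass': [1, 1, 2, 3, 3]})

# rows: class, columns: sex, cells: counts (missing combinations become 0)
table = pd.crosstab(df['Pclass'], df['Sex'])
print(table)
```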
Now let's visualize the histogram of the age distribution.
df.Age.hist(bins=20, range=(0,80))
Creating and modifying columns. To exploit the information about the passengers' sex, we will add a new column, called Gender, which will be 1 for men and 0 for women.
df['Gender'] = 4  # add a new column in which every value is 4
df.head()
df['Gender'] = df['Sex'].map({'female': 0, 'male': 1})  # the Gender column is 0 for women and 1 for men
df.head()
To create and rename new columns, we can also aggregate information from different columns. For example, let's create a column to store the number of people of the same family aboard the Titanic.
df['FamilySize'] = df.SibSp + df.Parch
df.head()
We will fill in the missing age values with the median value for the passenger's class and sex.
ages_medians = np.zeros((2, 3))
ages_medians
for i in range(0, 2):
    for j in range(0, 3):
        ages_medians[i, j] = df[(df['Gender'] == i) & (df['Pclass'] == j+1)]['Age'].median()
ages_medians
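The same table of medians can be obtained in one line with groupby; a sketch on a toy stand-in with the same columns:

```python
import numpy as np
import pandas as pd

# toy stand-in for the Titanic DataFrame
df = pd.DataFrame({'Gender': [0, 0, 1, 1, 1],
                   'Pclass': [1, 1, 1, 2, 2],
                   'Age':    [30.0, 40.0, 22.0, 50.0, np.nan]})

# median age per (Gender, Pclass); NaN ages are skipped automatically
medians = df.groupby(['Gender', 'Pclass'])['Age'].median()
print(medians)
```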
Let's create a new AgeFill column that uses these median ages
for i in range(0, 2):
    for j in range(0, 3):
        df.loc[(df.Age.isnull()) & (df.Gender == i) & (df.Pclass == j+1), 'AgeFill'] = ages_medians[i, j]

# display the first 10 filled-in values
df[df.Age.isnull()][['Gender', 'Pclass', 'Age', 'AgeFill']].head(10)
To save your work, you can use the pickle module, which serializes and saves your data:
import pickle

f = open('masauvegarde.pck', 'wb')
pickle.dump(df, f)
f.close()
To recover your work, use the inverse operation, again with pickle
with open('masauvegarde.pck', 'rb') as f:
    dff = pickle.load(f)
Back to numpy for machine learning. To do machine learning and predict the survival of the Titanic passengers, we can use scikit-learn. It takes numpy arrays as input, and the conversion is simple:
ex = df[['Gender', 'Pclass']]  # keep only a few features
X = ex.as_matrix()  # convert to a numpy array
print(ex.head(5))
print(X[:5, :])
We want to predict survival, so we extract the relevant information:
y = df['Survived'].as_matrix()
print(y[:5])

from sklearn import svm
clf = svm.SVC()
clf.fit(X, y)
The classifier has been trained, that is, we have fitted an SVM on our data $X$ so that it can predict survival $y$. To check that our SVM has learned to predict passenger survival, we can use the predict() method and compare visually for the first ten...
print(clf.predict(X[:10, :]))
print(y[:10])
The SVM has learned to predict what we showed it. This does not, however, assess its ability to generalize to cases it has not seen. To do so, a classic approach is cross-validation: the classifier is trained on one part of the data and tested...
from sklearn import cross_validation

scores = cross_validation.cross_val_score(clf, X, y, cv=7)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
On the 7 partitions of our data, the SVM predicts passenger survival in 77% of cases, with a standard deviation of 0.04. To improve the results, we can add age to the features. We must however be careful with the unfilled NaN values, so we will use the new column AgeFille...
df['AgeFilled'] = df.Age  # copy the Age column
df.loc[df.AgeFilled.isnull(), 'AgeFilled'] = df[df.Age.isnull()]['AgeFill']  # use the median age for the missing values
We can now build a new $X$ including age, in addition to sex and class, and check whether this improves the SVM's performance.
from sklearn.model_selection import cross_val_score

X = df[['Gender', 'Pclass', 'AgeFilled']].to_numpy()
scores = cross_val_score(svm.SVC(), X, y, cv=7)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
2-AnalyseDuTitanic.ipynb
sylvchev/coursMLpython
unlicense
Now we need to generate the regressors (X) and target variable (y) for train and validation. A 2-D array of regressors and a 1-D array of targets are created from the original 1-D array of column log_PRES in the DataFrames. For the time series forecasting model, the past seven days of observations are used to predict the next day...
def makeXy(ts, nb_timesteps): """ Input: ts: original time series nb_timesteps: number of time steps in the regressors Output: X: 2-D array of regressors y: 1-D array of target """ X = [] y = [] for i in range(nb_timesteps, ts.shape[0]): ...
time series regression/DL aproach for timeseries/Air Pressure LSTM.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
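Since the makeXy cell above is truncated, a self-contained sketch of the sliding-window construction it describes might look like this (numpy only; the notebook version indexes a pandas series instead):

```python
import numpy as np

def makeXy(ts, nb_timesteps):
    # Sliding window: each row of X holds nb_timesteps consecutive past values,
    # y holds the observation that immediately follows each window.
    X, y = [], []
    for i in range(nb_timesteps, len(ts)):
        X.append(ts[i - nb_timesteps:i])
        y.append(ts[i])
    return np.array(X), np.array(y)

X, y = makeXy(np.arange(10.0), 7)
print(X.shape, y.shape)  # (3, 7) (3,)
print(y)                 # [7. 8. 9.]
```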
The input to RNN layers must be of shape (number of samples, number of timesteps, number of features per timestep). In this case we are modeling only air pressure, hence the number of features per timestep is one. The number of timesteps is seven and the number of samples is the same as the number of samples in X_train and X_val, which are re...
X_train, X_val = X_train.reshape((X_train.shape[0], X_train.shape[1], 1)),\ X_val.reshape((X_val.shape[0], X_val.shape[1], 1)) print('Shape of 3D arrays:', X_train.shape, X_val.shape)
time series regression/DL aproach for timeseries/Air Pressure LSTM.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
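The reshape above only appends a trailing feature axis; a toy check of the shape arithmetic (hypothetical sample counts):

```python
import numpy as np

# Two hypothetical 2-D regressor arrays: (samples, timesteps)
X_train = np.zeros((100, 7))
X_val = np.zeros((25, 7))
# Add a trailing axis so each timestep carries exactly one feature
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_val = X_val.reshape((X_val.shape[0], X_val.shape[1], 1))
print(X_train.shape, X_val.shape)  # (100, 7, 1) (25, 7, 1)
```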
Now we define the network using the Keras Functional API. In this approach a layer can be declared as the input of the following layer at the time of defining the next layer.
from keras.layers import Dense, Input, Dropout from keras.layers.recurrent import LSTM from keras.optimizers import SGD from keras.models import Model from keras.models import load_model from keras.callbacks import ModelCheckpoint #Define input layer which has shape (None, 7) and of type float32. None indicates the nu...
time series regression/DL aproach for timeseries/Air Pressure LSTM.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
The input, dense and output layers will now be packed inside a Model, which is a wrapper class for training and making predictions. Mean absolute error (MAE) is used as the loss function, as set by loss='mae' in the compile call. The network's weights are optimized by the Adam algorithm. Adam stands for adaptive moment estimation and has been a popular choice for ...
ts_model = Model(inputs=input_layer, outputs=output_layer) ts_model.compile(loss='mae', optimizer='adam') ts_model.summary() """ The model is trained by calling the fit function on the model object and passing the X_train and y_train. The training is done for a predefined number of epochs. Additionally, batch_size de...
time series regression/DL aproach for timeseries/Air Pressure LSTM.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Predictions are made for the air pressure from the best saved model. The model's predictions, which are on the scaled air pressure, are inverse-transformed to get predictions on the original air pressure. The goodness-of-fit, R-squared, is also calculated for the predictions on the original variable.
best_model = load_model(os.path.join('keras_models', 'PRSA_data_Air_Pressure_LSTM_weights.06-0.0087.hdf5'))
preds = best_model.predict(X_val)
pred_PRES = scaler.inverse_transform(preds)
pred_PRES = np.squeeze(pred_PRES)

from sklearn.metrics import r2_score
r2 = r2_score(df_val['PRES'].loc[7:], pred_PRES)
print('R-squa...
time series regression/DL aproach for timeseries/Air Pressure LSTM.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
1. Load DataBunch You have to redefine or import any custom functions you defined in the last step, because the data bunch is going to look for them.
def pass_through(x):
    return x

data_lm = load_data(path, bs=120)
Issue_Embeddings/notebooks/03_Create_Model.ipynb
kubeflow/code-intelligence
mit
2. Instantiate Language Model We are going to use the awd_lstm with the default parameters listed in the config file below:
learn = language_model_learner(data=data_lm, arch=AWD_LSTM, pretrained=False)
Issue_Embeddings/notebooks/03_Create_Model.ipynb
kubeflow/code-intelligence
mit
3. Train Language Model Find the best learning rate
learn.lr_find()
learn.recorder.plot()
best_lr = 1e-2 * 2
Issue_Embeddings/notebooks/03_Create_Model.ipynb
kubeflow/code-intelligence
mit
Define callbacks
escb = EarlyStoppingCallback(learn=learn, patience=5)
smcb = SaveModelCallback(learn=learn)
rpcb = ReduceLROnPlateauCallback(learn=learn, patience=3)
sgcb = ShowGraph(learn=learn)
csvcb = CSVLogger(learn=learn)
callbacks = [escb, smcb, rpcb, sgcb, csvcb]
Issue_Embeddings/notebooks/03_Create_Model.ipynb
kubeflow/code-intelligence
mit
Train Model Note: I don't actually do this in a notebook. I execute training from a shell script at the root of this repository: run_train.sh
learn.fit_one_cycle(cyc_len=1, max_lr=1e-3, tot_epochs=10, callbacks=callbacks)
Issue_Embeddings/notebooks/03_Create_Model.ipynb
kubeflow/code-intelligence
mit
The scientists sequenced the biological material of the subjects in order to understand which of these genes are most active in the cells of sick people. Sequencing is the determination of the activity level of genes in the analyzed sample by counting the amount of RNA corresponding to each gene. In the data for this assignment...
# Diagnosis types
types

# Split data by groups
gen_normal = gen.loc[gen.Diagnosis == 'normal']
gen_neoplasia = gen.loc[gen.Diagnosis == 'early neoplasia']
gen_cancer = gen.loc[gen.Diagnosis == 'cancer']
4 Stats for data analysis/Homework/15 project genom cancer/Genom cancer.ipynb
maxis42/ML-DA-Coursera-Yandex-MIPT
mit
In order to use the two-sample Student's t-test, let us check that the distributions in the samples do not differ substantially from normal by applying the Shapiro-Wilk test.
#Shapiro-Wilk test for samples print('Shapiro-Wilk test for samples') sw_normal = gen_normal.iloc[:,2:].apply(stats.shapiro, axis=0) sw_normal_p = [p for _, p in sw_normal] _, sw_normal_p_corr, _, _ = multipletests(sw_normal_p, method='fdr_bh') sw_neoplasia = gen_neoplasia.iloc[:,2:].apply(stats.shapiro, axis=0) sw_n...
4 Stats for data analysis/Homework/15 project genom cancer/Genom cancer.ipynb
maxis42/ML-DA-Coursera-Yandex-MIPT
mit
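As a minimal illustration of the test being applied here (synthetic expression values, scipy only; the notebook additionally corrects the per-gene p-values with multipletests):

```python
import numpy as np
from scipy import stats

# Synthetic normally distributed expression values for one gene
rng = np.random.default_rng(42)
sample = rng.normal(loc=2.0, scale=0.5, size=100)
# Shapiro-Wilk: large p-values mean no evidence against normality
stat, p = stats.shapiro(sample)
print("W=%.3f, p=%.3f" % (stat, p))
```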
Since the mean p-value is >> 0.05, we will apply Student's t-test.
tt_ind_normal_neoplasia = stats.ttest_ind(gen_normal.iloc[:,2:], gen_neoplasia.iloc[:,2:], equal_var = False) tt_ind_normal_neoplasia_p = tt_ind_normal_neoplasia[1] tt_ind_neoplasia_cancer = stats.ttest_ind(gen_neoplasia.iloc[:,2:], gen_cancer.iloc[:,2:], equal_var = False) tt_ind_neoplasia_cancer_p = tt_ind_neoplasia...
4 Stats for data analysis/Homework/15 project genom cancer/Genom cancer.ipynb
maxis42/ML-DA-Coursera-Yandex-MIPT
mit
Part 2: Holm correction For this part of the assignment we will need the multitest module from statsmodels. In this part we apply the Holm correction to the two sets of achieved significance levels obtained in the previous part. Note that since we will make the correction for each of the two...
#Holm correction _, tt_ind_normal_neoplasia_p_corr, _, _ = multipletests(tt_ind_normal_neoplasia_p, method='holm') _, tt_ind_neoplasia_cancer_p_corr, _, _ = multipletests(tt_ind_neoplasia_cancer_p, method='holm') #Bonferroni correction p_corr = np.array([tt_ind_normal_neoplasia_p_corr, tt_ind_neoplasia_cancer_p_corr])...
4 Stats for data analysis/Homework/15 project genom cancer/Genom cancer.ipynb
maxis42/ML-DA-Coursera-Yandex-MIPT
mit
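To see what method='holm' actually does, here is a manual sketch of the Holm step-down procedure on toy p-values (the statsmodels call yields the same reject decisions):

```python
import numpy as np

def holm_reject(pvals, alpha=0.05):
    # Holm step-down: compare the i-th smallest p-value to alpha / (m - i)
    # and stop at the first failure.
    p = np.asarray(pvals)
    m = len(p)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(np.argsort(p)):
        if p[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break
    return reject

print(holm_reject([0.001, 0.04, 0.03]).tolist())  # [True, False, False]
```

Note that 0.03 fails its threshold of 0.05/2 = 0.025, so the step-down stops and 0.04 is not even tested.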
Part 3: Benjamini-Hochberg correction This part of the assignment is analogous to the second part, except that the Benjamini-Hochberg method should be used. Note that correction methods that control the FDR allow more type I errors and have greater power than methods that control...
#Benjamini-Hochberg correction _, tt_ind_normal_neoplasia_p_corr, _, _ = multipletests(tt_ind_normal_neoplasia_p, method='fdr_bh') _, tt_ind_neoplasia_cancer_p_corr, _, _ = multipletests(tt_ind_neoplasia_cancer_p, method='fdr_bh') #Bonferroni correction p_corr = np.array([tt_ind_normal_neoplasia_p_corr, tt_ind_neoplas...
4 Stats for data analysis/Homework/15 project genom cancer/Genom cancer.ipynb
maxis42/ML-DA-Coursera-Yandex-MIPT
mit
Custom layers <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/customization/custom_layers"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a> </td> <td><a target="_blank" href="https://colab.research.google.com/...
import tensorflow as tf print(tf.config.list_physical_devices('GPU'))
site/ko/tutorials/customization/custom_layers.ipynb
tensorflow/docs-l10n
apache-2.0
Layers: common sets of useful operations Most of the time when writing code for machine learning, you will want to operate at a higher level of abstraction than individual operations and manipulation of individual variables. Many machine learning models are expressible as the composition and stacking of relatively simple layers. TensorFlow also provides a set of many common layers, so you can easily write your own application-specific layers either from scratch or as the composition of existing layers. TensorFlow includes the full Keras API in the tf.keras package, and the Keras layers are very useful when building your own models.
# In the tf.keras.layers package, layers are objects. To construct a layer, # simply construct the object. Most layers take as a first argument the number # of output dimensions / channels. layer = tf.keras.layers.Dense(100) # The number of input dimensions is often unnecessary, as it can be inferred # the first time t...
site/ko/tutorials/customization/custom_layers.ipynb
tensorflow/docs-l10n
apache-2.0
The full list of pre-existing layers can be seen in the documentation. It includes Dense (a fully-connected layer), Conv2D, LSTM, BatchNormalization, Dropout, and many others.
# To use a layer, simply call it. layer(tf.zeros([10, 5])) # Layers have many useful methods. For example, you can inspect all variables # in a layer using `layer.variables` and trainable variables using # `layer.trainable_variables`. In this case a fully-connected layer # will have variables for weights and biases. l...
site/ko/tutorials/customization/custom_layers.ipynb
tensorflow/docs-l10n
apache-2.0
Implementing custom layers The best way to implement your own layer is extending the tf.keras.Layer class and implementing the following: __init__, where you can do all input-independent initialization; build, where you know the shapes of the input tensors and can do the rest of the initialization; call, where you do the forward computation. Note that you don't have to wait until build is called to create your variables; you can also create them in __init__. However, the advantage of creating them in build is that it enables late variable creation based on the shape of the inputs the layer will...
class MyDenseLayer(tf.keras.layers.Layer): def __init__(self, num_outputs): super(MyDenseLayer, self).__init__() self.num_outputs = num_outputs def build(self, input_shape): self.kernel = self.add_weight("kernel", shape=[int(input_shape[-1]), ...
site/ko/tutorials/customization/custom_layers.ipynb
tensorflow/docs-l10n
apache-2.0
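The __init__/build/call split described above can be illustrated with a pure-numpy stand-in (a toy sketch, not the real tf.keras.layers.Layer base class):

```python
import numpy as np

class ToyDense:
    def __init__(self, num_outputs):
        # Input-independent initialization only
        self.num_outputs = num_outputs
        self.kernel = None

    def build(self, input_shape):
        # Deferred: the kernel shape depends on the (now known) input shape
        self.kernel = np.ones((input_shape[-1], self.num_outputs))

    def __call__(self, x):
        if self.kernel is None:  # build lazily on first call
            self.build(x.shape)
        return x @ self.kernel

layer = ToyDense(10)
out = layer(np.ones((4, 5)))
print(out.shape)  # (4, 10)
```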
Overall code is easier to read and maintain if it uses standard layers whenever possible, as readers will be familiar with the behavior of standard layers. If you want to use a layer that is not present in tf.keras.layers, consider filing a github issue or sending a pull request. Models: composing layers Many interesting layer-like things in machine learning models are implemented by composing existing layers. For example, each residual block in a resnet is a composition of convolutions, batch normalizations, shortcuts, and...
class ResnetIdentityBlock(tf.keras.Model): def __init__(self, kernel_size, filters): super(ResnetIdentityBlock, self).__init__(name='') filters1, filters2, filters3 = filters self.conv2a = tf.keras.layers.Conv2D(filters1, (1, 1)) self.bn2a = tf.keras.layers.BatchNormalization() self.conv2b = tf....
site/ko/tutorials/customization/custom_layers.ipynb
tensorflow/docs-l10n
apache-2.0
Much of the time, however, models which compose many layers simply call one layer after the other in sequence. This can be done in very little code using tf.keras.Sequential.
my_seq = tf.keras.Sequential([tf.keras.layers.Conv2D(1, (1, 1), input_shape=( None, None, 3)), tf.keras.layers.BatchNormalization(), tf.keras.layers.Conv2...
site/ko/tutorials/customization/custom_layers.ipynb
tensorflow/docs-l10n
apache-2.0
With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.) Using batch normalizatio...
def fully_connected(prev_layer, num_units): """ Create a fully connectd layer with the given layer as input and the given number of neurons. :param prev_layer: Tensor The Tensor that acts as input into this layer :param num_units: int The size of the layer. That is, the number of un...
batch-norm/Batch_Normalization_Exercises_MySol.ipynb
guyk1971/deep-learning
mit
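For intuition before modifying the layers, the core training-time batch-normalization transform (normalize each feature over the batch, then scale and shift) can be sketched in numpy; gamma and beta stand for the learned scale and shift parameters:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize each feature over the batch dimension, then scale and shift
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

x = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
out = batch_norm(x)
# Each column now has (approximately) zero mean and unit variance
print(np.round(out.mean(axis=0), 6))
```

At inference time the population statistics (running mean and variance) are used instead of the batch statistics, which is what the train function below has to handle.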
TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
def conv_layer(prev_layer, layer_depth): """ Create a convolutional layer with the given layer as input. :param prev_layer: Tensor The Tensor that acts as input into this layer :param layer_depth: int We'll set the strides and number of feature maps based on the layer's depth in the...
batch-norm/Batch_Normalization_Exercises_MySol.ipynb
guyk1971/deep-learning
mit
TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.
def train(num_batches, batch_size, learning_rate): # Build placeholders for the input samples and labels inputs = tf.placeholder(tf.float32, [None, 28, 28, 1]) labels = tf.placeholder(tf.float32, [None, 10]) is_training = tf.placeholder(tf.bool,name='is_training') # Feed the inputs into a series o...
batch-norm/Batch_Normalization_Exercises_MySol.ipynb
guyk1971/deep-learning
mit
Load into individual parcel tables in chunks
def chunks(l, n): """Yield successive n-sized chunks from l.""" for i in range(0, len(l), n): yield l[i:i + n] with conn.cursor() as cur: cur.execute("select table_name from information_schema.tables where table_schema = 'core_logic_2018'") res = cur.fetchall() tables = [x[0] for x in res] re...
sources/parcels/notebooks/parcel-loading.ipynb
FireCARES/data
mit
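For reference, the chunks generator defined above splits any sequence into fixed-size pieces, with a shorter final piece when the length is not a multiple of n:

```python
def chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in range(0, len(l), n):
        yield l[i:i + n]

print(list(chunks(list(range(7)), 3)))  # [[0, 1, 2], [3, 4, 5], [6]]
```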