Whereas the following will raise an error:
M + b[:2]
MIT
01_numpy.ipynb
ejdecena/herramientas
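A minimal, self-contained sketch of the same situation (the names `M_demo` and `b_demo` and their shapes are illustrative assumptions, not the notebook's actual arrays):

```python
import numpy as np

M_demo = np.arange(9).reshape(3, 3)   # shape (3, 3)
b_demo = np.array([10, 20, 30])       # shape (3,)

# (3, 3) and (3,) are compatible: b_demo is broadcast across the rows.
print(M_demo + b_demo)

# (3, 3) and (2,) are NOT compatible, so this raises a ValueError.
try:
    M_demo + b_demo[:2]
except ValueError as err:
    print("ValueError:", err)
```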
The error is clear: *NumPy* does not know how to match the dimensions of these two arrays. More details on broadcasting [here](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html). Comparisons and boolean masks. Just as it is possible to add a number to an array, it is also possible to perform element-wise com...
M > 3
It is very common to use the result of such a comparison to select the values of an array that meet a given criterion, for example:
M[M > 3]
Or even combining arrays, for example:
M[a == 2]
Measures of central tendency and dispersion. *NumPy* makes it very simple to compute the mean, the median, and the variance of an array. For example, to compute the mean we can use the function np.mean():
np.mean(v)
An alternative is to use the *`.mean()`* method of an *array* object:
v.mean()
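The median, also mentioned above, has no array method of its own; a quick sketch using `np.median()` on a throwaway array (`v_demo`, named so it does not clash with the notebook's own `v`):

```python
import numpy as np

v_demo = np.array([1, 3, 2, 7, 5])
print(np.median(v_demo))  # middle value of the sorted data: 3.0
```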
The functions and methods *`var`* and *`std`* compute the *variance* and the *standard deviation* of an *array*:
np.var(v)  # Variance of the elements of v.
v.var()    # Alternative form.
np.std(v)  # Standard deviation of v.
v.std()    # Alternative form.
There are other measures, called shape measures, that characterize the data, such as [*kurtosis*](https://es.wikipedia.org/wiki/Curtosis) and [*skewness*](https://es.wikipedia.org/wiki/Sesgo_estadístico) (statistical asymmetry). These measures are less used, partly because their interpretation is less intuitive than other m...
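A minimal sketch of computing these shape measures. NumPy itself has no built-ins for them, so this assumes SciPy is available (`scipy.stats.skew` and `scipy.stats.kurtosis`); the sample data is synthetic:

```python
import numpy as np
from scipy.stats import kurtosis, skew  # NumPy has no skew/kurtosis built-ins

rng = np.random.default_rng(0)
sample = rng.normal(size=10_000)  # synthetic normal data for illustration

print(skew(sample))      # close to 0 for symmetric data
print(kurtosis(sample))  # excess kurtosis; close to 0 for normal data
```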
x = np.random.rand(100)
np.percentile(x, [25, 50, 75])
Z-score. The *Z-score* is a dimensionless quantity that expresses the number of standard deviations by which a data point lies above or below the mean. If the *Z-score* is positive the point is above the mean; if it is negative, the point is below the mean. It is computed as: $$z = \frac{x - \mu}{\sigma}$$ Where:...
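The formula above in NumPy, applied element-wise to a made-up sample (the array `data` and its values are illustrative assumptions):

```python
import numpy as np

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # made-up sample
z = (data - data.mean()) / data.std()  # z = (x - mu) / sigma, element-wise

print(z[0])      # (2 - 5) / 2 = -1.5: 1.5 standard deviations below the mean
print(z.mean())  # ~0 by construction
```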
np.dot(a, b)        # Dot product of the vectors a and b.
a @ b               # Alternative form of the dot product using the @ operator.
I = np.identity(3)  # Returns the 3x3 identity matrix.
I
np.dot(M, I)        # Multiplication of the matrices M and I.
M @ I               # Alternative form of multiplying the matrices M and I ...
BASIC SETUP
# BASIC SETUP
! [ ! -z "$COLAB_GPU" ] && pip install torch skorch && pip install neptune-client
!cp "drive/My Drive/dl_project_data/repo/data_loading.py" .
!mkdir ./helper_scripts/
!cp "drive/My Drive/dl_project_data/repo/helper_scripts/visual_helpers.py" ./helper_scripts
!cp "drive/My Drive/dl_project_data/repo/archi...
Requirement already satisfied: torch in /usr/local/lib/python3.6/dist-packages (1.5.1+cu101) Collecting skorch Downloading https://files.pythonhosted.org/packages/42/21/4936b881b33de285faa0b36209afe4f9724a0875b2225abdc63b23d384a3/skorch-0.8.0-py3-none-any.whl (113kB) ...
MIT
notebooks/VGG19_training.ipynb
tranic/histopathology_cancer_detection
IMPORTS. **You should not have to change anything here.**
# IMPORTS
from collections import OrderedDict
import torch
from torch import nn
from torchvision import models, transforms
import skorch.callbacks as scb
from skorch import NeuralNetBinaryClassifier
import model_training as md
import architecture as arch
from data_loading import ToTensor, Normalize, RandomRotation, ...
CLASSIFIER PARAMETRIZATION. Here you can parametrize your model and set the loss, optimizer, learning rate, etc. For further information on what can be set and how, please refer to the [skorch documentation](https://skorch.readthedocs.io/en/stable/classifier.html#skorch.classifier.NeuralNetClassifier).
# CLASSIFIER PARAMETRIZATION
classifier = NeuralNetBinaryClassifier(
    arch.VGG19,
    optimizer = torch.optim.Adamax,
    max_epochs = 30,
    lr = 0.002,
    batch_size = 128,
    iterator_train__shuffle = True,  # Shuffle training data on each epoch
    train_split = None,
    callbacks = [scb.LRScheduler(policy = ...
CLASSIFIER TRAINING. After you have added the shared folder with the data to your drive as a shortcut, you should not have to change anything here, at least for now. **IF YOU WANT TO TRAIN WITH THE FULL DATASET, JUST REMOVE** *_small* **FROM THE CSV FILE.**
# CLASSIFIER TRAINING
md.train_model(classifier,
               train_labels = "drive/My Drive/dl_project_data/train/train_split.csv",
               test_labels = "drive/My Drive/dl_project_data/train/test_split.csv",
               file_dir = "train",
               train_transform = transforms.Compose([transforms.ToPILImage...
https://ui.neptune.ai/elangenhan/hcd-experiments/e/HCDEX-3 Starting Training for <class 'architecture.VGG19'> Model-Params: Criterion: <class 'torch.nn.modules.loss.BCEWithLogitsLoss'> Optimizer: <class 'torch.optim.adamax.Adamax'> [1...
Image recognition: recognizing hurricane damage. Daniel Buscombe, MARDA Science. ![](https://mardascience.com/wp-content/uploads/2019/06/cropped-MardaScience_logo-5.png) MIT License. Copyright (c) 2020, Marda Science LLC. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associat...
!pip install tf-nightly --quiet
import requests, os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import models
tf.__version__
from glob import glob
import matplotlib.pyplot as plt
import numpy as np
# from https://stackoverflow.com/questions/38511444/py...
MIT
MARDA_Image_recognition_3_hurricanes.ipynb
dbuscombe-usgs/HurricaneHarvey_buildingdamage
What class categories do I have?
!ls train
damage no_damage
How many train files?
!ls train/damage | wc -l
!ls train/no_damage | wc -l
5000
How many test and validation files?
!ls test/damage | wc -l
!ls test/no_damage | wc -l
!ls validation/damage | wc -l
!ls validation/no_damage | wc -l
1000
Define text labels for our two classes
classes = ['damage', 'no_damage']
Get rid of any corrupt jpegs
num_skipped = 0
for folder in ['test', 'train', 'validation']:
    for folder_name in classes:
        folder_path = os.path.join(folder, folder_name)
        for fname in os.listdir(folder_path):
            fpath = os.path.join(folder_path, fname)
            fobj = open(fpath, 'rb')
            if tf.compat.as_bytes('JFIF') not in fobj.peek(10...
Augmenting the data
image_size = (128, 128)
batch_size = 32

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    'train', seed=2020, image_size=image_size, batch_size=batch_size)

val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    'validation', seed=2020, image_size=image_size, batch_size=batch_size...
Improve model throughput by using [pre-fetch](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#prefetch)
train_ds = train_ds.prefetch(buffer_size=batch_size)
val_ds = val_ds.prefetch(buffer_size=batch_size)

augmented_train_ds = train_ds.map(
    lambda x, y: (data_augmentation(x, training=True), y))
Image classification using transfer learning. Build a model. Load the MobileNetV2 model trained on ImageNet, but exclude the classification layers, because we want to add our own classification layers so we can retrain the model on our own categories. We'll use one of the 'stock' models provided by `keras.applications` c...
def mobilenet_model(num_classes, input_shape):
    EXTRACTOR = MobileNetV2(include_top=False, weights="imagenet",
                            input_shape=input_shape)
    EXTRACTOR.trainable = True
    # Construct the head of the model that will be placed on top of
    # the base model
    class_head = EXTRACTOR.output
    ...
Train the model
min_lr = 1e-4
patience = 5
factor = 0.8
cooldown = 3
epochs = 50

from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint

filepath = 'hurricanes_mn2_best_weights.h5'
earlystop = EarlyStopping(monitor="val_loss", mode="min", patience=patience)
# reduction...
WARNING:tensorflow:From <ipython-input-40-4cca302ae1c0>:4: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version. Instructions for updating: Please use Model.fit, which supports generators. Epoch 1/50 WARNING:tensorflow:Model was constructed with shape ...
Run inference on new data. Dropout layers are inactive at inference time, so that layer won't affect our model results.
f = glob('test/no_damage/*.jpeg')[0]
img = keras.preprocessing.image.load_img(f, target_size=image_size)
img_array = keras.preprocessing.image.img_to_array(img)
img_array = tf.expand_dims(img_array, 0)  # Create batch axis

scores = model2.predict(img_array).flatten()
print(classes[np.argmax(scores)])
no_damage
We can use the `model.evaluate()` function to evaluate the average accuracy for the entire test set
scores = model2.evaluate(val_ds)
2000/2000 [==============================] - 11s 5ms/step - loss: 0.0223 - accuracy: 0.9920
Plotting the confusion matrix. The confusion matrix records the correspondences between actual and predicted labels, per class.
from sklearn.metrics import confusion_matrix
import seaborn as sns
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead. import pandas.util.testing as tm
Get a new validation batch generator with a batch size of 1, and shuffling set to False because we want to pair each image with its class
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    'test', seed=2020, shuffle=False, image_size=image_size, batch_size=1)
Found 2000 files belonging to 2 classes.
Get the image class labels and store in a list `L`
L = []
for _, labels in val_ds:
    L.append(int(labels[0]))

np.bincount(L)
Use the trained model to make predictions on the test set
preds = model2.predict(val_ds)
pred = np.argmax(preds, axis=1)
WARNING:tensorflow:Model was constructed with shape (None, 224, 224, 3) for input Tensor("input_1:0", shape=(None, 224, 224, 3), dtype=float32), but it was called on an input with incompatible shape (None, 128, 128, None).
Get the confusion matrix (the matrix of label correspondences between ground truth and model prediction)
cm = confusion_matrix(np.asarray(L), pred)
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
Make a plot of that matrix
plt.figure(figsize=(15, 15))
sns.heatmap(cm, annot=True,
            cmap=sns.cubehelix_palette(dark=0, light=1, as_cmap=True))
tick_marks = np.arange(len(classes)) + .5
plt.xticks(tick_marks, classes, rotation=45, fontsize=10)
plt.yticks(tick_marks, classes, rotation=45, fontsize=10)
Both classes are estimated to within 1%
DIY Notebook Goals: Understand the role of the number of epochs in transfer learning; to what point increasing epochs helps in improving accuracy; how overtraining can result in overfitting the data. You will be using the skin-cancer MNIST dataset to train the classifiers. Table of Contents: [0. Install](0) [1. Train a resnet50 network...
!git clone https://github.com/Tessellate-Imaging/monk_v1.git

# If using Colab install using the commands below
!cd monk_v1/installation/Misc && pip install -r requirements_colab.txt

# If using Kaggle uncomment the following command
#!cd monk_v1/installation/Misc && pip install -r requirements_kaggle.txt

# Select the ...
Apache-2.0
study_roadmaps/2_transfer_learning_roadmap/4_effect_of_training_epochs/3) Understand the effect of number of epochs in transfer learning - keras.ipynb
shubham7169/monk_v1
Dataset Details - Credits: https://www.kaggle.com/kmader/skin-cancer-mnist-ham10000 - Seven classes - benign_keratosis_like_lesions - melanocytic_nevi - dermatofibroma - melanoma - vascular_lesions - basal_cell_carcinoma - Bowens_disease Download the dataset
! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1MRC58-oCdR1agFTWreDFqevjEOIWDnYZ' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1MRC5...
Imports
# Monk
import os
import sys
sys.path.append("monk_v1/monk/");

# Using keras backend
from keras_prototype import prototype
Train a resnet50 network for 5 epochs. Creating and managing experiments: provide a project name and an experiment name. For a specific dataset create a single project; inside each project multiple experiments can be created; every experiment can have different hyper-parameters attached to it.
gtf = prototype(verbose=1);
gtf.Prototype("Project", "Epochs-5");
Keras Version: 2.2.5 Tensorflow Version: 1.12.0 Experiment Details Project: Project Experiment: Epochs-5 Dir: /home/abhi/Desktop/Work/tess_tool/gui/v0.3/finetune_models/Organization/development/v5.0_blocks/study_roadmap/change_post_num_layers/5_transfer_learning_params/3_training_epochs/workspace/Project/E...
This creates files and directories as per the following structure:
workspace
|--Project
   |--Freeze_Base_Network
      |--experiment-state.json
      ...
gtf.Default(dataset_path="skin_cancer_mnist_dataset/images",
            path_to_csv="skin_cancer_mnist_dataset/train_labels.csv",
            model_name="resnet50",
            freeze_base_network=True,
            num_epochs=5);  # Set number of epochs here

# Read the summary...
Dataset Details Train path: skin_cancer_mnist_dataset/images Val path: None CSV train path: skin_cancer_mnist_dataset/train_labels.csv CSV val path: None Dataset Params Input Size: 224 Batch Size: 4 Data Shuffle: True Processors: 4 Train-val split: 0.7 Delimiter...
From the summary above: Training params, Num Epochs: 5. Train the classifier.
# Start Training
gtf.Train();

# Read the training summary generated once you run the cell and training is completed
Final training loss - Final validation loss - (You may get a different result). Re-train a new experiment for 10 epochs. Creating and managing experiments: provide a project name and an experiment name. For a specific dataset create a single project; inside each project multiple experiments can be creat...
gtf = prototype(verbose=1);
gtf.Prototype("Project", "Epochs-10");
Keras Version: 2.2.5 Tensorflow Version: 1.12.0 Experiment Details Project: Project Experiment: Epochs-10 Dir: /home/abhi/Desktop/Work/tess_tool/gui/v0.3/finetune_models/Organization/development/v5.0_blocks/study_roadmap/change_post_num_layers/5_transfer_learning_params/3_training_epochs/workspace/Project/...
This creates files and directories as per the following structure:
workspace
|--Project
   |--Epochs-5 (Previously created)
      |--experiment-state.json
      ...
gtf.Default(dataset_path="skin_cancer_mnist_dataset/images",
            path_to_csv="skin_cancer_mnist_dataset/train_labels.csv",
            model_name="resnet50",
            freeze_base_network=True,
            num_epochs=10);  # Set number of epochs here

# Read the summar...
Dataset Details Train path: skin_cancer_mnist_dataset/images Val path: None CSV train path: skin_cancer_mnist_dataset/train_labels.csv CSV val path: None Dataset Params Input Size: 224 Batch Size: 4 Data Shuffle: True Processors: 4 Train-val split: 0.7 Delimiter...
From the summary above: Training params, Num Epochs: 10. Train the classifier.
# Start Training
gtf.Train();

# Read the training summary generated once you run the cell and training is completed
Final training loss - Final validation loss - (You may get a different result)
Re-train a third experiment for 20 epochs. Creating and managing experiments: provide a project name and an experiment name. For a specific dataset create a single project; inside each project multiple experiments can be created; every experiment can have different hyper-parameters attached to it.
gtf = prototype(verbose=1);
gtf.Prototype("Project", "Epochs-20");
Keras Version: 2.2.5 Tensorflow Version: 1.12.0 Experiment Details Project: Project Experiment: Epochs-20 Dir: /home/abhi/Desktop/Work/tess_tool/gui/v0.3/finetune_models/Organization/development/v5.0_blocks/study_roadmap/change_post_num_layers/5_transfer_learning_params/3_training_epochs/workspace/Project/...
This creates files and directories as per the following structure:
workspace
|--Project
   |--Epochs-5 (Previously created)
      |--experiment-state.json
      ...
gtf.Default(dataset_path="skin_cancer_mnist_dataset/images",
            path_to_csv="skin_cancer_mnist_dataset/train_labels.csv",
            model_name="resnet50",
            freeze_base_network=True,
            num_epochs=20);  # Set number of epochs here

# Read the summar...
Dataset Details Train path: skin_cancer_mnist_dataset/images Val path: None CSV train path: skin_cancer_mnist_dataset/train_labels.csv CSV val path: None Dataset Params Input Size: 224 Batch Size: 4 Data Shuffle: True Processors: 4 Train-val split: 0.7 Delimiter...
From the summary above: Training params, Num Epochs: 20. Train the classifier.
# Start Training
gtf.Train();

# Read the training summary generated once you run the cell and training is completed
Final training loss - Final validation loss - (You may get a different result) Compare the experiments
# Invoke the comparison class
from compare_prototype import compare
Creating and managing comparison experiments - Provide project name
# Create a project
gtf = compare(verbose=1);
gtf.Comparison("Compare-effect-of-num-epochs");
This creates files and directories as per the following structure:
workspace
|--comparison
   |--Compare-effect-of-num-epochs
      |--stats_best_va...
gtf.Add_Experiment("Project", "Epochs-5");
gtf.Add_Experiment("Project", "Epochs-10");
gtf.Add_Experiment("Project", "Epochs-20");
Run Analysis
gtf.Generate_Statistics();
Visualize and study comparison metrics Training Accuracy Curves
from IPython.display import Image
Image(filename="workspace/comparison/Compare-effect-of-num-epochs/train_accuracy.png")
Training Loss Curves
from IPython.display import Image
Image(filename="workspace/comparison/Compare-effect-of-num-epochs/train_loss.png")
Validation Accuracy Curves
from IPython.display import Image
Image(filename="workspace/comparison/Compare-effect-of-num-epochs/val_accuracy.png")
Validation loss curves
from IPython.display import Image
Image(filename="workspace/comparison/Compare-effect-of-num-epochs/val_loss.png")
Examine a sample image. Much of what is described here has been borrowed from the following resources: https://github.com/blaylockbk/pyBKB_v2/blob/master/BB_goes16/mapping_GOES16_data.ipynb, http://edc.occ-data.org/goes16/python/, http://www.ceda.ac.uk/static/media/uploads/ncas-reading-2015/10_read_netcdf_python.pdf. Import the...
%matplotlib inline
from netCDF4 import Dataset
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import os
from pyproj import Proj
import datetime
from mpl_toolkits.basemap import Basemap
from osgeo import gdal

os.chdir("/Users/nathan/Documents/Projects/GOES_Fire_Growth/Raw_Data")
MIT
.ipynb_checkpoints/Raster_manipulation-checkpoint.ipynb
pavlovc2/goes_r_fire
Import dataset and examine dimensions
C_file = Dataset("OR_ABI-L2-CMIPC-M3C07_G16_s20172830932227_e20172830935012_c20172830935048.nc", 'r')
ref_ch7 = C_file.variables['CMI'][:]
#C_file.close()
#C_file = None

print(C_file.file_format)
dims = C_file.dimensions.keys()
print(dims)
for dim in dims:
    print(C_file.dimensions[dim])
NETCDF4 [u'y', u'x', u'number_of_time_bounds', u'band', u'number_of_image_bounds'] <type 'netCDF4._netCDF4.Dimension'>: name = 'y', size = 1500 <type 'netCDF4._netCDF4.Dimension'>: name = 'x', size = 2500 <type 'netCDF4._netCDF4.Dimension'>: name = 'number_of_time_bounds', size = 2 <type 'netCDF4._netCDF4.Dimension'...
Examine Variables
print(C_file.variables.keys())
print(C_file.variables["goes_imager_projection"])
[u'CMI', u'DQF', u't', u'y', u'x', u'time_bounds', u'goes_imager_projection', u'y_image', u'y_image_bounds', u'x_image', u'x_image_bounds', u'nominal_satellite_subpoint_lat', u'nominal_satellite_subpoint_lon', u'nominal_satellite_height', u'geospatial_lat_lon_extent', u'band_wavelength', u'band_id', u'total_number_of_p...
Get time
# Data are stored as seconds since 2000-01-01 12:00:00
secs = C_file.variables['t'][0]
img_date = datetime.datetime(2000, 1, 1, 12) + datetime.timedelta(seconds = secs)
Get image data
b = C_file.variables['CMI']

# Plot it
plt.figure(figsize = [8,8])
plt.imshow(b)
plt.title(img_date)

bt = np.array(b) > 295

# Plot it
plt.figure(figsize = [8,8])
plt.imshow(bt)
plt.title(img_date)
Get projection and location info
sh = C_file.variables['goes_imager_projection'].perspective_point_height
slon = C_file.variables['goes_imager_projection'].longitude_of_projection_origin
ssweep = C_file.variables['goes_imager_projection'].sweep_angle_axis

# Get coordinates
xcoords = C_file.variables['x'][:] * sh
ycoords = C_file.variables['y'][:] * s...
Subset to North Bay area
nbb = b[325:475, 30:120]

# Plot it
plt.figure(figsize = [8,8])
plt.imshow(nbb)
plt.title(img_date)

# Histogram
plt.hist(np.concatenate(nbb))

nbbt = nbb > 295

# Plot it
plt.figure(figsize = [8,8])
plt.imshow(nbbt)
plt.title(img_date)
Try writing raster
driver = gdal.GetDriverByName('GTiff')
new_file = driver.Create('test_band1.tif',
                         C_file.dimensions['x'].size,  # number of columns
                         C_file.dimensions['y'].size,  # number of rows
                         1,                            # number of bands
                         gdal.GDT_Float32)             # datat...
Help on getset descriptor _proj.Proj.srs: srs ['__call__', '__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_fwd',...
Statistics from Stock Data. In this lab we will load stock data into a Pandas DataFrame and calculate some statistics on it. We will be working with stock data from Google, Apple, and Amazon. All the stock data was downloaded from Yahoo Finance in CSV format. In your workspace you should have a file named GOOG.csv conta...
# We import pandas into Python
import pandas as pd

# We read a stock data file into a DataFrame and see what it looks like
df = pd.read_csv('./GOOG.csv')

# We display the first 5 rows of the DataFrame
df.head()
MIT
notes/02 Pandas Mini-Project/Statistics from Stock Data.ipynb
jedrzejpolaczek/AIPND
We clearly see that the DataFrame has automatically labeled the row indices using integers and has labeled the columns of the DataFrame using the names of the columns in the CSV file. To Do: You will now load the stock data from Google, Apple, and Amazon into separate DataFrames. However, for each stock you will ...
# We load the Google stock data into a DataFrame
google_stock = pd.read_csv('./GOOG.csv', parse_dates=True)

# We load the Apple stock data into a DataFrame
apple_stock = pd.read_csv('./AAPL.csv', parse_dates=True)

# We load the Amazon stock data into a DataFrame
amazon_stock = pd.read_csv('./AMZN.csv', parse_dates=Tr...
You can check that you have loaded the data correctly by displaying the head of the DataFrames.
# We display the google_stock DataFrame
google_stock.head()
You will now join the three DataFrames above to create a single new DataFrame that contains all the `Adj Close` for all the stocks. Let's start by creating an empty DataFrame that has as row indices calendar days between `2000-01-01` and `2016-12-31`. We will use the `pd.date_range()` function to create the calendar d...
# We create calendar dates between '2000-01-01' and '2016-12-31'
dates = pd.date_range('2000-01-01', '2016-12-31')

# We create an empty DataFrame that uses the above dates as indices
all_stocks = pd.DataFrame(index = dates)
To Do: You will now join the individual DataFrames, `google_stock`, `apple_stock`, and `amazon_stock`, to the `all_stocks` DataFrame. However, before you do this, it is necessary to change the name of the columns in each of the three dataframes. This is because the column labels in the `all_stocks` dataframe m...
# Change the Adj Close column label to Google
google_stock = google_stock.rename(columns = {'Adj Close': 'Google'})

# Change the Adj Close column label to Apple
apple_stock = apple_stock.rename(columns = {'Adj Close': 'Apple'})

# Change the Adj Close column label to Amazon
amazon_stock = amazon_stock.rename(columns =...
You can check that the column labels have been changed correctly by displaying the dataframes.
# We display the google_stock DataFrame
google_stock.head()

# We display the apple_stock DataFrame
apple_stock.head()

# We display the amazon_stock DataFrame
amazon_stock.head()
Now that we have unique column labels, we can join the individual DataFrames to the `all_stocks` DataFrame. For this we will use the `dataframe.join()` function. The function `dataframe1.join(dataframe2)` joins `dataframe1` with `dataframe2`. We will join each dataframe one by one to the `all_stocks` dataframe. Fill in...
# We join the Google stock to all_stocks all_stocks = all_stocks.join(google_stock, lsuffix="_all_stocks", rsuffix="_google") # We join the Apple stock to all_stocks all_stocks = all_stocks.join(apple_stock, lsuffix="_all_stocks", rsuffix="_apple") # We join the Amazon stock to all_stocks all_stocks = all_stocks.join...
_____no_output_____
MIT
notes/02 Pandas Mini-Project/Statistics from Stock Data.ipynb
jedrzejpolaczek/AIPND
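As a minimal sketch of how `join` aligns rows on the index (using made-up frames, not the stock data above), note that dates missing from the right frame come back as *NaN*:

```python
import pandas as pd

# A left frame with three calendar dates and no columns,
# and a right frame covering only the first two dates
dates = pd.date_range('2020-01-01', periods=3)
left = pd.DataFrame(index=dates)
right = pd.DataFrame({'Google': [10.0, 11.0]}, index=dates[:2])

# join aligns on the shared DatetimeIndex; the unmatched date gets NaN
joined = left.join(right)
print(joined)
```

This is why the joins above can leave *NaN* values in `all_stocks`: the calendar index includes non-trading days with no matching stock rows.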
You can check that the dataframes have been joined correctly by displaying the `all_stocks` dataframe
# We display the all_stocks DataFrame all_stocks.head()
_____no_output_____
MIT
notes/02 Pandas Mini-Project/Statistics from Stock Data.ipynb
jedrzejpolaczek/AIPND
To Do: Before we proceed to get some statistics on the stock data, let's first check that we don't have any *NaN* values. In the space below check if there are any *NaN* values in the `all_stocks` dataframe. If there are any, remove any rows that have *NaN* values:
# Check if there are any NaN values in the all_stocks dataframe all_stocks.isnull().sum().sum() # Remove any rows that contain NaN values (dropna returns a new frame, so assign it back) all_stocks = all_stocks.dropna(axis = 0)
_____no_output_____
MIT
notes/02 Pandas Mini-Project/Statistics from Stock Data.ipynb
jedrzejpolaczek/AIPND
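A minimal sketch of the check-then-drop pattern on made-up data (the key point is that `dropna` returns a new frame and must be assigned back):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1.0, np.nan, 3.0],
                   'B': [4.0, 5.0, np.nan]})

# Count the NaN values, then drop every row that contains one
n_missing = df.isnull().sum().sum()
clean = df.dropna(axis=0)
print(n_missing, clean.shape)
```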
You can check that the *NaN* values have been eliminated by displaying the `all_stocks` dataframe
# Check if there are any NaN values in the all_stocks dataframe all_stocks.isnull().sum().sum()
_____no_output_____
MIT
notes/02 Pandas Mini-Project/Statistics from Stock Data.ipynb
jedrzejpolaczek/AIPND
Display the `all_stocks` dataframe and verify that there are no *NaN* values
# We display the all_stocks DataFrame all_stocks.head()
_____no_output_____
MIT
notes/02 Pandas Mini-Project/Statistics from Stock Data.ipynb
jedrzejpolaczek/AIPND
Now that you have eliminated any *NaN* values, we can calculate some basic statistics on the stock prices. Fill in the code below
# Print the average stock price for each stock print(all_stocks.mean()) # Print the median stock price for each stock print(all_stocks.median()) # Print the standard deviation of the stock price for each stock print(all_stocks.std()) # Print the cor...
_____no_output_____
MIT
notes/02 Pandas Mini-Project/Statistics from Stock Data.ipynb
jedrzejpolaczek/AIPND
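On a tiny made-up frame, the same statistics look like this (here `Y` is exactly `2 * X`, so the pairwise correlation is 1.0):

```python
import pandas as pd

prices = pd.DataFrame({'X': [1.0, 2.0, 3.0, 4.0],
                       'Y': [2.0, 4.0, 6.0, 8.0]})

print(prices.mean())    # per-column mean
print(prices.median())  # per-column median
print(prices.std())     # per-column sample standard deviation
print(prices.corr())    # pairwise correlation matrix
```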
We will now look at how we can compute some rolling statistics, also known as moving statistics. We can calculate for example the rolling mean (moving average) of the Google stock price by using the Pandas `dataframe.rolling().mean()` method. The `dataframe.rolling(N).mean()` calculates the rolling mean over an `N`-day...
# We compute the rolling mean using a 150-Day window for Google stock rollingMean = all_stocks['Google'].rolling(150).mean()
_____no_output_____
MIT
notes/02 Pandas Mini-Project/Statistics from Stock Data.ipynb
jedrzejpolaczek/AIPND
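A small sketch of the rolling-window behavior on a made-up series: with a window of `N`, the first `N - 1` entries are *NaN* because the window is not yet full:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])

# 3-element rolling mean; entries 0 and 1 are NaN
rm = s.rolling(3).mean()
print(rm.tolist())
```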
We can also visualize the rolling mean by plotting the data in our dataframe. In the following lessons you will learn how to use **Matplotlib** to visualize data. For now I will just import matplotlib and plot the Google stock data on top of the rolling mean. You can play around by changing the rolling mean window and ...
# This allows plots to be rendered in the notebook %matplotlib inline # We import matplotlib into Python import matplotlib.pyplot as plt # We plot the Google stock data plt.plot(all_stocks['Google']) # We plot the rolling mean on top of our Google stock data plt.plot(rollingMean) plt.legend(['Google Stock Price', '...
_____no_output_____
MIT
notes/02 Pandas Mini-Project/Statistics from Stock Data.ipynb
jedrzejpolaczek/AIPND
Outlier handling
def dropData(train): # Drop some of the outliers train = train[train.area <= 200] train = train[(train.tradeMoney <= 16000) & (train.tradeMoney >= 700)] train.drop(train[(train['totalFloor'] == 0)].index, inplace=True) # sns.regplot(x=data_train['area'],y=data_train['tradeMoney']) # plt.show() return train #数...
_____no_output_____
MIT
Trademoney_Prediction.ipynb
aomike/Team_Learning_RentMoney_Prediction
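A minimal sketch of the same range filters on made-up rows (the thresholds mirror the ones used in `dropData` above):

```python
import pandas as pd

df = pd.DataFrame({'area': [50, 120, 300],
                   'tradeMoney': [3000, 8000, 500]})

# Keep only rows within plausible ranges; the third row fails both filters
df = df[df.area <= 200]
df = df[(df.tradeMoney <= 16000) & (df.tradeMoney >= 700)]
print(df)
```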
Missing-value handling and data transformation
def preprocessingData(data): # Fill in missing values data['rentType'][data['rentType'] == '--'] = '未知方式' # Convert object-type columns columns = ['houseFloor', 'houseToward', 'houseDecoration', 'communityName', 'plate'] # 'rentType', 'houseType', for feature in columns: data[feature] = LabelEncoder().fit_transform(d...
_____no_output_____
MIT
Trademoney_Prediction.ipynb
aomike/Team_Learning_RentMoney_Prediction
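A minimal sketch of what `LabelEncoder` does to one string column (made-up values; classes are assigned integer codes in sorted order):

```python
from sklearn.preprocessing import LabelEncoder

# 'east' < 'north' < 'south' alphabetically, so codes are 0, 1, 2
codes = LabelEncoder().fit_transform(['south', 'north', 'south', 'east'])
print(codes)
```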
Feature selection
from sklearn.ensemble import RandomForestRegressor # Train a random forest model and use its feature_importances_ attribute to get each feature's importance score rf = RandomForestRegressor() train = data_train.drop('tradeMoney', axis=1) train = train.fillna(0) y_pred = data_train['tradeMoney'] rf.fit(train, y_pred) print("Features sorted by their scor...
_____no_output_____
MIT
Trademoney_Prediction.ipynb
aomike/Team_Learning_RentMoney_Prediction
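A self-contained sketch of reading `feature_importances_` on synthetic data (hypothetical features `f0`–`f2`, where only `f0` drives the target):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.RandomState(0)
X = rng.rand(200, 3)
y = 10 * X[:, 0] + 0.01 * rng.rand(200)  # target depends only on f0

rf = RandomForestRegressor(n_estimators=50, random_state=0)
rf.fit(X, y)

# Importances sum to 1; f0 should dominate
print(sorted(zip(rf.feature_importances_, ['f0', 'f1', 'f2']), reverse=True))
```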
![image.png](attachment:image.png)
import warnings warnings.filterwarnings('ignore') from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" from sklearn.linear_model import ElasticNetCV, LassoCV, RidgeCV from sklearn.ensemble import GradientBoostingRegressor from sklearn.kernel_ridge import KernelRidg...
_____no_output_____
MIT
Trademoney_Prediction.ipynb
aomike/Team_Learning_RentMoney_Prediction
You can put video within your notebook using `%%html`, writing literal strings or with `IPython.display.HTML`:
%%html <video width="480" controls poster="https://archive.org/download/WebmVp8Vorbis/webmvp8.gif" > <source src="https://archive.org/download/WebmVp8Vorbis/webmvp8.webm" type="video/webm"> <source src="https://archive.org/download/WebmVp8Vorbis/webmvp8_512kb.mp4" type="video/mp4"> <source ...
_____no_output_____
BSD-3-Clause
example-notebooks/exploring-elements.ipynb
hussainsultan/vdom
With vdom, we can create it declaratively
vid = video(source( src="https://archive.org/download/WebmVp8Vorbis/webmvp8.webm", type="video/webm"), source( src="https://archive.org/download/WebmVp8Vorbis/webmvp8_512kb.mp4", type="video/mp4"), source( src="https://archive.org/download/WebmVp8Vorbis/webmvp8.ogv", type="video/ogg"), "Yo...
_____no_output_____
BSD-3-Clause
example-notebooks/exploring-elements.ipynb
hussainsultan/vdom
and display it when we want
vid hand = display(vid, display_id=True)
_____no_output_____
BSD-3-Clause
example-notebooks/exploring-elements.ipynb
hussainsultan/vdom
Since you can change attributes of the element directly with `display` updates, we can turn controls off
attrs = vid.attributes.copy() attrs['controls'] = False attrs['autoplay'] = False hand.update(video(vid.children, **attrs))
_____no_output_____
BSD-3-Clause
example-notebooks/exploring-elements.ipynb
hussainsultan/vdom
There are many more elements available
em('what') strong('bad') abbr("lol", title="laugh out loud") time("10/05/13 at 10 PM", datetime="2013-10-05 22:00") p( "pay attention, because you'll find out that you can", mark(' highlight', style={'backgroundColor': 'blue', 'color': 'white'}), span(" to ", style={ 'backgroundColor': 'yellow' }), " yo...
_____no_output_____
BSD-3-Clause
example-notebooks/exploring-elements.ipynb
hussainsultan/vdom
* Example goal: 1. Implement the groupby function to realize the Split-Apply-Combine strategy of data science* Key points: 1. Groupby: can group on multiple columns at once and run computations within each group 2. Split: break the large dataset into small, independently computable datasets 3. Apply: compute each small dataset independently 4. Combine: merge the results of the small datasets
# Load the NumPy and Pandas packages import numpy as np import pandas as pd # Check that they loaded correctly, and their versions print(np) print(np.__version__) print(pd) print(pd.__version__)
<module 'numpy' from 'D:\\anaconda3\\lib\\site-packages\\numpy\\__init__.py'> 1.19.2 <module 'pandas' from 'D:\\anaconda3\\lib\\site-packages\\pandas\\__init__.py'> 1.1.3
MIT
Sample/Day_16_Sample.ipynb
sueshow/Data_Science_Marathon
[Basics 16 = Advanced 15]
score_df = pd.DataFrame([[1,50,80,70,'boy'], [2,60,45,50,'boy'], [3,98,43,55,'boy'], [4,70,69,89,'boy'], [5,56,79,60,'girl'], [6,60,68,55,'girl'], [7,45,70,77,'girl'], [8,55,77,76,'girl'], [9,25,57,60,'girl'...
_____no_output_____
MIT
Sample/Day_16_Sample.ipynb
sueshow/Data_Science_Marathon
Mean * (Method 1) Separate the data using indexing
boy_score_df = score_df.loc[score_df.sex=='boy'] girl_score_df = score_df.loc[score_df.sex=='girl'] print(boy_score_df.mean()) print(girl_score_df.mean())
math_score 69.50 english_score 59.25 chinese_score 66.00 dtype: float64 math_score 54.833333 english_score 65.166667 chinese_score 61.833333 dtype: float64
MIT
Sample/Day_16_Sample.ipynb
sueshow/Data_Science_Marathon
* (Method 2) Use the groupby method
score_df.groupby('sex').mean()
_____no_output_____
MIT
Sample/Day_16_Sample.ipynb
sueshow/Data_Science_Marathon
* Add a new column: class
score_df['class'] = [1,2,1,2,1,2,1,2,1,2] score_df
_____no_output_____
MIT
Sample/Day_16_Sample.ipynb
sueshow/Data_Science_Marathon
Group By: analyzing multiple columns * Syntax: your_dataframe.groupby(['column to analyze', 'can be several']).aggregation_function() * Split: break the large dataset into small, independently computable datasets, e.g. split into boys' and girls' data * Apply: compute each small dataset independently, e.g. average the scores * Combine: merge the results of the small datasets
score_df.groupby(['sex','class']).mean()
_____no_output_____
MIT
Sample/Day_16_Sample.ipynb
sueshow/Data_Science_Marathon
Multiple analyses on a column * Syntax: your_dataframe.groupby(['column to analyze']).agg(['aggregation function', 'can be several'])
score_df.groupby(['sex']).agg(['mean','std'])
_____no_output_____
MIT
Sample/Day_16_Sample.ipynb
sueshow/Data_Science_Marathon
Multiple analyses on multiple columns * Syntax: your_dataframe.groupby(['column to analyze', 'can be several']).agg(['aggregation function', 'can be several'])
score_df.groupby(['sex','class']).agg(['mean','max'])
_____no_output_____
MIT
Sample/Day_16_Sample.ipynb
sueshow/Data_Science_Marathon
Copyright 2019 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under...
_____no_output_____
Apache-2.0
site/en/r2/tutorials/text/text_generation.ipynb
mullikine/tfdocs
Text generation with an RNNView on TensorFlow.org Run in Google ColabView source on GitHub This tutorial demonstrates how to generate text using a character-based RNN. We will work with a dataset of Shakespeare's writing from Andrej Karpathy's [The Unreasonable Effectiveness of Recurrent Neural Networks](http://kar...
from __future__ import absolute_import, division, print_function, unicode_literals !pip install tensorflow-gpu==2.0.0-alpha0 import tensorflow as tf import numpy as np import os import time
Collecting tensorflow-gpu==2.0.0-alpha0 Successfully installed google-pasta-0.1.4 tb-nightly-1.14.0a20190303 tensorflow-estimator-2.0-preview-1.14.0.dev2019030300 tensorflow-gpu==2.0.0-alpha0-2.0.0.dev20190303
Apache-2.0
site/en/r2/tutorials/text/text_generation.ipynb
mullikine/tfdocs
Download the Shakespeare datasetChange the following line to run this code on your own data.
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt 1122304/1115394 [==============================] - 0s 0us/step
Apache-2.0
site/en/r2/tutorials/text/text_generation.ipynb
mullikine/tfdocs
Read the data. First, look at the text:
# Read, then decode for py2 compat. text = open(path_to_file, 'rb').read().decode(encoding='utf-8') # length of text is the number of characters in it print ('Length of text: {} characters'.format(len(text))) # Take a look at the first 250 characters in text print(text[:250]) # The unique characters in the file vocab =...
65 unique characters
Apache-2.0
site/en/r2/tutorials/text/text_generation.ipynb
mullikine/tfdocs
Process the text Vectorize the textBefore training, we need to map strings to a numerical representation. Create two lookup tables: one mapping characters to numbers, and another for numbers to characters.
# Creating a mapping from unique characters to indices char2idx = {u:i for i, u in enumerate(vocab)} idx2char = np.array(vocab) text_as_int = np.array([char2idx[c] for c in text])
_____no_output_____
Apache-2.0
site/en/r2/tutorials/text/text_generation.ipynb
mullikine/tfdocs
Now we have an integer representation for each character. Notice that each character is mapped to an index from 0 to `len(vocab) - 1`.
print('{') for char,_ in zip(char2idx, range(20)): print(' {:4s}: {:3d},'.format(repr(char), char2idx[char])) print(' ...\n}') # Show how the first 13 characters from the text are mapped to integers print ('{} ---- characters mapped to int ---- > {}'.format(repr(text[:13]), text_as_int[:13]))
'First Citizen' ---- characters mapped to int ---- > [18 47 56 57 58 1 15 47 58 47 64 43 52]
Apache-2.0
site/en/r2/tutorials/text/text_generation.ipynb
mullikine/tfdocs
The prediction task Given a character, or a sequence of characters, what is the most probable next character? This is the task we're training the model to perform. The input to the model will be a sequence of characters, and we train the model to predict the output—the following character at each time step.Since RNNs ...
# The maximum length sentence we want for a single input in characters seq_length = 100 examples_per_epoch = len(text)//seq_length # Create training examples / targets char_dataset = tf.data.Dataset.from_tensor_slices(text_as_int) for i in char_dataset.take(5): print(idx2char[i.numpy()])
F i r s t
Apache-2.0
site/en/r2/tutorials/text/text_generation.ipynb
mullikine/tfdocs
The `batch` method lets us easily convert these individual characters to sequences of the desired size.
sequences = char_dataset.batch(seq_length+1, drop_remainder=True) for item in sequences.take(5): print(repr(''.join(idx2char[item.numpy()])))
'First Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou ' 'are all resolved rather to die than to famish?\n\nAll:\nResolved. resolved.\n\nFirst Citizen:\nFirst, you k' "now Caius Marcius is chief enemy to the people.\n\nAll:\nWe know't, we know't.\n\nFirst Citizen:\n...
Apache-2.0
site/en/r2/tutorials/text/text_generation.ipynb
mullikine/tfdocs