markdown: stringlengths 0 – 1.02M
code: stringlengths 0 – 832k
output: stringlengths 0 – 1.02M
license: stringlengths 3 – 36
path: stringlengths 6 – 265
repo_name: stringlengths 6 – 127
========= To Do Training =========
Add the following features:
* Replace embedding with GPU-compatible method
* Decay training rate
* Downsample frequent terms
* Increase number of negative samples
* Work over full corpus, then count iterations of full corpus
==================================
# Input data, labels train_inputs = tf.placeholder(tf.int32, shape=[batch_size]) train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1]) # Embedding lookup table currently only implemented in CPU with tf.name_scope("embeddings"): embeddings = tf.Variable( tf.random_uniform([vocabulary_size, embedding...
_____no_output_____
MIT
Code/Archive/1_PrepCorpus-Copy1.ipynb
DannyGsGit/GTC2018Talk
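One of the to-do items above, downsampling frequent terms, is commonly implemented with the subsampling rule from the original word2vec work: keep word $w$ with probability $\sqrt{t/f(w)}$, where $f(w)$ is its relative frequency. A minimal NumPy sketch, assuming the usual threshold $t=10^{-5}$ (the function name and toy counts here are illustrative, not from the notebook):

```python
import numpy as np

def keep_probability(word_counts, t=1e-5):
    """P(keep w) = min(1, sqrt(t / f(w))), where f(w) is the relative frequency."""
    f = np.asarray(word_counts, dtype=float)
    f = f / f.sum()
    return np.minimum(1.0, np.sqrt(t / f))

counts = [1000, 10, 1]          # toy corpus: one very frequent word, two rarer ones
p = keep_probability(counts)    # frequent words are dropped far more aggressively
```

Rarer words get a strictly higher keep probability, which is exactly the effect the to-do item asks for.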
Start-to-Finish Example: Scalar Field Collapse Authors: Leonardo Werneck & Zachariah B. Etienne This module sets up spherically symmetric, time-symmetric initial data for a scalar field collapse in Spherical coordinates, as [documented in this NRPy+ module](Tutorial-ADM_Initial_Data-ScalarField.ipynb) (the initial dat...
# Step P1: Import needed NRPy+ core modules: from outputC import lhrh,outputC,outCfunction # NRPy+: Core C code output module import NRPy_param_funcs as par # NRPy+: Parameter interface import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends import finite_difference as ...
_____no_output_____
BSD-2-Clause
Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.ipynb
fedelopezar/nrpytutorial
Step 1.a: Output needed C code for finding the minimum proper distance between grid points, needed for [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673)-limited timestep \[Back to [top](toc)\]$$\label{cfl}$$In order for our explicit-timestepping numeric...
# Output the find_timestep() function to a C file. rfm.out_timestep_func_to_file(os.path.join(Ccodesdir,"find_timestep.h"))
_____no_output_____
BSD-2-Clause
Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.ipynb
fedelopezar/nrpytutorial
Step 2: Set up ADM initial data for the Scalar Field \[Back to [top](toc)\]$$\label{initial_data}$$As documented [in the scalar field Gaussian pulse initial data NRPy+ tutorial notebook](Tutorial-ADM_Initial_Data-ScalarField.ipynb), we will now set up the scalar field initial data, storing the densely-sampled result ...
!pip install scipy numpy > /dev/null
_____no_output_____
BSD-2-Clause
Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.ipynb
fedelopezar/nrpytutorial
Next call the `ScalarField_InitialData()` function from the [ScalarField/ScalarField_InitialData.py](../edit/ScalarField/ScalarField_InitialData.py) NRPy+ module (see the [tutorial notebook](Tutorial-ADM_Initial_Data-ScalarField.ipynb)).
# Step 2.a: Import necessary Python and NRPy+ modules import ScalarField.ScalarField_InitialData as sfid # Step 2.b: Set the initial data parameters outputfilename = os.path.join(outdir,"SFID.txt") ID_Family = "Gaussian_pulse" pulse_amplitude = 0.4 pulse_center = 0 pulse_width = 1 Nr = 30000...
Generated the ADM initial data for the gravitational collapse of a massless scalar field in Spherical coordinates. Type of initial condition: Scalar field: "Gaussian" Shell ADM quantities: Time-symmetric Lapse condition: Pre-collapsed Parameters: amplitude = 0....
BSD-2-Clause
Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.ipynb
fedelopezar/nrpytutorial
Step 3: Convert ADM initial data to BSSN-in-curvilinear coordinates \[Back to [top](toc)\]$$\label{adm_id_spacetime}$$This is an automated process, taken care of by [`BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear`](../edit/BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py), and documented ...
import BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear as AtoBnum AtoBnum.Convert_Spherical_or_Cartesian_ADM_to_BSSN_curvilinear("Spherical","ID_scalar_field_ADM_quantities", Ccodesdir=Ccodesdir,loopopts="")
Output C function ID_BSSN_lambdas() to file BSSN_ScalarFieldCollapse_Ccodes/ID_BSSN_lambdas.h Output C function ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs() to file BSSN_ScalarFieldCollapse_Ccodes/ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h Output C function ID_BSSN__ALL_BUT_LAMBDAs() to file BSSN_Sc...
BSD-2-Clause
Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.ipynb
fedelopezar/nrpytutorial
Step 4: Output C code for BSSN spacetime evolution \[Back to [top](toc)\]$$\label{bssn}$$ Step 4.a: Set up the BSSN and ScalarField right-hand-side (RHS) expressions, and add the *rescaled* $T^{\mu\nu}$ source terms \[Back to [top](toc)\]$$\label{bssnrhs}$$`BSSN.BSSN_RHSs()` sets up the RHSs assuming a spacetime vacuu...
import time import BSSN.BSSN_RHSs as rhs import BSSN.BSSN_gauge_RHSs as gaugerhs par.set_parval_from_str("BSSN.BSSN_gauge_RHSs::LapseEvolutionOption", LapseCondition) par.set_parval_from_str("BSSN.BSSN_gauge_RHSs::ShiftEvolutionOption", ShiftCondition) print("Generating symbolic expressions for BSSN RHSs...") start = ...
Generating symbolic expressions for BSSN RHSs... (BENCH) Finished BSSN symbolic expressions in 9.372083902359009 seconds.
BSD-2-Clause
Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.ipynb
fedelopezar/nrpytutorial
Step 4.b: Output the Hamiltonian constraint \[Back to [top](toc)\]$$\label{hamconstraint}$$Next output the C code for evaluating the Hamiltonian constraint [(**Tutorial**)](Tutorial-BSSN_constraints.ipynb). In the absence of numerical error, this constraint should evaluate to zero. However it does not due to numerical...
def Hamiltonian(): start = time.time() print("Generating optimized C code for Hamiltonian constraint. May take a while, depending on CoordSystem.") # Set up the C function for the Hamiltonian RHS desc="Evaluate the Hamiltonian constraint" name="Hamiltonian_constraint" outCfunction( outfi...
_____no_output_____
BSD-2-Clause
Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.ipynb
fedelopezar/nrpytutorial
Step 4.c: Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](toc)\]$$\label{enforce3metric}$$Then enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.0...
def gammadet(): start = time.time() print("Generating optimized C code for gamma constraint. May take a while, depending on CoordSystem.") # Set up the C function for the det(gammahat) = det(gammabar) EGC.output_Enforce_Detgammabar_Constraint_Ccode(Ccodesdir,exprs=enforce_detg_constraint_symb_expressio...
_____no_output_____
BSD-2-Clause
Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.ipynb
fedelopezar/nrpytutorial
Step 4.d: Generate C code kernels for BSSN expressions, in parallel if possible \[Back to [top](toc)\]$$\label{ccodegen}$$
# Step 4.d: C code kernel generation # Step 4.d.i: Create a list of functions we wish to evaluate in parallel funcs = [BSSN_plus_ScalarField_RHSs,Ricci,Hamiltonian,gammadet] try: if os.name == 'nt': # It's a mess to get working in Windows, so we don't bother. :/ # https://medium.com/@grvsinghal/sp...
Generating C code for BSSN RHSs in Spherical coordinates. Output C function rhs_eval() to file BSSN_ScalarFieldCollapse_Ccodes/rhs_eval.h (BENCH) Finished BSSN_RHS C codegen in 17.11899423599243 seconds. Generating C code for Ricci tensor in Spherical coordinates. Output C function Ricci_eval() to file BSSN_ScalarField...
BSD-2-Clause
Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.ipynb
fedelopezar/nrpytutorial
Step 4.e: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](toc)\]$$\label{cparams_rfm_and_domainsize}$$Based on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.Then we outp...
# Step 4.e.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir)) # Step 4.e.ii: Set free_parameters.h # Output to $Ccodesdir/free_parameters.h reference metric parameters based on generic # domain_size,sinh_width,si...
_____no_output_____
BSD-2-Clause
Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.ipynb
fedelopezar/nrpytutorial
Step 5: Set up boundary condition functions for chosen singular, curvilinear coordinate system \[Back to [top](toc)\]$$\label{bc_functs}$$Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb).
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs cbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,"boundary_conditions/"),Cparamspath=os.path.join("../"))
Wrote to file "BSSN_ScalarFieldCollapse_Ccodes/boundary_conditions/parity_conditions_symbolic_dot_products.h" Evolved parity: ( aDD00:4, aDD01:5, aDD02:6, aDD11:7, aDD12:8, aDD22:9, alpha:0, betU0:1, betU1:2, betU2:3, cf:0, hDD00:4, hDD01:5, hDD02:6, hDD11:7, hDD12:8, hDD22:9, lambdaU0:1, lambdaU1:2, lambdaU2:3...
BSD-2-Clause
Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.ipynb
fedelopezar/nrpytutorial
Step 6: The main C code: `ScalarFieldCollapse_Playground.c` \[Back to [top](toc)\]$$\label{main_ccode}$$
# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER), # and set the CFL_FACTOR (which can be overwritten at the command line) with open(os.path.join(Ccodesdir,"ScalarFieldCollapse_Playground_REAL__NGHOSTS__CFL_FACTOR.h"), "w") as file: file.write(""" // Part P0....
Now compiling, should take ~10 seconds... Compiling executable... (EXEC): Executing `gcc -std=gnu99 -Ofast -fopenmp -march=native -funroll-loops BSSN_ScalarFieldCollapse_Ccodes/ScalarFieldCollapse_Playground.c -o BSSN_ScalarFieldCollapse_Ccodes/output/ScalarFieldCollapse_Playground -lm`... (BENCH): Finished executing ...
BSD-2-Clause
Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.ipynb
fedelopezar/nrpytutorial
Step 7: Visualization \[Back to [top](toc)\]$$\label{visualization}$$ Step 7.a: Install `scipy` and download `ffmpeg` if they are not yet installed/downloaded \[Back to [top](toc)\]$$\label{install_download}$$ Note that if you are not running this within `mybinder`, but on a Windows system, `ffmpeg` must be installed ...
!pip install scipy > /dev/null check_for_ffmpeg = !which ffmpeg >/dev/null && echo $? if check_for_ffmpeg != ['0']: print("Couldn't find ffmpeg, so I'll download it.") # Courtesy https://johnvansickle.com/ffmpeg/ !wget http://astro.phys.wvu.edu/zetienne/ffmpeg-static-amd64-johnvansickle.tar.xz !tar Jxf...
_____no_output_____
BSD-2-Clause
Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.ipynb
fedelopezar/nrpytutorial
Step 7.b: Dynamics of the solution \[Back to [top](toc)\]$$\label{movie_dynamics}$$ Step 7.b.i: Generate images for visualization animation \[Back to [top](toc)\]$$\label{genimages}$$ Here we loop through the data files output by the executable compiled and run in [the previous step](mainc), generating a [png](https:/...
## VISUALIZATION ANIMATION, PART 1: Generate PNGs, one per frame of movie ## import numpy as np from scipy.interpolate import griddata import matplotlib.pyplot as plt from matplotlib.pyplot import savefig import glob import sys from matplotlib import animation globby = glob.glob(os.path.join(outdir,'out640-00*.txt'))...
Processing file BSSN_ScalarFieldCollapse_Ccodes/output/out640-00000800.txt
BSD-2-Clause
Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.ipynb
fedelopezar/nrpytutorial
Step 7.b.ii: Generate visualization animation \[Back to [top](toc)\]$$\label{genvideo}$$ In the following step, [ffmpeg](http://ffmpeg.org) is used to generate an [mp4](https://en.wikipedia.org/wiki/MPEG-4) video file, which can be played directly from this Jupyter notebook.
## VISUALIZATION ANIMATION, PART 2: Combine PNGs to generate movie ## # https://stackoverflow.com/questions/14908576/how-to-remove-frame-from-matplotlib-pyplot-figure-vs-matplotlib-figure-frame # https://stackoverflow.com/questions/23176161/animating-pngs-in-matplotlib-using-artistanimation from IPython.display import...
_____no_output_____
BSD-2-Clause
Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.ipynb
fedelopezar/nrpytutorial
Step 7.c: Convergence of constraint violation \[Back to [top](toc)\]$$\label{convergence}$$
from IPython.display import Image os.chdir(outdir) cmd.delete_existing_files("out320*.txt") cmd.Execute("ScalarFieldCollapse_Playground", "320 2 2 "+str(CFL_FACTOR),"out320.txt") os.chdir(os.path.join("..","..")) outfig = os.path.join(outdir,"ScalarFieldCollapse_H_convergence.png") fig = plt.figure() r_640,H_640 ...
(EXEC): Executing `./ScalarFieldCollapse_Playground 320 2 2 0.5`... It: 400 t=15.71 dt=3.93e-02 | 98.3%; ETA 0 s | t/h 14137.17 | gp/s 5.12e+055e+14 (BENCH): Finished executing in 3.873671293258667 seconds.
BSD-2-Clause
Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.ipynb
fedelopezar/nrpytutorial
Step 8: Output this module as $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{output_to_pdf}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, ...
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse")
Created Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.tex, and compiled LaTeX file to PDF file Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.pdf
BSD-2-Clause
Tutorial-Start_to_Finish-BSSNCurvilinear-ScalarField_Collapse.ipynb
fedelopezar/nrpytutorial
To create a chessboard with the Python programming language, I will use two Python libraries: Matplotlib for visualization, and NumPy to build the array that lets us create and visualize a chessboard.
import matplotlib.pyplot as plt import numpy as np from matplotlib.colors import LogNorm dx, dy = 0.015, 0.05 x = np.arange(-4.0, 4.0, dx) y = np.arange(-4.0, 4.0, dy) X, Y = np.meshgrid(x, y) extent = np.min(x), np.max(x), np.min(y), np.max(y) z1 = np.add.outer(range(8), range(8)) % 2 plt.imshow(z1, cmap="binary_r", ...
_____no_output_____
MIT
Chess.ipynb
avishkar2001/DataScienceCoolStuff
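The core of the chessboard trick in the cell above is the parity pattern, separated here from the plotting; a minimal sketch:

```python
import numpy as np

# An 8x8 board: square (i, j) is "light" or "dark" depending on the parity of i + j
board = np.add.outer(range(8), range(8)) % 2
```

Passing `board` to `plt.imshow` with a two-color map (e.g. `cmap="binary_r"`, as the notebook does) then renders the alternating squares.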
Pre-Trained Models Introduction In this lab, you will learn how to leverage pre-trained models to build image classifiers instead of building a model from scratch. Table of Contents 1. Import Libraries and Packages 2. Download Data 3. Define Global Constants 4. Construct ImageDataGenerator Instances 5. Compile...
from keras.preprocessing.image import ImageDataGenerator
Using TensorFlow backend.
MIT
pretrained_model.ipynb
ofyzero/Crusher-Finder-on-Building-
In this lab, we will be using the Keras library to build an image classifier, so let's download the Keras library.
import keras from keras.models import Sequential from keras.layers import Dense
_____no_output_____
MIT
pretrained_model.ipynb
ofyzero/Crusher-Finder-on-Building-
Finally, we will be leveraging the ResNet50 model to build our classifier, so let's download it as well.
from keras.applications import ResNet50 from keras.applications.resnet50 import preprocess_input
_____no_output_____
MIT
pretrained_model.ipynb
ofyzero/Crusher-Finder-on-Building-
Download Data For your convenience, I have placed the data on a server which you can retrieve easily using the **wget** command. So let's run the following line of code to get the data. Given the large size of the image dataset, it might take some time depending on your internet speed.
## get the data !wget https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0321EN/data/concrete_data_week3.zip
--2020-05-05 14:05:56-- https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0321EN/data/concrete_data_week3.zip Resolving s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)... 67.228.254.196 Connecting to s3-api.us-geo.objectstorage.softlayer.net (s3-...
MIT
pretrained_model.ipynb
ofyzero/Crusher-Finder-on-Building-
And now if you check the left directory pane, you should see the zipped file *concrete_data_week3.zip* appear. So, let's go ahead and unzip the file to access the images. Given the large number of images in the dataset, this might take a couple of minutes, so please be patient, and wait until the code finishes running.
!unzip -n concrete_data_week3.zip
Archive: concrete_data_week3.zip
MIT
pretrained_model.ipynb
ofyzero/Crusher-Finder-on-Building-
Now, you should see the folder *concrete_data_week3* appear in the left pane. If you open this folder by double-clicking on it, you will find that it contains two folders: *train* and *valid*. And if you explore these folders, you will find that each contains two subfolders: *positive* and *negative*. These are the sam...
num_classes = 2 image_resize = 224 batch_size_training = 100 batch_size_validation = 100
_____no_output_____
MIT
pretrained_model.ipynb
ofyzero/Crusher-Finder-on-Building-
Construct ImageDataGenerator Instances In order to instantiate an ImageDataGenerator instance, we will set the **preprocessing_function** argument to *preprocess_input*, which we imported from **keras.applications.resnet50**, in order to preprocess our images the same way the images used to train the ResNet50 model were pr...
data_generator = ImageDataGenerator( preprocessing_function=preprocess_input, )
_____no_output_____
MIT
pretrained_model.ipynb
ofyzero/Crusher-Finder-on-Building-
Next, we will use the *flow_from_directory* method to get the training images as follows:
train_generator = data_generator.flow_from_directory( 'concrete_data_week3/train', target_size=(image_resize, image_resize), batch_size=batch_size_training, class_mode='categorical')
Found 30001 images belonging to 2 classes.
MIT
pretrained_model.ipynb
ofyzero/Crusher-Finder-on-Building-
**Your Turn**: Use the *flow_from_directory* method to get the validation images and assign the result to **validation_generator**.
## Type your answer here validation_generator = data_generator.flow_from_directory( 'concrete_data_week3/valid', target_size=(image_resize, image_resize), batch_size=batch_size_validation, class_mode='categorical')
Found 10001 images belonging to 2 classes.
MIT
pretrained_model.ipynb
ofyzero/Crusher-Finder-on-Building-
Double-click __here__ for the solution.<!-- The correct answer is:validation_generator = data_generator.flow_from_directory( 'concrete_data_week3/valid', target_size=(image_resize, image_resize), batch_size=batch_size_validation, class_mode='categorical')--> Build, Compile and Fit Model In this section, w...
model = Sequential()
_____no_output_____
MIT
pretrained_model.ipynb
ofyzero/Crusher-Finder-on-Building-
Next, we will add the ResNet50 pre-trained model to our model. However, note that we don't want to include the top layer or the output layer of the pre-trained model. We actually want to define our own output layer and train it so that it is optimized for our image dataset. In order to leave out the output layer of the...
model.add(ResNet50( include_top=False, pooling='avg', weights='imagenet', ))
_____no_output_____
MIT
pretrained_model.ipynb
ofyzero/Crusher-Finder-on-Building-
Then, we will define our output layer as a **Dense** layer that consists of two nodes and uses the **Softmax** function as the activation function.
model.add(Dense(num_classes, activation='softmax'))
_____no_output_____
MIT
pretrained_model.ipynb
ofyzero/Crusher-Finder-on-Building-
You can access the model's layers using the *layers* attribute of our model object.
model.layers
_____no_output_____
MIT
pretrained_model.ipynb
ofyzero/Crusher-Finder-on-Building-
You can see that our model is composed of two sets of layers. The first set is the layers pertaining to ResNet50 and the second set is a single layer, which is our Dense layer that we defined above. You can access the ResNet50 layers by running the following:
model.layers[0].layers
_____no_output_____
MIT
pretrained_model.ipynb
ofyzero/Crusher-Finder-on-Building-
Since the ResNet50 model has already been trained, we want to tell our model not to bother with training the ResNet part, but to train only our dense output layer. To do that, we run the following.
model.layers[0].trainable = False
_____no_output_____
MIT
pretrained_model.ipynb
ofyzero/Crusher-Finder-on-Building-
And now, using the *summary* method of the model, we can see how many parameters we will need to optimize in order to train the output layer.
model.summary()
Model: "sequential_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= resnet50 (Model) (None, 2048) 23587712 __________________________________...
MIT
pretrained_model.ipynb
ofyzero/Crusher-Finder-on-Building-
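As a sanity check on the summary, the trainable-parameter count can be computed by hand: with `pooling='avg'` the ResNet50 base emits a 2048-dimensional feature vector, and the only trainable layer is the Dense head. A small arithmetic sketch (the 2048 feature size is taken from the summary output above):

```python
resnet_features = 2048   # output size of ResNet50 with include_top=False, pooling='avg'
num_classes = 2          # positive / negative crack classes

# Dense head: one weight per (feature, class) pair, plus one bias per class
trainable_params = resnet_features * num_classes + num_classes
```

This gives 4,098 trainable parameters, versus the ~23.6 million frozen parameters inside the ResNet50 base.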
Next we compile our model using the **adam** optimizer.
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
_____no_output_____
MIT
pretrained_model.ipynb
ofyzero/Crusher-Finder-on-Building-
Before we are able to start the training process, with an ImageDataGenerator, we will need to define how many steps compose an epoch. Typically, that is the number of images divided by the batch size. Therefore, we define our steps per epoch as follows:
steps_per_epoch_training = len(train_generator) steps_per_epoch_validation = len(validation_generator) num_epochs = 2
_____no_output_____
MIT
pretrained_model.ipynb
ofyzero/Crusher-Finder-on-Building-
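With 30001 training and 10001 validation images at batch size 100, `len(generator)` is simply the number of batches, i.e. the ceiling of images divided by batch size; a quick arithmetic sketch of what the cell above computes:

```python
import math

def steps_per_epoch(num_images, batch_size):
    # One step per batch; the final, partial batch still counts as a step
    return math.ceil(num_images / batch_size)

steps_train = steps_per_epoch(30001, 100)        # 301 training steps per epoch
steps_validation = steps_per_epoch(10001, 100)   # 101 validation steps per epoch
```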
Finally, we are ready to start training our model. Unlike conventional deep learning training, where data is not streamed from a directory, with an ImageDataGenerator, where data is generated in batches, we use the **fit_generator** method.
fit_history = model.fit_generator( train_generator, steps_per_epoch=steps_per_epoch_training, epochs=num_epochs, validation_data=validation_generator, validation_steps=steps_per_epoch_validation, verbose=1, )
Epoch 1/2 52/301 [====>.........................] - ETA: 41:44 - loss: 0.1047 - accuracy: 0.9704...
MIT
pretrained_model.ipynb
ofyzero/Crusher-Finder-on-Building-
Now that the model is trained, you are ready to start using it to classify images. Since training can take a long time when building deep learning models, it is always a good idea to save your model once the training is complete if you believe you will be using the model again later. You will be using this model in the...
model.save('classifier_resnet_model.h5')
_____no_output_____
MIT
pretrained_model.ipynb
ofyzero/Crusher-Finder-on-Building-
Acquire data
mnist = tf.keras.datasets.mnist (train_x,train_y), (test_x, test_y) = mnist.load_data() batch_size = 32 # 32 is default but specify anyway epochs=10
_____no_output_____
MIT
Chapter02/Chapter2_Keras_sequential_models_TF2_alpha.ipynb
chetannitk/Tensorflow-2.0-Quick-Start-Guide
Normalise data
train_x, test_x = tf.cast(train_x/255.0, tf.float32), tf.cast(test_x/255.0, tf.float32) train_y, test_y = tf.cast(train_y,tf.int64),tf.cast(test_y,tf.int64)
_____no_output_____
MIT
Chapter02/Chapter2_Keras_sequential_models_TF2_alpha.ipynb
chetannitk/Tensorflow-2.0-Quick-Start-Guide
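The normalisation above is just an elementwise rescale of the 0–255 pixel values into [0, 1]; the same thing can be checked with NumPy alone (a sketch, independent of TensorFlow):

```python
import numpy as np

pixels = np.array([0, 128, 255], dtype=np.uint8)   # representative MNIST pixel values
scaled = pixels.astype(np.float32) / 255.0          # map [0, 255] -> [0.0, 1.0]
```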
Sequential Model 1
model1 = tf.keras.models.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(512,activation=tf.nn.relu), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10,activation=tf.nn.softmax) ]) optimiser = tf.keras.optimizers.Adam() model1.compile (optimizer= optimiser, loss='sparse_categorical_crossentropy', ...
_____no_output_____
MIT
Chapter02/Chapter2_Keras_sequential_models_TF2_alpha.ipynb
chetannitk/Tensorflow-2.0-Quick-Start-Guide
Sequential Model 2
model2 = tf.keras.models.Sequential(); model2.add(tf.keras.layers.Flatten()) model2.add(tf.keras.layers.Dense(512, activation='relu')) model2.add(tf.keras.layers.Dropout(0.2)) model2.add(tf.keras.layers.Dense(10,activation=tf.nn.softmax)) optimiser = tf.keras.optimizers.Adam() model2.compile (optimizer= optimiser, loss...
_____no_output_____
MIT
Chapter02/Chapter2_Keras_sequential_models_TF2_alpha.ipynb
chetannitk/Tensorflow-2.0-Quick-Start-Guide
https://github.com/sachinruk/KerasQuantileModel Deep Quantile Regression. One area that Deep Learning has not explored extensively is the uncertainty in estimates. However, as far as decision making goes, most people actually require quantiles as opposed to true uncertainty in an estimate. E.g., for a given age the weight...
import pandas as pd import numpy as np import matplotlib.pyplot as plt from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation import tensorflow.keras.backend as K %matplotlib inline mcycle = pd.read_csv('mcycle',delimiter='\t')
_____no_output_____
MIT
quantile_regression/Keras Quantile Model.ipynb
Alex2Yang97/internship_explore_work
Standardise the inputs and outputs so that training is easier. I haven't saved the means and standard deviations, but you should.
mcycle.times = (mcycle.times - mcycle.times.mean())/mcycle.times.std() mcycle.accel = (mcycle.accel - mcycle.accel.mean())/mcycle.accel.std() mcycle model = Sequential() model.add(Dense(units=10, input_dim=1,activation='relu')) model.add(Dense(units=10, activation='relu')) model.add(Dense(1)) model.compile(...
_____no_output_____
MIT
quantile_regression/Keras Quantile Model.ipynb
Alex2Yang97/internship_explore_work
Quantiles 0.1, 0.5, 0.9. The loss for an individual data point is defined as:$$\begin{align}\mathcal{L}(\xi_i|\alpha)=\begin{cases}\alpha \xi_i &\text{if }\xi_i\ge 0, \\(\alpha-1) \xi_i &\text{if }\xi_i< 0.\end{cases}\end{align}$$where $\alpha$ is the required quantile, $\xi_i = y_i - f(\mathbf{x}_i)$, and $f(\mathbf...
def tilted_loss(q,y,f): e = (y-f) return K.mean(K.maximum(q*e, (q-1)*e), axis=-1) def mcycleModel(): model = Sequential() model.add(Dense(units=32, input_dim=1,activation='relu')) model.add(Dense(units=32,activation='relu')) model.add(Dense(1)) return model qs = [0.1, 0.5, 0.9] t_test ...
_____no_output_____
MIT
quantile_regression/Keras Quantile Model.ipynb
Alex2Yang97/internship_explore_work
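The tilted (pinball) loss defined above can be checked numerically without Keras; a NumPy sketch of the same formula:

```python
import numpy as np

def tilted_loss(q, y, f):
    """Pinball loss: q*e where e = y - f >= 0, (q - 1)*e where e < 0, averaged."""
    e = y - f
    return np.mean(np.maximum(q * e, (q - 1) * e))

# With q = 0.9, an under-prediction (e > 0) costs nine times more than an
# equally sized over-prediction, pushing the fit toward the 90th percentile.
under = tilted_loss(0.9, np.array([1.0]), np.array([0.0]))  # e = +1 -> 0.9
over  = tilted_loss(0.9, np.array([0.0]), np.array([1.0]))  # e = -1 -> 0.1
```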
Reflect Tables into SQLAlchemy ORM
# Python SQL toolkit and Object Relational Mapper import sqlalchemy from sqlalchemy.ext.automap import automap_base from sqlalchemy.orm import Session from sqlalchemy import create_engine, func # create engine to hawaii.sqlite engine = create_engine("sqlite:///Resources/hawaii.sqlite") # reflect an existing database in...
_____no_output_____
ADSL
temp_analysis_bonus_2_starter.ipynb
mattkenney9/SQL-Alchemy-Challenge
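The automap reflection pattern used above can be exercised against a throwaway in-memory SQLite database; a minimal sketch (the `station` table and its columns here are hypothetical stand-ins, not the actual hawaii.sqlite schema):

```python
from sqlalchemy import create_engine, text
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session

engine = create_engine("sqlite:///:memory:")
with engine.begin() as conn:
    # A toy table to reflect; automap needs a primary key to map a class
    conn.execute(text("CREATE TABLE station (id INTEGER PRIMARY KEY, name TEXT)"))
    conn.execute(text("INSERT INTO station (name) VALUES ('WAIKIKI')"))

Base = automap_base()
Base.prepare(autoload_with=engine)   # reflect existing tables into mapped classes
Station = Base.classes.station       # classes are keyed by table name

session = Session(engine)
rows = session.query(Station.name).all()
session.close()
```

With the real hawaii.sqlite file, the same `Base.classes` lookup yields the reflected `measurement` and `station` classes used in the queries that follow.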
Bonus Challenge Assignment: Temperature Analysis II
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d' # and return the minimum, maximum, and average temperatures for that range of dates def calc_temps(start_date, end_date): """TMIN, TAVG, and TMAX for a list of dates. Args: start_date (string): A date ...
_____no_output_____
ADSL
temp_analysis_bonus_2_starter.ipynb
mattkenney9/SQL-Alchemy-Challenge
Daily Rainfall Average
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's # matching dates. # Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation session.query(Measurement.station, func.sum(Measurement.prcp), Station.nam...
_____no_output_____
ADSL
temp_analysis_bonus_2_starter.ipynb
mattkenney9/SQL-Alchemy-Challenge
Close Session
session.close()
_____no_output_____
ADSL
temp_analysis_bonus_2_starter.ipynb
mattkenney9/SQL-Alchemy-Challenge
*** Load Stanford Data (Exported from R)
dfp = pd.read_csv('~/stanford.csv') dfp.head(5)
_____no_output_____
CC-BY-4.0
sections/appendixes/example_code/survival.ipynb
mepland/data_science_notes
*** [Kaplan-Meier Model](https://lifelines.readthedocs.io/en/latest/Quickstart.html#kaplan-meier-nelson-aalen-and-parametric-models)
def km_plot_by_factor(factor, dfp, err=True, tablim=2): kmf= [] fig = plt.figure(figsize=(12, 8)) ax = plt.axes() for i,v in enumerate(sorted(dfp[factor].unique())[::-1]): # Create a fitter kmf.append(KaplanMeierFitter()) kmf[i].fit(dfp.query(f'agecat == "{v}"')['time'], event_ob...
_____no_output_____
CC-BY-4.0
sections/appendixes/example_code/survival.ipynb
mepland/data_science_notes
*** [Cox Model](https://lifelines.readthedocs.io/en/latest/Survival%20Regression.html#fitting-the-regression) Fit Model
covariates = ['age'] cph = CoxPHFitter() cph.fit(dfp[['time', 'status']+covariates], duration_col='time', event_col='status') cph.print_summary(model='Cox Age', decimals=4) cph.log_likelihood_ratio_test()
_____no_output_____
CC-BY-4.0
sections/appendixes/example_code/survival.ipynb
mepland/data_science_notes
[Plot the hazard ratios](https://lifelines.readthedocs.io/en/latest/Survival%20Regression.html#plotting-the-coefficients)
fig = plt.figure(figsize=(12, 8)) ax = cph.plot(hazard_ratios=True) fig.tight_layout() if print_plots: fig.savefig(f'{output_path}/cox_hrs.pdf')
_____no_output_____
CC-BY-4.0
sections/appendixes/example_code/survival.ipynb
mepland/data_science_notes
[Plot the effect of varying a covariate](https://lifelines.readthedocs.io/en/latest/Survival%20Regression.html#plotting-the-effect-of-varying-a-covariate). [Note that the mean age is being used as the baseline value](https://github.com/CamDavidsonPilon/lifelines/issues/543).
for covariate in covariates: values = sorted(list(dfp[covariate].unique())) if len(values) > 3: values = np.around(np.linspace(dfp[covariate].min(), dfp[covariate].max(), 4), 2) ax = cph.plot_partial_effects_on_outcome(covariates=covariate, values=values, cmap='coolwarm') ax.set_title(f'Effects ...
_____no_output_____
CC-BY-4.0
sections/appendixes/example_code/survival.ipynb
mepland/data_science_notes
Check Assumptions and Fit [Test the proportional hazard assumptions](https://lifelines.readthedocs.io/en/latest/jupyter_notebooks/Proportional%20hazard%20assumption.html). This would produce Schoenfeld residual plots if needed.
cph.check_assumptions(dfp[['time', 'status']+covariates], p_value_threshold=0.01, show_plots=True) proportional_hazard_test(cph, dfp[['time', 'status']+covariates], time_transform='rank')
_____no_output_____
CC-BY-4.0
sections/appendixes/example_code/survival.ipynb
mepland/data_science_notes
[Create Linear Predictors for Residual Plots](https://lifelines.readthedocs.io/en/latest/fitters/regression/CoxPHFitter.html#lifelines.fitters.coxph_fitter.SemiParametricPHFitter.predict_log_partial_hazard) and set up plotting
dfp_linear_predictors = cph.predict_log_partial_hazard(dfp[covariates]) def plot_residuals(dfp, residual_type, x_axis='time', x_axis_label='Time', y_axis_label_suffix=' Residuals'): fig = plt.figure(figsize=(12, 8)) ax = plt.axes() for status in [True, False]: if status: label = 'Event'...
_____no_output_____
CC-BY-4.0
sections/appendixes/example_code/survival.ipynb
mepland/data_science_notes
Scaled Schoenfeld
for covariate in covariates: plot_residuals(dfp_residuals, f'scaled_schoenfeld_{covariate}')
_____no_output_____
CC-BY-4.0
sections/appendixes/example_code/survival.ipynb
mepland/data_science_notes
[Martingale Residuals](https://lifelines.readthedocs.io/en/latest/jupyter_notebooks/Cox%20residuals.html#Martingale-residuals)
plot_residuals(dfp_residuals, 'martingale') plot_residuals(dfp_residuals, 'martingale', x_axis='linear_predictors', x_axis_label='Linear Predictors')
_____no_output_____
CC-BY-4.0
sections/appendixes/example_code/survival.ipynb
mepland/data_science_notes
[Deviance Residuals](https://lifelines.readthedocs.io/en/latest/jupyter_notebooks/Cox%20residuals.html#Deviance-residuals)
plot_residuals(dfp_residuals, 'deviance') plot_residuals(dfp_residuals, 'deviance', x_axis='linear_predictors', x_axis_label='Linear Predictors')
_____no_output_____
CC-BY-4.0
sections/appendixes/example_code/survival.ipynb
mepland/data_science_notes
dfbeta
for covariate in covariates: plot_residuals(dfp_residuals, f'dfbeta_{covariate}', x_axis='Observation', x_axis_label='Observation', y_axis_label_suffix='')
_____no_output_____
CC-BY-4.0
sections/appendixes/example_code/survival.ipynb
mepland/data_science_notes
Imports
import os import pandas as pd import numpy as np from matplotlib import pyplot as plt from sklearn.preprocessing import OneHotEncoder from sklearn.preprocessing import LabelEncoder from sklearn.model_selection import train_test_split %matplotlib inline
_____no_output_____
MIT
Iris.ipynb
keshav11/iris
Loading data
data_path = 'data' col_names = ['sepal length in cm', 'sepal width in cm', 'petal length in cm', 'petal width in cm', 'class'] iris_data = pd.read_csv(os.path.join(data_path, 'iris.data'), header=None, names=col_names) iris_data.head() iris_data.describe() iris_data.plot() plt.show()
_____no_output_____
MIT
Iris.ipynb
keshav11/iris
Encoding: label encoding and one-hot encoding
X = iris_data.values[:,0:4] Y = iris_data.values[:,4] # label encoder label_encoder = LabelEncoder() label_encoder = label_encoder.fit(Y) label_encoded_class = label_encoder.transform(Y) label_encoded_class # one hot encoder col_mat = label_encoded_class.reshape(-1,1) one_hot_encoder = OneHotEncoder() one_hot_encoder =...
_____no_output_____
MIT
Iris.ipynb
keshav11/iris
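The encoding step above can be sketched end to end on a toy label column (the class names below are illustrative stand-ins, not read from the notebook's data file):

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

# Toy class labels standing in for the iris 'class' column.
labels = np.array(['setosa', 'versicolor', 'setosa', 'virginica'])

# LabelEncoder maps each class name to an integer 0..n_classes-1
# (classes are ordered alphabetically).
label_encoder = LabelEncoder().fit(labels)
encoded = label_encoder.transform(labels)

# OneHotEncoder expects a 2-D array; .toarray() densifies the sparse result.
one_hot = OneHotEncoder().fit_transform(encoded.reshape(-1, 1)).toarray()

# inverse_transform recovers the original class names from the integers.
recovered = label_encoder.inverse_transform(encoded)
```

The `.toarray()` call keeps the sketch compatible with both older and newer scikit-learn versions, since the default sparse/dense behavior of `OneHotEncoder` has changed over releases.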
Classifiers
# partition data x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=11) # using svm from sklearn.svm import SVC svm = SVC(kernel='linear') svm.fit(x_train, y_train) s = svm.score(x_test, y_test) print('svm:', s) # KNN from sklearn.neighbors import KNeighborsClassifier nearest_n = K...
Gaussing Naive Bayes: 0.88
MIT
Iris.ipynb
keshav11/iris
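A self-contained version of the classifier comparison, using scikit-learn's bundled copy of the iris data so it runs without the CSV file (the `random_state` mirrors the cell above, but exact scores may differ from the printed output):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

# Load iris from scikit-learn's bundled datasets.
X, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=11)

# Fit each classifier and score it on the held-out split.
scores = {}
for name, clf in [('svm', SVC(kernel='linear')), ('gnb', GaussianNB())]:
    clf.fit(x_train, y_train)
    scores[name] = clf.score(x_test, y_test)
```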
import tensorflow as tf tf.test.gpu_device_name() from google.colab import drive drive.mount('/content/drive') import os if 'COLAB_TPU_ADDR' not in os.environ: print('Not connected to TPU') else: print("Connected to TPU") from keras.layers import Input, Conv2D, Lambda, merge, Dense, Flatten,MaxPooling2D...
_____no_output_____
MIT
model1.ipynb
Debdutta0507/Deep-Single-Shot-Musical-Instrument-IdentificationUsing-Time-Frequency-Localized-Features
KNN
import numpy as np def euc(x, y): return np.linalg.norm(x - y) def KNN(X, y, sample, k=3): distances = [] # calculate every distance for i, x in enumerate(X): distances.append(euc(sample, x)) # get the k smallest distances (sorted() copies the list, so the # original order is preserved for index lookup) d_ord = sorted(distances) neigh_dists ...
_____no_output_____
MIT
KNN.ipynb
Sblbl/ML_implementations
Examples
X = np.array([ [0.15, 0.35], [0.15, 0.28], [0.12, 0.2], [0.1, 0.32], [0.06, 0.25] ]) y = np.array([1, 2, 2, 3, 3]) sample = np.array([0.1, 0.25]) KNN(X, y, sample, k=3) KNN(X, y, sample, k=1)
Neighbours: [array([0.15, 0.35])] of classes: [1]
MIT
KNN.ipynb
Sblbl/ML_implementations
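Since the cell above is truncated, here is a complete minimal sketch of the same idea. Note that `np.argsort` is used to find the nearest indices: sorting a reference to the distance list in place and then indexing the original positions is a common aliasing pitfall in this kind of code.

```python
import numpy as np
from collections import Counter

def euc(x, y):
    return np.linalg.norm(x - y)

def knn_predict(X, y, sample, k=3):
    # Distance from the sample to every training point.
    distances = [euc(sample, x) for x in X]
    # argsort returns the indices of the k nearest points without
    # mutating `distances`.
    nearest = np.argsort(distances)[:k]
    # Majority vote among the k nearest labels.
    return Counter(y[i] for i in nearest).most_common(1)[0][0]

X = np.array([[0.15, 0.35], [0.15, 0.28], [0.12, 0.2],
              [0.1, 0.32], [0.06, 0.25]])
y = np.array([1, 2, 2, 3, 3])
sample = np.array([0.1, 0.25])
```

With this data, the single nearest neighbour of `sample` is `[0.06, 0.25]` (class 3), while the three nearest neighbours vote 2-to-1 for class 2.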
Abstract - Data Understanding - A long answer would be a longer section of text that answers the question - several sentences or a paragraph. - A short answer might be a sentence or phrase, or even, in some cases, a YES/NO. - The short answers are always contained within / a subset of one of the plausible long a...
import numpy as np import pandas as pd import json import matplotlib.pyplot as plt import seaborn as sns from tqdm import tqdm from IPython.core.display import HTML DIR = '../input/tensorflow2-question-answering/' PATH_TRAIN = DIR + 'simplified-nq-train.jsonl' PATH_TEST = DIR + 'simplified-nq-test.jsonl'
_____no_output_____
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
Number of samples in train & test dataset
!wc -l '../input/tensorflow2-question-answering/simplified-nq-train.jsonl' !wc -l '../input/tensorflow2-question-answering/simplified-nq-test.jsonl'
307373 ../input/tensorflow2-question-answering/simplified-nq-train.jsonl 345 ../input/tensorflow2-question-answering/simplified-nq-test.jsonl
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
Load the .jsonl file iteratively One of the most common ways to convert a .jsonl file into a pd.DataFrame is `pd.read_json(FILENAME, orient='records', lines=True)`:
df_test = pd.read_json(PATH_TEST, orient='records', lines=True) df_test.head()
_____no_output_____
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
However, since we have a **HUGE train dataset** for this competition, the Kaggle Notebook's RAM cannot hold the result of this method. Instead, we probably have to load the train dataset iteratively:
json_train = [] with open(PATH_TRAIN, 'rt') as f: for i in tqdm(f): json_train.append(json.loads(i.strip())) df_train = pd.DataFrame(json_train) df_train.head() df_train.iloc[0].loc['long_answer_candidates'][:10] df_train.iloc[9].loc['annotations']
_____no_output_____
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
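As an alternative to the hand-rolled loop, pandas can also stream a .jsonl file in fixed-size chunks via `chunksize` (sketch below uses an in-memory stand-in for the file; the records are illustrative):

```python
import io
import pandas as pd

# In-memory stand-in for a large .jsonl file (three records).
jsonl = io.StringIO(
    '{"example_id": 1, "question_text": "who wrote hamlet"}\n'
    '{"example_id": 2, "question_text": "when was python released"}\n'
    '{"example_id": 3, "question_text": "what is a tfrecord"}\n'
)

# With lines=True and chunksize set, read_json returns an iterator of
# DataFrames instead of materializing the whole file at once.
chunks = pd.read_json(jsonl, orient='records', lines=True, chunksize=2)
total_rows = sum(len(chunk) for chunk in chunks)
```

Each chunk is an ordinary DataFrame, so per-chunk filtering or aggregation can be done before anything large accumulates in memory.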
Data Visualization Obtain data Note that **"short answer" for this competition corresponds to "yes-no answer" in the original dataset**.
N_TRAIN = 307373 n_long_candidates_train = np.zeros(N_TRAIN) t_long_train = np.zeros((N_TRAIN,2)) # [start_token, end_token] t_yesno_train = [] t_short_train = [] top_level = [] with open(PATH_TRAIN, 'rt') as f: for i in tqdm(range(N_TRAIN), position=0, leave=True): dic = json.loads(f.readline()) ...
100%|██████████| 345/345 [00:00<00:00, 2649.06it/s]
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
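The per-record extraction in the loop above can be sketched on a single parsed line. The dict below mimics the annotation structure of the simplified-nq format (the field values are illustrative):

```python
import json

# A minimal record in the simplified-nq format (illustrative values).
line = json.dumps({
    "annotations": [{
        "yes_no_answer": "NONE",
        "long_answer": {"start_token": 1952, "end_token": 2019,
                        "candidate_index": 54},
        "short_answers": [{"start_token": 1960, "end_token": 1969}],
    }],
    "long_answer_candidates": [{"start_token": 0, "end_token": 50,
                                "top_level": True}],
})

dic = json.loads(line)
ann = dic["annotations"][0]
yes_no = ann["yes_no_answer"]                    # 'NONE', 'YES' or 'NO'
long_span = (ann["long_answer"]["start_token"],  # (-1, -1) means no long answer
             ann["long_answer"]["end_token"])
n_short = len(ann["short_answers"])              # number of short answers
n_candidates = len(dic["long_answer_candidates"])
```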
Visualization
plt.style.use('seaborn-darkgrid') plt.style.use('seaborn-poster')
_____no_output_____
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
Take a glance at data
def show_example(example_id): example = df_train[df_train['example_id']==example_id] document_text = example['document_text'].values[0] question = example['question_text'].values[0] annotations = example['annotations'].values[0] la_start_token = annotations[0]['long_answer']['start_token'] la_e...
_____no_output_____
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
To keep the scroll bar short, content from the document text is trimmed at the end. **Long answer is highlighted in dark blue, and short answers are in light blue.**
show_example(5328212470870865242)
_____no_output_____
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
Long answer candidates Some samples have a very large number of long answer candidates (**7946 at maximum!**):
pd.Series(n_long_candidates_train).describe() pd.Series(n_long_candidates_test).describe()
_____no_output_____
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
n_long_candidates_train has a long-tailed distribution, ranging over [1, 7946]
plt.hist(n_long_candidates_train, bins=64, alpha=0.5, color='c', label='train') plt.xlabel('long answer candidates') plt.ylabel('samples') plt.title("Distribution of number of long answer candidates") plt.legend()
_____no_output_____
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
n_long_candidates_train and n_long_candidates_test have similar distributions
plt.hist(n_long_candidates_train[n_long_candidates_train < np.max(n_long_candidates_test)], density=True, bins=64, alpha=0.5, color='c', label='train') plt.hist(n_long_candidates_test, density=True, bins=64, alpha=0.5, color='orange', label='test') plt.xlabel('long answer candidates') plt.ylabel('sample proportion') pl...
_____no_output_____
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
Short answer labels - 63.47% of short answer labels are 'NO ANSWER' - one question can have multiple short answers within the same long answer - when the yes-no answer is not 'NONE' (i.e. yes or no), the short answer is always empty
pd.Series(t_short_train).describe() plt.hist(t_short_train, bins=range(max(t_short_train)), align='left', density=True, rwidth=0.6, color='lightseagreen', label='train') plt.xlabel('short answer') plt.ylabel('sample proportion') plt.title("Normalized distribution of number of short answer labels") plt.legend()
_____no_output_____
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
An example with multiple short answers
show_example(-1413521544318030897)
_____no_output_____
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
When the yes-no answer is not 'NONE' (i.e. yes or no), the short answer is always empty
# no_answer_state[1,:] is the number of train data whose short answer is empty # no_answer_state[:,0] is the number of train data whose yes-no answer is 'YES' OR 'NO' t_short_train = np.array(t_short_train) yesno_answer_state = np.zeros((2,2)) yesno_answer_state[1,1] = np.sum((t_short_train==0) * (np.array([ 1 if t=='N...
_____no_output_____
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
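The 2x2 table built in the truncated cell above can be sketched with plain numpy on toy arrays (illustrative data, not the real labels):

```python
import numpy as np

# Toy labels: number of short answers and the yes-no answer per example.
n_short = np.array([0, 2, 0, 1, 0])
yes_no = np.array(['YES', 'NONE', 'NO', 'NONE', 'NONE'])

short_empty = (n_short == 0)
yesno_given = (yes_no != 'NONE')

# state[i, j]: i indexes "short answer empty?", j indexes "yes-no answer given?".
state = np.zeros((2, 2), dtype=int)
for se, yg in zip(short_empty, yesno_given):
    state[int(se), int(yg)] += 1
```

If the dataset's claim holds, the cell counting "short answer non-empty AND yes-no answer given" stays at zero; here `state[0, 1]` is that cell.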
Number of Yes-no answer labels We can see significant class imbalance in yes-no answer labels.
plt.hist(t_yesno_train, bins=[0,1,2,3], align='left', density=True, rwidth=0.6, color='lightseagreen', label='train') plt.xlabel('yes-no answer') plt.ylabel('sample proportion') plt.title("Normalized distribution of yes-no answer labels") plt.legend()
_____no_output_____
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
Example of NO
df_train[df_train.example_id==3817861884803470204]['annotations'].iloc[0]
_____no_output_____
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
Example of YES
df_train[df_train.example_id==5429746486027633157]['annotations'].iloc[0]
_____no_output_____
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
Long answer labels Description of start token labels:
pd.Series(t_long_train[:,0]).describe()
_____no_output_____
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
Description of end token labels:
pd.Series(t_long_train[:,1]).describe()
_____no_output_____
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
We can see below that nearly half of the long answers have start/end token -1. In other words, 50.5% of long answer labels are '**NO ANSWER**', not only the yes-no labels:
print('{0:.1f}% of start tokens are -1.'.format(np.sum(t_long_train[:,0] < 0) / N_TRAIN * 100)) print('{0:.1f}% of end tokens are -1.'.format(np.sum(t_long_train[:,1] < 0) / N_TRAIN * 100))
50.5% of start tokens are -1. 50.5% of end tokens are -1.
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
**If the start token is -1, the corresponding end token is also -1**:
np.sum(t_long_train[:,0] * t_long_train[:,1] < 0)
_____no_output_____
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
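The one-liner above relies on a sign trick: the product start*end is negative only when exactly one of the two tokens is -1 (assuming valid token indices are strictly positive), so a count of zero confirms the -1s always come in pairs. A toy check with illustrative token pairs:

```python
import numpy as np

# Toy (start_token, end_token) pairs; -1 marks "no long answer".
t_long = np.array([[-1, -1], [10, 25], [-1, -1], [3, 7]])

# start * end < 0 iff exactly one of the pair is -1, so zero matches
# means every -1 start token is paired with a -1 end token.
n_inconsistent = np.sum(t_long[:, 0] * t_long[:, 1] < 0)
```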
The heatmap below tells us that: - when the start token and/or the end token is -1, the yes-no answer is 'NONE' - a yes-no answer of 'NONE' does not always mean that the start token and/or the end token is -1
# no_answer_state[1,:] is the number of train data whose start token and end token are -1 # no_answer_state[:,1] is the number of train data whose yes-no answer is 'NONE' no_answer_state = np.zeros((2,2)) no_answer_state[1,1] = np.sum((t_long_train[:,0]==-1) * (np.array([ 1 if t=='NONE' else 0 for t in t_yesno_train ]...
_____no_output_____
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
top_level About 95% of long answers have top_level set to true
series_top_level = pd.Series(top_level) series_top_level.value_counts(normalize=True)
_____no_output_____
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
Text Word Counts Let us look into word counts of question texts & document texts. Obtain data
q_lens_train = np.zeros(N_TRAIN) d_lens_train = np.zeros(N_TRAIN) short_answer_lens_train = [] long_an...
0%| | 0/345 [00:00<?, ?it/s] 31%|███ | 107/345 [00:00<00:00, 1061.29it/s] 63%|██████▎ | 219/345 [00:00<00:00, 1075.95it/s] 100%|██████████| 345/345 [00:00<00:00, 1030.25it/s]
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
Visualization Word counts of question text Word counts of question texts range over [0, 30]
plt.hist(q_lens_train, density=True, bins=8, alpha=0.5, color='c', label='train') plt.hist(q_lens_test, density=True, bins=8, alpha=0.5, color='orange', label='test') plt.xlabel('question length') plt.ylabel('sample proportion') plt.legend()
_____no_output_____
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
Word counts of document text Word counts of document texts range over [0, 120000]
plt.hist(d_lens_train, density=True, bins=64, alpha=0.5, color='c', label='train') plt.hist(d_lens_test, density=True, bins=64, alpha=0.5, color='orange', label='test') plt.xlabel('document length') plt.ylabel('sample proportion') plt.legend()
_____no_output_____
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
Word counts of short answer text Word counts of short answer texts have a long-tailed distribution, ranging over [1, 250]
series_short = pd.Series(short_answer_lens_train) print(series_short.describe()) series_short.value_counts(normalize=True).plot(kind='line', title="Normalized distribution of short answer length")
count 130233.000000 mean 4.096942 std 5.972028 min 1.000000 25% 2.000000 50% 2.000000 75% 4.000000 max 250.000000 dtype: float64
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
Word counts of long answer text Word counts of long answer texts have a long-tailed distribution, ranging over [5, 123548]
series_long = pd.Series(long_answer_lens_train) print(series_long.describe()) series_long.value_counts(normalize=True).plot(kind='line', title="Normalized distribution of long answer length")
count 152148.000000 mean 384.289849 std 1496.184997 min 5.000000 25% 75.000000 50% 117.000000 75% 192.000000 max 123548.000000 dtype: float64
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
Feature Visualization Answer type labels
import tensorflow as tf raw_dataset = tf.data.TFRecordDataset("../input/bertjointbaseline/nq-train.tfrecords-00000-of-00001") all_answer_types = [] for raw_record in tqdm(raw_dataset, desc="Parsing tfrecord"): f = tf.train.Example.FromString(raw_record.numpy()) all_answer_types.append(f.features.feature["answer_...
short 0.507807 unk 0.341116 long 0.137655 yes 0.008404 no 0.005017 dtype: float64
MIT
notebooks/EDA.ipynb
mikelkl/TF2-QA
Import Assignments From a CSV File In this example, a CSV file containing the locations of potholes will be imported into a Workforce Project as new assignments. Import ArcGIS API for Python Import the `arcgis` library and some modules within it.
import pandas as pd from arcgis.gis import GIS from arcgis.apps import workforce from arcgis.geocoding import geocode
_____no_output_____
Apache-2.0
notebooks/dev_summit_2019/Step 2 - Add Assignments From csv File.ipynb
STEFANIHUNT/workforce-scripts
Connect to Organization And Get The Project Let's connect to ArcGIS Online and find the new Project to add assignments to.
gis = GIS("https://arcgis.com", "workforce_scripts") item = gis.content.get("c765482bd0b9479b9104368da54df90d") project = workforce.Project(item)
_____no_output_____
Apache-2.0
notebooks/dev_summit_2019/Step 2 - Add Assignments From csv File.ipynb
STEFANIHUNT/workforce-scripts
Load the CSV File Let's use the pandas library to read the CSV file and display the potholes.
df = pd.read_csv("assignments.csv") df
_____no_output_____
Apache-2.0
notebooks/dev_summit_2019/Step 2 - Add Assignments From csv File.ipynb
STEFANIHUNT/workforce-scripts