In the following cell, we'll use a visualization utility function we defined above to preview some flower images with their associated labels.
display_9_images_from_dataset(load_dataset(training_filenames))
notebooks/samples/explanations/ai-explanations-image.ipynb
GoogleCloudPlatform/ai-platform-samples
apache-2.0
Create training and validation datasets
def get_batched_dataset(filenames):
    dataset = load_dataset(filenames)
    dataset = dataset.cache()  # This dataset fits in RAM
    dataset = dataset.repeat()
    dataset = dataset.shuffle(2048)
    dataset = dataset.batch(BATCH_SIZE)
    dataset = dataset.prefetch(AUTO)  # prefetch next batch while training (autotune prefetch b...
Build, train, and evaluate the model

In this section we'll define the layers of our model using the Keras Sequential model API. Then we'll run training and evaluation, and finally run some test predictions on the local model.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(kernel_size=3, filters=16, padding='same', activation='relu',
                           input_shape=[*IMAGE_SIZE, 3]),
    tf.keras.layers.Conv2D(kernel_size=3, filters=30, padding='same', activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Conv2D(kernel...
Train the model

Train this on a GPU if you have access (in Colab, from the menu select Runtime --> Change runtime type). On a CPU, it'll take ~30 minutes to run training. On a GPU, it takes ~5 minutes.
EPOCHS = 20  # Train for 60 epochs for higher accuracy, 20 should get you ~75%

history = model.fit(get_training_dataset(), steps_per_epoch=steps_per_epoch, epochs=EPOCHS,
                    validation_data=get_validation_dataset(), validation_steps=validation_steps)
Get predictions on local model and visualize them
# Randomize the input so that you can execute multiple times to change results
permutation = np.random.permutation(8*20)
some_flowers, some_labels = (some_flowers[permutation], some_labels[permutation])

predictions = model.predict(some_flowers, batch_size=16)
evaluations = model.evaluate(some_flowers, some_labels, bat...
Export the model as a TF 1 SavedModel

AI Explanations currently supports TensorFlow 1.x. In order to deploy our model in a format compatible with AI Explanations, we'll follow the steps below to convert our Keras model to a TF Estimator, and then use the export_saved_model method to generate the SavedModel and save it ...
## Convert our Keras model to an estimator and then export to SavedModel
keras_estimator = tf.keras.estimator.model_to_estimator(keras_model=model, model_dir='savedmodel_export')
The decode_img_bytes function below handles converting image bytes (the format our served model will expect) into the [192,192,3] float matrix our model expects. For image explanation models, we recommend this approach rather than sending an image as a float array from the client.
def decode_img_bytes(img_bytes, height, width, color_depth):
    features = tf.squeeze(img_bytes, axis=1, name='input_squeeze')
    float_pixels = tf.map_fn(
        lambda img_string: tf.io.decode_image(
            img_string,
            channels=color_depth,
            dtype=tf.float32
        ),
        features,
        dtype=tf.float32,
        ...
Generate the metadata for AI Explanations

In order to deploy this model to AI Explanations, we need to create an explanation_metadata.json file with information about our model inputs, outputs, and baseline. For image models, using [0,1] as your input baseline represents black and white images. In this case we're u...
random_baseline = np.random.rand(192,192,3)
explanation_metadata = {
    "inputs": {
        "data": {
            "input_tensor_name": "input_pixels:0",
            "modality": "image",
            "input_baselines": [random_baseline.tolist()]
        }
    },
    "outputs": {
        "probability": {
            "output_tensor_name": "de...
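Setting AI Platform specifics aside, writing the metadata out is a plain JSON dump. A minimal, self-contained sketch follows; the tensor names and the output filename here are assumptions for illustration and must match your actual exported model:

```python
import json

import numpy as np

# Random-noise baseline, matching the notebook's [192,192,3] input shape
random_baseline = np.random.rand(192, 192, 3)

explanation_metadata = {
    "inputs": {
        "data": {
            "input_tensor_name": "input_pixels:0",  # assumed tensor name
            "modality": "image",
            "input_baselines": [random_baseline.tolist()],
        }
    },
    "outputs": {
        # assumed output tensor name for illustration
        "probability": {"output_tensor_name": "dense/Softmax:0"}
    },
    "framework": "tensorflow",
}

# AI Explanations expects this file alongside the SavedModel
with open("explanation_metadata.json", "w") as f:
    json.dump(explanation_metadata, f)
```

The baseline is serialized with `tolist()` because JSON cannot encode NumPy arrays directly.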
Deploy model to AI Explanations

In this step we'll use the gcloud CLI to deploy our model to AI Explanations.

Create the model
MODEL = 'flowers'

# Create the model if it doesn't exist yet (you only need to run this once)
!gcloud ai-platform models create $MODEL --enable-logging --region=$REGION
Create explainable model versions

For image models, we offer two choices for explanation methods:
* Integrated Gradients (IG)
* XRAI

You can find more info on each method in the documentation. Below, we'll show you how to deploy a version with both so that you can compare results. If you already know which explanatio...
# Each time you create a version the name should be unique
IG_VERSION = 'v_ig'

# Create the version with gcloud
!gcloud beta ai-platform versions create $IG_VERSION --region=$REGION \
  --model $MODEL \
  --origin $export_path \
  --runtime-version 1.15 \
  --framework TENSORFLOW \
  --python-version 3.7 \
  --machine-type n1-sta...
Deploy an explainable model with XRAI
# Each time you create a version the name should be unique
XRAI_VERSION = 'v_xrai'

# Create the XRAI version with gcloud
!gcloud beta ai-platform versions create $XRAI_VERSION --region=$REGION \
  --model $MODEL \
  --origin $export_path \
  --runtime-version 1.15 \
  --framework TENSORFLOW \
  --python-version 3.7 \
  --machine-...
Get predictions and explanations on deployed model

Here we'll prepare some test images to send to our model. Then we'll use the AI Platform Prediction API to get the model's predicted class along with the explanation for each image.
# Download test flowers from public bucket
!mkdir flowers
!gsutil -m cp gs://flowers_model/test_flowers/* ./flowers

# Resize the images to what our model is expecting (192,192)
test_filenames = []
for i in os.listdir('flowers'):
    img_path = 'flowers/' + i
    with PIL.Image.open(img_path) as ex_img:
        resize_img = e...
The predict_json method below calls our deployed model with the specified image data, model name, and version.
# This is adapted from a sample in the docs
# Find it here: https://cloud.google.com/ai-platform/prediction/docs/online-predict#python
def predict_json(project, model, instances, version=None):
    """Send json data to a deployed model for prediction.

    Args:
        project (str): project where the AI Platform Mod...
Make an AI Explanations request with gcloud

First we'll look at the explanations results for IG, then we'll compare with XRAI. If you only deployed one model above, run only the cell for that explanation method.
# IG EXPLANATIONS
ig_response = predict_json(PROJECT_ID, MODEL, instances, IG_VERSION)

# XRAI EXPLANATIONS
xrai_response = predict_json(PROJECT_ID, MODEL, instances, XRAI_VERSION)
See our model's predicted classes without explanations

First, let's preview the images and see what our model predicted for them. Why did the model predict these classes? We'll see explanations in the next section.
from io import BytesIO
import matplotlib.image as mpimg
import base64

# Note: change the `ig_response` variable below if you didn't deploy an IG model
for i, val in enumerate(ig_response['explanations']):
    class_name = CLASSES[val['attributions_by_label'][0]['label_index']]
    confidence_score = str(round(val['attr...
Visualize the images with AI Explanations

Now let's look at the explanations. The images returned show the explanations for only the top class predicted by the model. This means that if one of our model's predictions is incorrect, the pixels you see highlighted are for the incorrect class. For example, if the model p...
import io

for idx, flower in enumerate(ig_response['explanations']):
    predicted_flower = CLASSES[flower['attributions_by_label'][0]['label_index']]
    confidence = flower['attributions_by_label'][0]['example_score']
    print('Predicted flower: ', predicted_flower)
    b64str = flower['attributions_by_label'][0]['attribut...
Let's compare this with the image explanations we get from our XRAI version.
for idx, flower in enumerate(xrai_response['explanations']):
    predicted_flower = CLASSES[flower['attributions_by_label'][0]['label_index']]
    confidence = flower['attributions_by_label'][0]['example_score']
    print('Predicted flower: ', predicted_flower)
    b64str = flower['attributions_by_label'][0]['attributions']['d...
Sanity check our explanations

To better make sense of the feature attributions we're getting, we should compare them with our model's baseline. In the case of image models, the baseline_score returned by AI Explanations is the score our model would give an image input with the baseline we specified. The baseline will b...
for i, val in enumerate(ig_response['explanations']):
    baseline_score = val['attributions_by_label'][0]['baseline_score']
    predicted_score = val['attributions_by_label'][0]['example_score']
    print('Baseline score: ', baseline_score)
    print('Predicted score: ', predicted_score)
    print('Predicted - Baseline: ', pred...
As another sanity check, we'll also look at the explanations for this model's baseline image: an image array of randomly generated values using np.random. First, we'll convert the same np.random baseline array we generated above to a base64 string and preview it.
# Convert our baseline from above to a base64 string
rand_test_img = PIL.Image.fromarray((random_baseline * 255).astype('uint8'))
buffer = BytesIO()
rand_test_img.save(buffer, format="BMP")
new_image_string = base64.b64encode(buffer.getvalue()).decode("utf-8")

# Preview it
plt.imshow(rand_test_img)

# Save the image t...
The difference between your model's predicted score and the baseline score for this image should be close to 0. Run the following cell to confirm. If there is a significant difference between these two values, you may need to increase the number of integral steps used when you deploy your model.
baseline_score = sanity_check_resp['explanations'][0]['attributions_by_label'][0]['baseline_score']
example_score = sanity_check_resp['explanations'][0]['attributions_by_label'][0]['example_score']
print(abs(baseline_score - example_score))
Cleaning up

To clean up all GCP resources used in this project, you can delete the GCP project you used for the tutorial. Alternatively, you can clean up individual resources by running the following commands:
# Delete model version resources
!gcloud ai-platform versions delete $IG_VERSION --quiet --model $MODEL
!gcloud ai-platform versions delete $XRAI_VERSION --quiet --model $MODEL

# Delete model resource
!gcloud ai-platform models delete $MODEL --quiet

# Delete Cloud Storage objects that were created
!gsutil -m rm -r $BU...
Rabbit Redux

This notebook starts with a version of the rabbit population growth model and walks through some steps for extending it. In the original model, we treat all rabbits as adults; that is, we assume that a rabbit is able to breed in the season after it is born. In this notebook, we extend the model to include...
system = System(t0 = 0,
                t_end = 10,
                adult_pop0 = 10,
                birth_rate = 0.9,
                death_rate = 0.5)
system
notebooks/rabbits2.ipynb
AllenDowney/ModSimPy
mit
Now update run_simulation with the following changes:
- Add a second TimeSeries, named juveniles, to keep track of the juvenile population, and initialize it with juvenile_pop0.
- Inside the for loop, compute the number of juveniles that mature during each time step.
- Also inside the for loop, add a line that stores t...
def run_simulation(system):
    """Runs a proportional growth model.

    Adds TimeSeries to `system` as `results`.

    system: System object with t0, t_end, p0, birth_rate and death_rate
    """
    adults = TimeSeries()
    adults[system.t0] = system.adult_pop0

    for t in linrange(system.t...
Test your changes in run_simulation:
run_simulation(system)
system.adults
Next, update plot_results to plot both the adult and juvenile TimeSeries.
def plot_results(system, title=None):
    """Plot the estimates and the model.

    system: System object with `results`
    """
    newfig()
    plot(system.adults, 'bo-', label='adults')
    decorate(xlabel='Season',
             ylabel='Rabbit population',
             title=title)
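For comparison, here is one possible shape of the updated function using plain matplotlib instead of modsim's newfig/plot/decorate helpers. This is a sketch only: it assumes your updated run_simulation stored both `system.adults` and `system.juveniles` as simple index-to-value mappings (e.g. dicts), which may differ from your own solution.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; drop this line in a notebook
import matplotlib.pyplot as plt

def plot_results(system, title=None):
    """Plot adult and juvenile populations over time.

    Assumes `system.adults` and `system.juveniles` map
    season index -> population (hypothetical attributes).
    """
    fig, ax = plt.subplots()
    ax.plot(list(system.adults.keys()), list(system.adults.values()),
            'bo-', label='adults')
    ax.plot(list(system.juveniles.keys()), list(system.juveniles.values()),
            'gs-', label='juveniles')
    ax.set_xlabel('Season')
    ax.set_ylabel('Rabbit population')
    if title:
        ax.set_title(title)
    ax.legend()
    return ax
```

The key change from the original is simply the second plot call with its own label, so the legend distinguishes the two series.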
And test your updated version of plot_results.
plot_results(system, title='Proportional growth model')
First we will look at the purely probabilistic case and a simple test problem. We set up the uncertain parameters and create the horsetail matching object as usual.
u1 = UniformParameter(lower_bound=-1, upper_bound=1)
u2 = UniformParameter(lower_bound=-1, upper_bound=1)
input_uncertainties = [u1, u2]
notebooks/Gradients.ipynb
lwcook/horsetail-matching
mit
Horsetail matching uses the same syntax for specifying a gradient as the scipy.minimize function: through the 'jac' argument. If 'jac' is True, then horsetail matching expects the qoi function to also return the jacobian of the qoi (the gradient with respect to the design variables). Alternatively, 'jac' is a function th...
def fun_qjac(x, u):
    return TP1(x, u, jac=True)   # Returns both qoi and its gradient

def fun_q(x, u):
    return TP1(x, u, jac=False)  # Returns just the qoi

def fun_jac(x, u):
    return TP1(x, u, jac=True)[1]  # Returns just the gradient

theHM = HorsetailMatching(fun_qjac, input_uncertainties, jac=True, method...
The gradient can be evaluated using either the 'empirical' or 'kernel' based methods; however, the 'empirical' method can sometimes give discontinuous gradients, so in general the 'kernel' based method is preferred. Note that when we are using kernels to evaluate the horsetail plot (with the method 'kernel'), it is i...
from scipy.optimize import minimize

solution = minimize(theHM.evalMetric, x0=[1,1], method='BFGS', jac=True)
print(solution)

(x1, y1, t1), (x2, y2, t2), CDFs = theHM.getHorsetail()
for (x, y) in CDFs:
    plt.plot(x, y, c='grey', lw=0.5)
plt.plot(x1, y1, 'r')
plt.plot(t1, y1, 'k--')
plt.xlim([-1, 5])
plt.ylim([0, 1]...
Once again the optimizer has found the optimum where the CDF is a step function, but this time in fewer iterations. We can also use gradients for optimization under mixed uncertainties in exactly the same way. The example below performs the optimization of TP2 like in the mixed uncertainties tutorial, but this time us...
def fun_qjac(x, u):
    return TP2(x, u, jac=True)  # Returns both qoi and its gradient

u1 = UniformParameter()
u2 = IntervalParameter()

theHM = HorsetailMatching(fun_qjac, u1, u2, jac=True, method='kernel',
                        samples_prob=500, samples_int=50,
                        integration_points=numpy.linspace(-20, 100, 3000),...
To plot the optimum solution...
upper, lower, CDFs = theHM.getHorsetail()
(q1, h1, t1) = upper
(q2, h2, t2) = lower

for CDF in CDFs:
    plt.plot(CDF[0], CDF[1], c='grey', lw=0.05)
plt.plot(q1, h1, 'r')
plt.plot(q2, h2, 'r')
plt.plot(t1, h1, 'k--')
plt.plot(t2, h2, 'k--')
plt.xlim([0, 15])
plt.ylim([0, 1])
plt.xlabel('Quantity of Interest')
plt.show...
A variety of tools employing different methodologies have been developed over the years to compute multi-group cross sections for certain applications, including NJOY (LANL), MC$^2$-3 (ANL), and Serpent (VTT). The openmc.mgxs Python module is designed to leverage OpenMC's tally system to calculate multi-group cross sec...
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

import openmc
import openmc.mgxs as mgxs
docs/source/pythonapi/examples/mgxs-part-i.ipynb
samuelshaner/openmc
mit
First we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.
# Instantiate some Nuclides
h1 = openmc.Nuclide('H1')
o16 = openmc.Nuclide('O16')
u235 = openmc.Nuclide('U235')
u238 = openmc.Nuclide('U238')
zr90 = openmc.Nuclide('Zr90')
With our material, we can now create a Materials object that can be exported to an actual XML file.
# Instantiate a Materials collection and export to XML
materials_file = openmc.Materials([inf_medium])
materials_file.export_to_xml()
We now must create a geometry that is assigned a root universe and export it to XML.
# Create Geometry and set root Universe
openmc_geometry = openmc.Geometry()
openmc_geometry.root_universe = root_universe

# Export to "geometry.xml"
openmc_geometry.export_to_xml()
Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 2500 particles.
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 2500

# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': True}

# Create an initial uniform spatia...
Now we are ready to generate multi-group cross sections! First, let's define a 2-group structure using the built-in EnergyGroups class.
# Instantiate a 2-group EnergyGroups object
groups = mgxs.EnergyGroups()
groups.group_edges = np.array([0., 0.625, 20.0e6])
We can now use the EnergyGroups object, along with our previously created materials and geometry, to instantiate some MGXS objects from the openmc.mgxs module. In particular, the following are subclasses of the generic and abstract MGXS class:
- TotalXS
- TransportXS
- NuTransportXS
- AbsorptionXS
- CaptureXS
- FissionXS
- NuFissio...
# Instantiate a few different cross sections
total = mgxs.TotalXS(domain=cell, groups=groups)
absorption = mgxs.AbsorptionXS(domain=cell, groups=groups)
scattering = mgxs.ScatterXS(domain=cell, groups=groups)
The AbsorptionXS object includes tracklength tallies for the 'absorption' and 'flux' scores in the 2-group structure in cell 1. Now that each MGXS object contains the tallies that it needs, we must add these tallies to a Tallies object to generate the "tallies.xml" input file for OpenMC.
# Instantiate an empty Tallies object
tallies_file = openmc.Tallies()

# Add total tallies to the tallies file
tallies_file += total.tallies.values()

# Add absorption tallies to the tallies file
tallies_file += absorption.tallies.values()

# Add scattering tallies to the tallies file
tallies_file += scattering.tallies...
In addition to the statepoint file, our simulation also created a summary file which encapsulates information about the materials and geometry. By default, a Summary object is automatically linked when a StatePoint is loaded. This is necessary for the openmc.mgxs module to properly process the tally data. The statepoin...
# Load the tallies from the statepoint into each MGXS object
total.load_from_statepoint(sp)
absorption.load_from_statepoint(sp)
scattering.load_from_statepoint(sp)
Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
print(mnist.train.images.shape[1])
autoencoder/Simple_Autoencoder.ipynb
spencer2211/deep-learning
mit
We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed rep...
# Size of the encoding layer (the hidden layer)
encoding_dim = 32  # feel free to change this value

image_size = mnist.train.images.shape[1]

# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, shape=[None, image_size], name="inputs")
targets_ = tf.placeholder(tf.float32, shape=[None, image_size], name...
Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss. Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straight...
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
    for ii in range(mnist.train.num_examples//batch_size):
        batch = mnist.train.next_batch(batch_size)
        feed = {inputs_: batch[0], targets_: batch[0]}
        batch_cost, _ = sess.run([cost, opt], feed_dict=fe...
Ceramic Resonator
data_ceramic = np.genfromtxt('temp_and_freq_error_data_ceramic')
freq_error_ceramic = data_ceramic[:3600, 0]
temp_ceramic = data_ceramic[:, 1]
t = np.arange(0, len(freq_error_ceramic))

fig, ax1 = plt.subplots()
plt.title("Clock frequency accuracy at 26 °C (ceramic resonator)")
plt.grid()
ax1.plot(t / 60, freq_error_...
jupyter_notebooks/clock_frequency_accuracy.ipynb
franzpl/StableGrid
mit
Standard Deviation
std_dev_ceramic = np.std(freq_error_ceramic)
Average
average_ceramic = np.average(freq_error_ceramic)
ppm_average_ceramic = (average_ceramic / 16000000) * 10**6
ppm_average_ceramic
The frequency tolerance of the ceramic resonator used on the Arduino UNO is approx. 500 ppm. Consequently, this means for mains frequency measurements:
(50 * 500 * 10**-6) * 1000  # (mains frequency * ppm) * 1000, in mHz
np.max(freq_error_ceramic)
np.min(freq_error_ceramic)
The error for mains frequency measurements is 25 mHz.

Ceramic resonator behaviour under the influence of increasing temperature
data_temp_ceramic = np.genfromtxt('ceramic_behaviour_increasing_temperature')
freq_error_temp_ceramic = data_temp_ceramic[:, 0]
increasing_temp_ceramic = data_temp_ceramic[:, 1]

fig, ax3 = plt.subplots()
plt.title("Clock frequency stability (ceramic resonator)")
plt.grid()
ax3.plot(increasing_temp_ceramic, freq_error...
Quartz
data_quartz = np.genfromtxt('temp_and_freq_error_data_quartz')
freq_error_quartz = data_quartz[:3600, 0]
temp_quartz = data_quartz[:, 1]
t = np.arange(0, len(freq_error_quartz))

fig, ax5 = plt.subplots()
plt.title("Clock frequency accuracy at 26 °C (quartz)")
plt.grid()
ax5.plot(t / 60, freq_error_quartz, color='b')...
Standard Deviation
std_dev_quartz = np.std(freq_error_quartz)
Average
average_quartz = np.average(freq_error_quartz)
ppm_average_quartz = (average_quartz / 16000000) * 10**6
ppm_average_quartz
average_quartz
np.max(freq_error_quartz)
np.min(freq_error_quartz)
Quartz behaviour under the influence of increasing temperature
data_temp_quartz = np.genfromtxt('quartz_behaviour_increasing_temperature.txt')
freq_error_temp_quartz = data_temp_quartz[:, 0]
increasing_temp_quartz = data_temp_quartz[:, 1]

fig, ax7 = plt.subplots()
plt.title("Clock frequency stability (quartz)")
plt.grid()
ax7.plot(increasing_temp_quartz, freq_error_temp_quartz, ...
An Interpreter for a Simple Programming Language

In this notebook we develop an interpreter for a small programming language. The grammar for this language is stored in the file Pure.g4.
!cat -n Pure.g4
ANTLR4-Python/Interpreter/Interpreter.ipynb
Danghor/Formal-Languages
gpl-2.0
The grammar shown above contains only skip actions. The corresponding grammar that is enriched with actions is stored in the file Simple.g4. An example program that conforms to this grammar is stored in the file sum.sl.
!cat sum.sl
The file Simple.g4 contains a parser for the language described by the grammar Pure.g4. This parser returns an abstract syntax tree. This tree is represented as a nested tuple.
!cat -n Simple.g4
The parser shown above will transform the program sum.sl into the nested tuple stored in the file sum.ast.
!cat sum.ast

!antlr4 -Dlanguage=Python3 Simple.g4

from SimpleLexer import SimpleLexer
from SimpleParser import SimpleParser
import antlr4

%run ../AST-2-Dot.ipynb
The function main takes one parameter file. This parameter is a string specifying a program file. The function reads the program contained in this file and executes it.
def main(file):
    with open(file, 'r') as handle:
        program_text = handle.read()
    input_stream = antlr4.InputStream(program_text)
    lexer = SimpleLexer(input_stream)
    token_stream = antlr4.CommonTokenStream(lexer)
    parser = SimpleParser(token_stream)
    result = parser.progra...
The function execute_list takes two arguments:
- Statement_List is a list of statements,
- Values is a dictionary assigning integer values to variable names.

The function executes the statements in Statement_List. If an assignment statement is executed, the dictionary Values is updated.
def execute_list(Statement_List, Values={}):
    for stmnt in Statement_List:
        execute(stmnt, Values)
The function execute takes two arguments:
- stmnt is a statement,
- Values is a dictionary assigning integer values to variable names.

The function executes the statement stmnt. If an assignment statement is executed, the dictionary Values is updated.
L = [1, 2, 3, 4, 5]
a, b, *R = L
a, b, R

def execute(stmnt, Values):
    op = stmnt[0]
    if stmnt == 'program':
        pass
    elif op == ':=':
        _, var, value = stmnt
        Values[var] = evaluate(value, Values)
    elif op == 'read':
        _, var = stmnt
        Values[var] = int(input())
    elif op == 'pr...
The function evaluate takes two arguments:
- expr is a logical expression or an arithmetic expression,
- Values is a dictionary assigning integer values to variable names.

The function evaluates the given expression and returns its value.
def evaluate(expr, Values):
    if isinstance(expr, int):
        return expr
    if isinstance(expr, str):
        return Values[expr]
    op = expr[0]
    if op == '==':
        _, lhs, rhs = expr
        return evaluate(lhs, Values) == evaluate(rhs, Values)
    if op == '<':
        _, lhs, rhs = expr
        retur...
Lists

A list is a data structure that holds items of different types. Think of a shopping list. Items in the list can be accessed using a zero-based index. You will use these when you want to add more data and access the data based on its position in the list.
list_items = ["milk", "cereal", "banana", 22.5, [1,2,3]]  ## A list can contain another list and items of different types
print list_items
print "3rd item in the list: ", list_items[2]  # Zero based index starts from 0 so 3rd item will have index 2
Lecture Notebooks/Getting Started.ipynb
napsternxg/DataMiningPython
gpl-3.0
Sets

Like a list, but stores only unique items which are hashable (think basic data types like strings and ints, not lists; more on this later). Super useful for checking whether an item is already in the set. Items are not indexed, so they can only be added or removed. You will use these when you want to keep track of unique ...
set_items = set([1, 2, 3, 1])
print set_items
print "Is 1 in set_items: ", 1 in set_items
print "Is 10 in set_items: ", 10 in set_items
Dictionaries

Like sets, but they can also map a value to each unique item. Essentially, a dictionary stores key-value pairs, which are useful for fast lookup of items. Think of a telephone directory or shopping catalogue. Keys should be of the same type as items in sets, but values can be anything. You will use these when you want to keep uni...
item_details = {
    "milk": {
        "brand": "Amul",
        "quantity": 2.5,
        "cost": 10
    },
    "chocolate": {
        "brand": "Cadbury",
        "quantity": 1,
        "cost": 5
    },
}
print item_details
print "What is the brand of milk: ", item_details["milk"]["brand"]
print "What is the co...
Functions

Using a function is handy when you need to repeat something over and over again. A function can take arguments and return some variables. E.g. if you want to fetch tweets using different queries, you can define a function that takes the query and gives you the tweets for that query as output. You...
def get_items_from_file(filename):
    data = []
    with open(filename) as fp:
        for line in fp:
            line = line.strip().split(" ")
            data.append(line)
    return data

print "Data in file data/temp1.txt"
print get_items_from_file("../data/temp1.txt")
print "Data in file data/temp2.txt"
print ...
Loading Data
from scipy.io import arff

data, meta = arff.loadarff("../data/iris.arff")
data.shape, meta
data[0]
Pandas

Pandas is a wonderful library for working with tabular data in Python. It can read csv files easily and represents them as dataframes. Think of it like Excel, but faster and without a GUI.
import pandas as pd

df_iris = pd.DataFrame(data, columns=meta.names())
df_iris.head()
print "The shape of iris data is: ", df_iris.shape
print "Show how many instances are of each class: "
df_iris["class"].value_counts()
df_iris["sepallength"].hist(bins=10)
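`value_counts` and `hist` are just the start; `groupby` lets you aggregate per class. A minimal sketch (Python 3 syntax) on a toy frame, analogous to grouping `df_iris` by `"class"`:

```python
import pandas as pd

toy = pd.DataFrame({
    "class": ["a", "a", "b"],
    "sepallength": [5.1, 4.9, 6.3],
})

# Mean sepallength per class.
print(toy.groupby("class")["sepallength"].mean())
```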
Lecture Notebooks/Getting Started.ipynb
napsternxg/DataMiningPython
gpl-3.0
Filtering data Filtering parts of the data in pandas is really easy. If you want to edit the filtered data, you need to make a copy of it first.
print "Show data with petalwidth > 2.0"
df_iris[df_iris["petalwidth"] > 2.0]
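As noted above, editing a filtered slice requires an explicit copy; otherwise pandas warns about setting values on a view of the original frame. A sketch (Python 3 syntax) on a toy frame:

```python
import pandas as pd

toy = pd.DataFrame({"petalwidth": [0.2, 2.3, 2.5]})

# .copy() makes the filtered rows independent of the original frame.
wide = toy[toy["petalwidth"] > 2.0].copy()
wide["petalwidth"] = 0.0

print(wide)
print(toy)  # the original frame is unchanged
```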
Lecture Notebooks/Getting Started.ipynb
napsternxg/DataMiningPython
gpl-3.0
Titanic data ``` VARIABLE DESCRIPTIONS: survival Survival (0 = No; 1 = Yes) pclass Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd) name Name sex Sex age Age sibsp Number of Siblings/Spouses Aboard parch Number of Parents/...
df = pd.read_csv("../data/titanic.csv")
df.shape
df.head()
Lecture Notebooks/Getting Started.ipynb
napsternxg/DataMiningPython
gpl-3.0
Plotting data Great for visual inspection. Matplotlib and Seaborn Matplotlib is a low-level Python library which gives you complete control over your plots. Seaborn is a library built on top of matplotlib which makes it easy to create certain types of plots. Works great with pandas.
# We need the line below to show plots directly in the notebook.
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns

sns.set_style("ticks")
sns.set_context("paper")

colors = {
    "Iris-setosa": "red",
    "Iris-versicolor": "green",
    "Iris-virginica": "blue",
}
plt.scatter(df_iris.petallengt...
Lecture Notebooks/Getting Started.ipynb
napsternxg/DataMiningPython
gpl-3.0
Question: Draw a plot of the mean petalwidth of the various categories of Iris classes. It should show the mean petalwidth for petallengths bucketed into [0, 2.5, 4.5, 6.5, 10]. ANSWER BELOW
sns.barplot(x="class", y="petalwidth",
            hue=pd.cut(df_iris.petallength, bins=[0, 2.5, 4.5, 6.5, 10]),
            data=df_iris)
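The `pd.cut` call above buckets a continuous column into intervals; a small sketch of what it produces on its own:

```python
import pandas as pd

lengths = pd.Series([1.0, 3.0, 5.0, 7.0])

# Each value falls into one half-open interval, e.g. 3.0 -> (2.5, 4.5]
buckets = pd.cut(lengths, bins=[0, 2.5, 4.5, 6.5, 10])
print(buckets)
```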
Lecture Notebooks/Getting Started.ipynb
napsternxg/DataMiningPython
gpl-3.0
Next, we must remove the line break at the end of each line, i.e., the '\n'.
vetor = [x[:-1] for x in vetor]

import nltk

vetor = [s.replace('&', '').replace(' - ', '').replace('.', '')
          .replace(',', '').replace('!', '').replace('+', '')
         for s in vetor]
Problema 2/Daniel - Julliana/.ipynb_checkpoints/Amazon-checkpoint.ipynb
jarvis-fga/Projetos
mit
Next, we remove the last few trailing characters, leaving only our comment. Then we convert it to lowercase.
TextosQuebrados = [x[:-4] for x in vetor]
TextosQuebrados = map(lambda X: X.lower(), TextosQuebrados)
# TextosQuebrados = [x.split(' ') for x in TextosQuebrados]
TextosQuebrados = [nltk.tokenize.word_tokenize(frase) for frase in TextosQuebrados]
# X[0]

import nltk
stopwords = nltk.corpus.stopwords.words('english')
...
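The cleanup above chains many `replace` calls; one alternative sketch uses a translation table to delete all the unwanted characters in a single pass (the function name is hypothetical, and this removes '-' everywhere rather than only ' - ' as the notebook does):

```python
def limpar(s, remover='&-.,!+'):
    """Remove every occurrence of the listed characters from a string."""
    return s.translate(str.maketrans('', '', remover))

print(limpar('Great product! +1, really.'))  # 'Great product 1 really'
```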
Problema 2/Daniel - Julliana/.ipynb_checkpoints/Amazon-checkpoint.ipynb
jarvis-fga/Projetos
mit
We decided on the poly SVC approach.
"""from sklearn import svm
from sklearn.model_selection import cross_val_score

k = 10

# Implement poly SVC (note: the kernel here is actually 'linear')
poly_svc = svm.SVC(kernel='linear')
accuracy_poly_svc = cross_val_score(poly_svc, treino_dados, treino_marcacoes, cv=k, scoring='accuracy')
print('poly_svc: ', accuracy_poly_svc.mean())"""
Problema 2/Daniel - Julliana/.ipynb_checkpoints/Amazon-checkpoint.ipynb
jarvis-fga/Projetos
mit
Results - Poly:
All 3: the test was stopped after running for 10 minutes
IMdB: 0.51750234411626805
Amazon: 0.51125019534302241
Yelp: 0.56500429754649173

Results - Linear:
All 3: 0.7745982496802607 (5 minutes)
IMdB: 0.72168288013752147
Amazon: 0.78869745272698855
Yelp: 0.77492342553523996
def fit_and_predict(nome, modelo, treino_dados, treino_marcacoes, teste_dados, teste_marcacoes):
    modelo.fit(treino_dados, treino_marcacoes)
    resultado = modelo.predict(teste_dados)
    acertos = (resultado == teste_marcacoes)
    total_de_acertos = sum(acertos)
    total_de_elementos = len(teste_dados)
    taxa_de_acerto = floa...
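The truncated body above is computing an accuracy percentage; the core calculation can be sketched on its own (the standalone function name is hypothetical, reusing the notebook's variable names):

```python
def taxa_de_acerto(resultado, teste_marcacoes):
    """Percentage of predictions that match the true labels."""
    acertos = [r == t for r, t in zip(resultado, teste_marcacoes)]
    return 100.0 * sum(acertos) / len(teste_marcacoes)

print(taxa_de_acerto([1, 0, 1, 1], [1, 1, 1, 1]))  # 75.0
```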
Problema 2/Daniel - Julliana/.ipynb_checkpoints/Amazon-checkpoint.ipynb
jarvis-fga/Projetos
mit
The S module may be expressed in terms of phi-scaled versions of itself.
S = 4*s3 + s6  # synonyms
S
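With s3 = S·φ⁻³ and s6 = S·φ⁻⁶ (the phi-down scalings assumed from the naming above), the identity S = 4·s3 + s6 reduces to 4φ⁻³ + φ⁻⁶ = 1, which can be checked numerically with the standard library alone:

```python
import math

phi = (1 + math.sqrt(5)) / 2  # the golden ratio

# S = 4*s3 + s6 with s3 = S*phi**-3 and s6 = S*phi**-6
# is equivalent to 4*phi**-3 + phi**-6 == 1.
print(4 * phi**-3 + phi**-6)  # ~1.0
```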
CompoundFiveOctahedra.ipynb
4dsolutions/Python5
mit
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/6335726352/in/photolist-25JufgG-22RBsb5-CwvwpP-vLby2U-ujipN3-f75zUP-aDSfHf-8ryECF-8ryEix-7mcmne-5zTRjp-5zY9gA-7k4Eid-7jZLe2-7k4Ejf-7k4Em5-7jZLhp" title="S Module"><img src="https://farm7.staticflickr.com/6114/6335726352_902009df40.jpg" width="5...
# Koski writes:
FiveOcta = 132 * S + 36 * s3
RD6 = 126*S + 30*s3
# 1/10 of RTW is 6S+6s3
# adding them together gets the result
# Whew
FiveOcta
RD6
FiveOcta2 = 10116 * s9 + 2388 * s12
FiveOcta2
CompoundFiveOctahedra.ipynb
4dsolutions/Python5
mit
David: "So, the beauty of working with these bits is that I can move them around."
for i in 132*S + 36*s3, 564*s3 + 132*s6, 2388*s6 + 564*s9, 10116*s9 + 2388*s12:
    print(i)
CompoundFiveOctahedra.ipynb
4dsolutions/Python5
mit
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/31499584137/in/dateposted-public/" title="icosa_within"><img src="https://farm5.staticflickr.com/4818/31499584137_1babf0215c.jpg" width="500" height="312" alt="icosa_within"></a><script async src="//embedr.flickr.com/assets/client-code.js" char...
cubocta = gmpy2.mpfr(2.5)                # inscribed in Octa 4
sfactor = 2 * gmpy2.sqrt(2) * Ø ** -2
Icosa_within = cubocta * sfactor ** 2    # inscribed in Octa 4
RT_within = 60*S + 60*s3                 # RT anchored by Icosa_within (long diags)
print(Icosa_within * 8)
print(cubocta * 8 + RT_within)  # 20 ...
CompoundFiveOctahedra.ipynb
4dsolutions/Python5
mit
David went on to get the volume of the Compound Five Cuboctahedra, depicted below: <a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/31591317707/in/dateposted-public/" title="Five Cuboctahedrons"><img src="https://farm5.staticflickr.com/4899/31591317707_d6426c753e.jpg" width="500" height="312"...
480*S + 280*s3
CompoundFiveOctahedra.ipynb
4dsolutions/Python5
mit
Let's take this opportunity to contextualize the above in terms of our larger volumes table. We'll need a few more players, namely the E family. The T_factor, about 0.99948333 and discussed in Synergetics, is the linear scale factor by which the E module is reduced to create a T module. This T_factor to the 3rd power...
E = (root2/8) * (Ø ** -3)         # home base Emod
E3 = E * Ø ** 3                   # Emod phi up
e3 = E * Ø ** -3                  # Emod phi down
SuperRT = 120 * E3                # = 20 * Synergetics Constant sqrt(9/8)
F = gmpy2.mpfr(gmpy2.mpq(1, 16))  # space-filling shape, appears in 5RD
T_factor = Ø/root2 * gmpy2.root(gmpy2.mpq(2,...
CompoundFiveOctahedra.ipynb
4dsolutions/Python5
mit
View token tags
Recall that you can obtain a particular token by its index position.
* To view the coarse POS tag use token.pos_
* To view the fine-grained tag use token.tag_
* To view the description of either type of tag use spacy.explain(tag)

<div class="alert alert-success">Note that `token.pos` and `token.tag` ret...
# Print the full text:
print(doc.text)

# Print the fifth word and associated tags:
print(doc[4].text, doc[4].pos_, doc[4].tag_, spacy.explain(doc[4].tag_))
nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/00-POS-Basics.ipynb
rishuatgithub/MLPy
apache-2.0
We can apply this technique to the entire Doc object:
for token in doc:
    print(f'{token.text:{10}} {token.pos_:{8}} {token.tag_:{6}} {spacy.explain(token.tag_)}')
nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/00-POS-Basics.ipynb
rishuatgithub/MLPy
apache-2.0
Coarse-grained Part-of-speech Tags Every token is assigned a POS Tag from the following list: <table><tr><th>POS</th><th>DESCRIPTION</th><th>EXAMPLES</th></tr> <tr><td>ADJ</td><td>adjective</td><td>*big, old, green, incomprehensible, first*</td></tr> <tr><td>ADP</td><td>adposition</td><td>*in, to, during*</td></tr> <t...
doc = nlp(u'I read books on NLP.')
r = doc[1]
print(f'{r.text:{10}} {r.pos_:{8}} {r.tag_:{6}} {spacy.explain(r.tag_)}')

doc = nlp(u'I read a book on NLP.')
r = doc[1]
print(f'{r.text:{10}} {r.pos_:{8}} {r.tag_:{6}} {spacy.explain(r.tag_)}')
nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/00-POS-Basics.ipynb
rishuatgithub/MLPy
apache-2.0
In the first example, with no other cues to work from, spaCy assumed that read was present tense.<br>In the second example the present tense form would be I am reading a book, so spaCy assigned the past tense. Counting POS Tags The Doc.count_by() method accepts a specific token attribute as its argument, and returns a ...
doc = nlp(u"The quick brown fox jumped over the lazy dog's back.")

# Count the frequencies of different coarse-grained POS tags:
POS_counts = doc.count_by(spacy.attrs.POS)
POS_counts
nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/00-POS-Basics.ipynb
rishuatgithub/MLPy
apache-2.0
This isn't very helpful until you decode the attribute ID:
doc.vocab[83].text
nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/00-POS-Basics.ipynb
rishuatgithub/MLPy
apache-2.0
Create a frequency list of POS tags from the entire document Since POS_counts returns a dictionary, we can obtain a list of (tag ID, count) pairs with POS_counts.items().<br>By sorting the list we have access to each tag and its count, in order.
for k, v in sorted(POS_counts.items()):
    print(f'{k}. {doc.vocab[k].text:{5}}: {v}')

# Count the different fine-grained tags:
TAG_counts = doc.count_by(spacy.attrs.TAG)

for k, v in sorted(TAG_counts.items()):
    print(f'{k}. {doc.vocab[k].text:{4}}: {v}')
nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/00-POS-Basics.ipynb
rishuatgithub/MLPy
apache-2.0
<div class="alert alert-success">**Why did the ID numbers get so big?** In spaCy, certain text values are hardcoded into `Doc.vocab` and take up the first several hundred ID numbers. Strings like 'NOUN' and 'VERB' are used frequently by internal operations. Others, like fine-grained tags, are assigned hash values as ne...
# Count the different dependencies:
DEP_counts = doc.count_by(spacy.attrs.DEP)

for k, v in sorted(DEP_counts.items()):
    print(f'{k}. {doc.vocab[k].text:{4}}: {v}')
nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/00-POS-Basics.ipynb
rishuatgithub/MLPy
apache-2.0
Train and deploy XGBoost (Scikit-learn) on Kubeflow from Notebooks This notebook introduces you to using Kubeflow Fairing to train and deploy a model to Kubeflow on Google Kubernetes Engine (GKE) and Google Cloud AI Platform Training. This notebook demonstrates how to: Train an XGBoost model in a local notebook, U...
import argparse
import logging
import sys

import joblib
import pandas as pd
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.impute import SimpleImputer
from xgboost import XGBClassifier

logging.basicConfig(format='%(message)s')
logging.getLogger().setLevel(lo...
examples/kubeflow-fairing-example/Fairing_XGBoost.ipynb
GoogleCloudPlatform/professional-services
apache-2.0
Define the model logic Define a function to split the input file into training and testing datasets.
def gcs_copy(src_path, dst_path):
    import subprocess
    print(subprocess.run(['gsutil', 'cp', src_path, dst_path],
                         stdout=subprocess.PIPE).stdout[:-1].decode('utf-8'))

def gcs_download(src_path, file_name):
    import subprocess
    print(subprocess.run(['gsutil', 'cp', src_path, file_name], stdout=subprocess....
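The `read_input` helper used later to split the data isn't shown in this excerpt; a plausible sketch of it (the function name matches later calls, but the split ratio, label column, and pandas-only approach are assumptions, not the author's code):

```python
import pandas as pd

def read_input(file_name, test_fraction=0.25, label_col="Class", seed=42):
    """Hypothetical sketch: load a CSV and split it into train/test features and labels."""
    df = pd.read_csv(file_name)

    # Hold out a random fraction of rows for testing; the rest is training data.
    test_df = df.sample(frac=test_fraction, random_state=seed)
    train_df = df.drop(test_df.index)

    train_X = train_df.drop(columns=[label_col]).values
    train_y = train_df[label_col].values
    test_X = test_df.drop(columns=[label_col]).values
    test_y = test_df[label_col].values
    return (train_X, train_y), (test_X, test_y)
```

The original notebook likely uses `sklearn.model_selection.train_test_split` (imported above) for this step; the sketch avoids it only to stay self-contained.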
examples/kubeflow-fairing-example/Fairing_XGBoost.ipynb
GoogleCloudPlatform/professional-services
apache-2.0
Define functions to train, evaluate, and save the trained model.
def train_model(train_X, train_y, test_X, test_y, n_estimators, learning_rate):
    """Train the model using XGBClassifier."""
    model = XGBClassifier(n_estimators=n_estimators, learning_rate=learning_rate)
    model.fit(train_X, ...
examples/kubeflow-fairing-example/Fairing_XGBoost.ipynb
GoogleCloudPlatform/professional-services
apache-2.0
Define a class for your model, with methods for training and prediction.
class FraudServe(object):
    def __init__(self):
        self.train_input = GCP_Bucket + "train_fraud.csv"
        self.n_estimators = 50
        self.learning_rate = 0.1
        self.model_file = "trained_fraud_model.joblib"
        self.model = None

    def train(self):
        (train_X, train_y), (test_X, tes...
examples/kubeflow-fairing-example/Fairing_XGBoost.ipynb
GoogleCloudPlatform/professional-services
apache-2.0
Train an XGBoost model in a notebook Call FraudServe().train() to train your model, and then evaluate and save your trained model.
FraudServe().train()
examples/kubeflow-fairing-example/Fairing_XGBoost.ipynb
GoogleCloudPlatform/professional-services
apache-2.0
Make Use of Fairing Specify an image registry that will hold the image built by Fairing
# In this demo we use gsutil, so we build a special base image with the Google Cloud SDK installed
base_image = 'gcr.io/{}/fairing-predict-example:latest'.format(GCP_PROJECT)

!docker build --build-arg PY_VERSION=3.6.4 . -t {base_image}
!docker push {base_image}

DOCKER_REGISTRY = 'gcr.io/{}/fairing-job-xgboost'...
examples/kubeflow-fairing-example/Fairing_XGBoost.ipynb
GoogleCloudPlatform/professional-services
apache-2.0
Train an XGBoost model remotely on Kubeflow Import the TrainJob and GKEBackend classes. Kubeflow Fairing packages the FraudServe class, the training data, and the training job's software prerequisites as a Docker image. Then Kubeflow Fairing deploys and runs the training job on Kubeflow.
from fairing import TrainJob
from fairing.backends import GKEBackend

train_job = TrainJob(FraudServe, BASE_IMAGE, input_files=["requirements.txt"],
                     docker_registry=DOCKER_REGISTRY, backend=GKEBackend())
train_job.submit()
examples/kubeflow-fairing-example/Fairing_XGBoost.ipynb
GoogleCloudPlatform/professional-services
apache-2.0
Train an XGBoost model remotely on Cloud ML Engine Import the TrainJob and GCPManagedBackend classes. Kubeflow Fairing packages the FraudServe class, the training data, and the training job's software prerequisites as a Docker image. Then Kubeflow Fairing deploys and runs the training job on Cloud ML Engine.
from fairing import TrainJob
from fairing.backends import GCPManagedBackend

train_job = TrainJob(FraudServe, BASE_IMAGE, input_files=["requirements.txt"],
                     docker_registry=DOCKER_REGISTRY, backend=GCPManagedBackend())
train_job.submit()
examples/kubeflow-fairing-example/Fairing_XGBoost.ipynb
GoogleCloudPlatform/professional-services
apache-2.0
Deploy the trained model to Kubeflow for predictions Import the PredictionEndpoint and KubeflowGKEBackend classes. Kubeflow Fairing packages the FraudServe class, the trained model, and the prediction endpoint's software prerequisites as a Docker image. Then Kubeflow Fairing deploys and runs the prediction endpoint on ...
from fairing import PredictionEndpoint
from fairing.backends import KubeflowGKEBackend

# trained_fraud_model.joblib is exported during the local training above
endpoint = PredictionEndpoint(FraudServe, BASE_IMAGE,
                              input_files=['trained_fraud_model.joblib', "requirements.txt"],
                              docker_re...
examples/kubeflow-fairing-example/Fairing_XGBoost.ipynb
GoogleCloudPlatform/professional-services
apache-2.0
Deploy to GCP
# Deploy model to GCP
# from fairing.deployers.gcp.gcpserving import GCPServingDeployer
# deployer = GCPServingDeployer()
# deployer.deploy(VERSION_DIR, MODEL_NAME, VERSION_NAME)
examples/kubeflow-fairing-example/Fairing_XGBoost.ipynb
GoogleCloudPlatform/professional-services
apache-2.0
Call the prediction endpoint Create a test dataset, then call the endpoint on Kubeflow for predictions.
(train_X, train_y), (test_X, test_y) = read_input(GCP_Bucket + "train_fraud.csv")
endpoint.predict_nparray(test_X)
examples/kubeflow-fairing-example/Fairing_XGBoost.ipynb
GoogleCloudPlatform/professional-services
apache-2.0
Clean up the prediction endpoint Delete the prediction endpoint created by this notebook.
endpoint.delete()
examples/kubeflow-fairing-example/Fairing_XGBoost.ipynb
GoogleCloudPlatform/professional-services
apache-2.0
Word counting Write a function tokenize that takes a string of English text and returns a list of words. It should also remove stop words, which are common short words that are often removed before natural language processing. Your function should have the following logic: Split the string into lines using splitlines. Spl...
things = "hello!"

def ispuct(char, punctuation='`~!@#$%^&*()_-+={[}]|\:;"<,>.?/}\t'):
    return (not (char in punctuation))

# x = list(filter(ispuct, things))
# a = ''
# a.join(x)
# print(new_things)

def tokenize(s, stop_words='', punctuation='`~!@#$%^&*()_+={[}]|\:;"<,>.?/}\t'):
    m = []
    s = s.replace("-", " ")...
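A cleaner sketch of the same idea — strip punctuation, lowercase, drop stop words — using only the standard library (the exact punctuation set and edge-case behavior expected by the original assignment are assumptions here):

```python
import string

def tokenize(s, stop_words=(), punctuation=string.punctuation):
    """Split text into lowercase words, dropping punctuation and stop words."""
    # Map every punctuation character to a space, so hyphenated words split too.
    table = str.maketrans(punctuation, " " * len(punctuation))
    words = []
    for line in s.splitlines():
        for word in line.translate(table).lower().split():
            if word not in stop_words:
                words.append(word)
    return words

print(tokenize("Hello, world!\nThe cat.", stop_words={"the"}))
# ['hello', 'world', 'cat']
```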
assignments/assignment07/AlgorithmsEx01.ipynb
edwardd1/phys202-2015-work
mit