Quick Programming Powerup

NumPy arrays have a large number of useful ways to access data; we'll talk about two in this powerup!

Masking

A handy way to select elements in NumPy is masking:
* This lets you easily do things like select the X's where Y=True
import numpy as np

# Here is some fake data
Xs = np.random.uniform(size=(20))
Ys = np.random.uniform(size=(20)) > .8  # ~20% of our Ys should be True (the rest False)
print(Xs)
print(Ys)

# Here we'll use a mask to grab all True Xs
mask = (Ys == 1)
print(mask)
print(Xs[mask])

# Or, since the Ys are already True or False
print(Xs[Ys])
BSD-3-Clause
notebooks/4-ImageData.ipynb
jasonsydes/DNNWS_2022
Using Lists as Indices

A handy way to select elements in NumPy is using lists:
* This lets you easily do things like select the 3 biggest elements in X
# Some other handy tricks
Xs = np.random.uniform(size=(5))  # Input data
Ys = np.random.uniform(size=(5)) > .5  # Target data
print("Xs", Xs)
print("Ys", Ys)

# Lists of indices work too
print("Selected Xs", Xs[[1, 2, 4]])

# This can be useful if you want to grab the labels for the smallest values of x
# This gives you the indices of an a...
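The truncated comment above hints at getting sorted indices; a minimal sketch of that idea with `np.argsort` (the array values here are illustrative, not the notebook's random data):

```python
import numpy as np

Xs = np.array([0.3, 0.9, 0.1, 0.7, 0.5])  # fake data

order = np.argsort(Xs)   # indices that would sort Xs in ascending order
print(order[:2])         # indices of the 2 smallest values -> [2 0]
print(Xs[order[-3:]])    # the 3 biggest elements of Xs -> [0.5 0.7 0.9]
```

Combining `argsort` with list indexing is exactly the "grab the labels for the smallest values of x" trick: `Ys[np.argsort(Xs)[:k]]`.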
Images with Neural Networks

This notebook makes extensive use of examples and figures from [here](http://cs231n.github.io/convolutional-networks/), which is a great reference for further details.

GOALS
* Understand how image data is stored and used
* Write a multi-class classification model
* Be able to use convolutional...
import os
import os.path
import numpy as np
from matplotlib import pyplot as plt
from random import random
from sys import version
import tensorflow as tf  # needed for tf.keras below

print("Import complete")

# Load pre-shuffled Fashion-MNIST data into train and test sets
(_xtrain, _ytrain), (X_test, Y_test) = tf.keras.datasets.fashion_mnist.load_data()

# We want to include a...
* Note above that the labels are integers from 0-9
* Also note the images are integers from 0-255 (uint8)

We will deal with the labels first. Let's make some useful arrays and dictionaries to keep track of what each integer means.
# This is useful for making plots; it takes an integer
lookup_dict = {
    0: 'T-shirt/top',
    1: 'Trouser',
    2: 'Pullover',
    3: 'Dress',
    4: 'Coat',
    5: 'Sandal',
    6: 'Shirt',
    7: 'Sneaker',
    8: 'Bag',
    9: 'Ankle boot'
}
# Let's make a list in the order of the labels above, so [T-Shirt, Trouser, ...
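The truncated last comment suggests building an ordered list of class names from the dictionary; one way to sketch that (using the same dictionary as above):

```python
# Build an ordered list of class names from an integer-keyed lookup dictionary
lookup_dict = {0: 'T-shirt/top', 1: 'Trouser', 2: 'Pullover', 3: 'Dress',
               4: 'Coat', 5: 'Sandal', 6: 'Shirt', 7: 'Sneaker',
               8: 'Bag', 9: 'Ankle boot'}

# Index i of the list holds the name of class i
label_list = [lookup_dict[i] for i in range(len(lookup_dict))]
print(label_list[:2])  # ['T-shirt/top', 'Trouser']
```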
Multi-Class Classification

**Reminder**
* Classification is a problem where each of our examples (x) belongs to a class (y). Since neural networks are universal function approximators, we can use them to model $P(y|x)$.

**Like before, to change our problem we need**
* The correct activation on our last layer - **softmax**
* The correct l...
Y_train_one_hot = tf.keras.utils.to_categorical(Y_train, 10)
Y_develop_one_hot = tf.keras.utils.to_categorical(Y_develop, 10)
Y_test_one_hot = tf.keras.utils.to_categorical(Y_test, 10)
print('Example:', Y_train[0], '=', Y_train_one_hot[0])
Now let's handle the image data
* Our convolutional neural networks need a shape of Batch x Height x Width x Channels, for us (batch_size x 28 x 28 x 1)
* In this case channels=1, but for a color image you'll have 3 (RGB), and sometimes 4 with a transparency channel (RGBA)
* It's much easier for a neural network to handle data...
f = plt.figure(figsize=(15, 3))
# hstack arranges the first 7 images into one long image
plt.imshow(np.squeeze(np.hstack(X_train[0:7])), cmap='gray')

# Reshape to (batch, height, width, channels)
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1)
X_develop = X_develop.reshape(X_develop.shape[0], 28, 28, ...
Notice that the pixel values were imported as an integer array that saturates at `255`. Let's turn the data into floats $\in [0, 1]$.
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')

if X_train.max() > 1:
    X_train = X_train / 255
    X_test = X_test / 255
    X_develop = X_develop / 255

assert np.max(X_train) <= 1
assert np.max(X_test) <= 1
assert np.max(X_develop) <= 1
print("all sets scaled to float values between", X_train.min(...
The Take Away
* Image data is 3 dimensional (width, height, channel (i.e. color))
* It is often stored as 0-255 and should be normalized between 0-1
* Class labels are given as integers and need to be converted to **one hot** vectors
* Multi-class classification problems
    * Use **softmax** as an output
    * Use **Cat...
# Shape here does not include the batch size
input_layer = tf.keras.layers.Input(shape=X_train.shape[1:])

## Here is our magic layer to turn image data into something a dense layer can use
# Dense layers take a shape of (batch x features)
flat_input = tf.keras.layers.Flatten()(input_layer)

##
hidden_layer1 = tf.keras.lay...
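As a quick sanity check of the take-away above, softmax turns any real-valued output vector into a probability distribution. A minimal NumPy sketch of the idea (illustrative only, not the Keras implementation):

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability, then normalize
    e = np.exp(z - np.max(z))
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
print(round(float(probs.sum()), 6))  # 1.0 - probabilities sum to one
```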
Loss Curves

The Keras fit function returns a history object that we've ignored until now, but it's a very important tool. It records the loss of the training and development datasets at each epoch, as well as metrics like accuracy. Let's plot the loss.

**Most importantly**
* Is the development loss greater than the train l...
# We'll do this a lot, so let's put it in a function
def plot_history(history):
    plt.plot(history.history['loss'], label='Train')
    plt.plot(history.history['val_loss'], label='Develop')
    plt.xlabel('Epochs')
    plt.ylabel('Loss')
    plt.ylim((0, 1.5 * np.max(history.history['val_loss'])))
    plt.legend()
    ...
There are many techniques to deal with over-fitting and we'll talk more about them later, but the easiest way is to just stop the training earlier. You can do this with

```
keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=0, verbose=0, mode='auto', baseline=None, restore_best_weights=False)
```

This...
es = tf.keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0,
                                      patience=0, verbose=0, mode='auto')
history = dense_model.fit(X_train, Y_train_one_hot,
                          batch_size=32,
                          epochs=10,
                          verbose=1,
                          validation_data=(X_develop, Y_develop_one_hot),
                          callbacks=[es])
plot_hist...
Since we picked up training where we left off, the early stopping function quits training as soon as the develop loss stops going down.

Exercise

With that, let's practice writing our own dense-network image classifier. We will use a new dataset, CIFAR-10, as an example: https://www.cs.toronto.edu/~kriz/cifar.html
# Load CIFAR data into train and test sets
(_cfxtrain, _cfytrain), (cfX_test, cfY_test) = tf.keras.datasets.cifar10.load_data()

# Split into Train and Develop
train_index = []
develop_index = []
for i in range(len(_cfxtrain)):
    if random() < 0.8:
        train_index.append(i)
    else:
        develop_index.append(i)
cf...
Step 1: Scale your data to be between 0 and 1
"Your code here: normalize cfX_train/test/develop"

for data_set in [cfX_train, cfX_develop, cfX_test]:
    assert np.max(data_set) == 1., 'Max of your data set is ' + str(np.max(data_set)) + ' not 1'
    assert np.min(data_set) == 0., 'Min of your data set is ' + str(np.min(data_set)) + ' not 0'
print('Great job! Your dataset is nor...
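One possible way to fill in the exercise (a sketch only; the real `cfX_*` arrays are uint8 images in 0-255 as loaded above, stood in for here by a small fake array):

```python
import numpy as np

# Fake stand-in for a cfX_* image array (uint8, spanning 0-255)
cfX_train = np.array([[0, 128, 255]], dtype=np.uint8)

# Convert to float and scale so values span [0, 1]
cfX_train = cfX_train.astype('float32') / 255.0

print(cfX_train.min(), cfX_train.max())  # 0.0 1.0
```

The same two lines applied to each of `cfX_train`, `cfX_develop`, and `cfX_test` would satisfy the asserts in the cell above.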
Step 2: Create one-hot encoded labels

Name them:
* cfY_train_one_hot
* cfY_develop_one_hot
* cfY_test_one_hot
"Your code here"

assert 'cfY_train_one_hot' in locals(), 'cfY_train_one_hot not found'
assert 'cfY_develop_one_hot' in locals(), 'cfY_develop_one_hot not found'
assert 'cfY_test_one_hot' in locals(), 'cfY_test_one_hot not found'
assert (cfY_train_one_hot).shape[1] == 10, 'cfY_train_one_hot not the correct size'
...
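The notebook itself uses `tf.keras.utils.to_categorical` for this; the same one-hot encoding can be sketched in plain NumPy (shown here on fake labels, not the CIFAR ones):

```python
import numpy as np

labels = np.array([3, 0, 9])  # fake integer class labels in 0-9

# Row i of eye(10) is the one-hot vector for class i,
# so fancy indexing with the labels produces the one-hot matrix
one_hot = np.eye(10, dtype='float32')[labels]

print(one_hot.shape)  # (3, 10)
```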
Step 3: Create a Dense Neural Network

Write your own dense image classifier. Remember you'll need:
* an input layer
* a flatten layer
* some dense layers with activations
* an output layer with a softmax activation

Create and compile a model named **cifar_model**
* Make sure the loss is categorical_crossentropy
"your code here"

assert 'cifar_model' in locals(), "Could not find cifar_model"
assert cifar_model.input_shape == (None, 32, 32, 3), "Check your input shape is correct"
assert cifar_model.output_shape[1] == 10, "Check your output shape is correct"
assert cifar_model._is_compiled, "Make sure to compile your model"
assert cif...
Step 4: Fit your Model
"your code here"
Step 5: Plot your loss curves
"your code here"
Save Your Model
cifar_model.save("my_cifar_model")
Load Your Model
loaded_model=tf.keras.models.load_model("my_cifar_model")
Use your Model

We'll try a quick example with a photo. You can use mine or upload your own to Talapas.
* We need to open our photos and resize/reshape them for our model
* We need to rescale them to match training
import PIL

dog_image = PIL.Image.open('/gpfs/projects/bgmp/shared/2019_ML_workshop/datasets/Test_Photos/dog.jpg')
dog_array = np.asarray(dog_image)
print(dog_array.shape)
plt.imshow(dog_array)
Crop and Resize
length, width = dog_image.size
min_length = min([length, width])

# Crop a box with coordinates (left, top, right, bottom); (0, 0) is the upper left corner
new_image = dog_image.crop((0, 0, min_length, min_length))
print("Cropped Size", new_image.size)

new_image = new_image.resize((32, 32))
print("Resized Image", new_image)
new_array = np.asa...
Put it together into a function
def process_image(input_file):
    image = PIL.Image.open(input_file)
    length, width = image.size  # use the opened image, not the global dog_image
    min_length = min([length, width])
    # Crop a box with coordinates (left, top, right, bottom); (0, 0) is the upper left corner
    new_image = image.crop((0, 0, min_length, min_length))
    new_image = new_image.resize((32, 32))
    ...
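The crop-to-square step above can also be checked without PIL, as a NumPy slicing sketch (the fake array and helper name are illustrative):

```python
import numpy as np

def crop_square(img):
    # Crop an (H, W, C) image array to a square from the top-left corner,
    # mirroring the PIL crop((0, 0, m, m)) used above
    m = min(img.shape[0], img.shape[1])
    return img[:m, :m]

fake = np.zeros((48, 32, 3), dtype=np.uint8)  # fake taller-than-wide image
print(crop_square(fake).shape)  # (32, 32, 3)
```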
Borrelia MM1 Plasmid Plots

*Chris Lausted and Chenkai Luo*
*7 Jan 2018*

We can upload a file called `replicons.fna` before running this notebook. Or, in this case, let's download the plasmids of B31 (NRZ) and put them into one file.
%%bash
## Simple function to download fasta and genbank
## <https://www.ncbi.nlm.nih.gov/genome/738?genome_assembly_id=393461>
dlncbi () {
    ## Download with wget and rename.
    wget -q -O temp.fna "https://www.ncbi.nlm.nih.gov/search/api/sequence/$1?report=fasta"
    ## Rename the plasmid to start with something like l...
Apache-2.0
borreliaplots_MM1.ipynb
lausted/cl_borrelia_plasmid_plot
The FBI is tracking a potential smuggling ring that is led by a shady character known by the nom de guerre of The Hamburgler, who is using social media platforms to help organize her or his efforts. Your mission, should you choose to accept it, is to create a system that uses the API of various services to trace com...
!pip install praw
!pip install pytz  # suggested because I kept getting a tzinfo error when importing datetime; this will specify the Pacific timezone

import praw
import time
from time import sleep  # importing to slow down execution
import datetime
from datetime import datetime
from datetime import timedelta
from pra...
CC0-1.0
Modone.ipynb
mrandolph95/STC510
2. Enter login and script info for Reddit
usr_name = "nunya_9911"
usr_password = "disposablepassword123!"
reddit_app_id = '0Fss0e88a5UL1dWmgk2vug'
reddit_app_secret = 'AmCxyt0gEFlMe6r2TDs6ILzQfZI5Eg'
reddit = praw.Reddit(user_agent="Mod 1 (by u/nunya_9911)",
                    client_id=reddit_app_id,
                    client_secret=reddit_app_secret,
                    # a...
**This is an explanation of how to use search.** It is copied from the floating comment when I began to type "search". I put it here so it was easy to reference while I was typing everything out.

```
def search(query: str, sort: str = 'relevance', syntax: str = 'lucene', time_filter: str = 'all', **generator_kwargs: Any) -> Iterat...
```
# this labels the current date so I can use it later
rightnow = datetime.now()

# per the reading about time. the reading said to enter datetime.datetime.timedelta,
# but that didn't work. did this instead. not sure if Python updated since the reading was published?
delta = timedelta(hours=72)

# this defines the time 3 ...
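The now-minus-72-hours pattern above can be checked with the standard library alone (a minimal sketch; the variable names mirror the cell above):

```python
from datetime import datetime, timedelta

rightnow = datetime.now()
delta = timedelta(hours=72)
cutoff = rightnow - delta  # anything newer than this is within the last 3 days

print(cutoff < rightnow)        # True
print(delta.total_seconds())   # 259200.0 seconds in 72 hours
```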
Jupyter/Colab Notebook to Showcase sparkMeasure APIs for Python

[Run on Google Colab Research](https://colab.research.google.com/github/LucaCanali/sparkMeasure/blob/master/examples/SparkMeasure_Jupyter_Colab_Example.ipynb)

**sparkMeasure is a tool for performance troubleshooting of Apache Spark workloads.** It simp...
# Install Spark
# Note: This installs the latest Spark version (version 2.4.3, as tested in May 2019)
!pip install pyspark

from pyspark.sql import SparkSession

# Create Spark Session
# This example uses a local cluster; you can modify master to use YARN or K8S if available
# This example downloads sparkMeasure 0.1...
+---------+
| count(1)|
+---------+
|100000000|
+---------+

Scheduling mode = FIFO
Spark Context default degree of parallelism = 8
Aggregated Spark stage metrics:
numStages => 4
sum(numTasks) => 25
elapsedTime => 1518 (2 s)
sum(stageDuration) => 1493 (1 s)
sum(executorRunTime) => 11473 (11 s)
sum(executorCpuTime) => ...
Apache-2.0
examples/SparkMeasure_Jupyter_Colab_Example.ipynb
knockdata/sparkMeasure
Example of collecting using Task Metrics

Collecting Spark task metrics at the granularity of each task completion has additional overhead compared to collecting at the stage completion level, so this option should only be used if you need data with this finer granularity, for example because you want to study skew ...
from sparkmeasure import TaskMetrics

taskmetrics = TaskMetrics(spark)
taskmetrics.begin()
spark.sql("select count(*) from range(1000) cross join range(1000) cross join range(100)").show()
taskmetrics.end()
taskmetrics.print_report()
+---------+
| count(1)|
+---------+
|100000000|
+---------+

Scheduling mode = FIFO
Spark Context default degree of parallelism = 8
Aggregated Spark task metrics:
numtasks => 25
elapsedTime => 1478 (1 s)
sum(duration) => 11402 (11 s)
sum(schedulerDelay) => 62
sum(executorRunTime) => 11299 (11 s)
sum(executorCpuTime) => ...
Prelim Exam Question 1
import numpy as np

c = np.eye(4)  # 4x4 identity matrix
print(c)
[[1. 0. 0. 0.]
 [0. 1. 0. 0.]
 [0. 0. 1. 0.]
 [0. 0. 0. 1.]]
Apache-2.0
PrelimExam.ipynb
cieloanne/Linear-Algebra-58019
Question 2
answer = 2 * c  # scale the identity matrix
print(answer)
[[2. 0. 0. 0.]
 [0. 2. 0. 0.]
 [0. 0. 2. 0.]
 [0. 0. 0. 2.]]
Question 3
A = np.array([2, 7, 4])
B = np.array([3, 9, 8])
cross = np.cross(A, B)  # cross product of A and B
print(cross)
[20 -4 -3]
Generate CSV files from database Excel files

This script converts the Excel files that are extracted from the original rodent basal ganglia database into CSV files that are used by the notebook that generates the database, `1. Create the Rodent Basal Ganglia Graph.ipynb`. The project provides these files, so it is not ...
## REGIONS
import pandas as pd

files = [
    ("../Data/xlsx/basal_ganglia_regions.xlsx", "regions_other",
     "../Data/csvs/basal_ganglia/regions/regions_other.csv", {}),
    ("../Data/xlsx/basal_ganglia_regions.xlsx", "region_records",
     "../Data/csvs/basal_ganglia/regions/region_records.csv",
     {"Original_framework": int, "D...
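The tuples above pair each sheet with an output path and a per-column dtype mapping, which suggests a read-cast-write loop. A minimal stand-in for one such conversion, using an in-memory DataFrame and buffer instead of the project's real files (the column names here are illustrative guesses, not the database's actual schema):

```python
import io
import pandas as pd

# Fake stand-in for one Excel sheet's contents
df = pd.DataFrame({"Region": ["STN", "GPe"], "Original_framework": ["1", "2"]})

# Apply the per-sheet dtype mapping, as the tuples above suggest
df = df.astype({"Original_framework": int})

# Write CSV to an in-memory buffer instead of a real output path
buf = io.StringIO()
df.to_csv(buf, index=False)
print(buf.getvalue().splitlines()[0])  # Region,Original_framework
```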
Converted all Cells from xlsx to csv
CC-BY-4.0
3. Initial file generation of CSVs for the graph generation/basal_ganglia_xlsx_to_csv.ipynb
marenpg/jupyter_basal_ganglia
Vertex AI: Create, train, and deploy an AutoML text classification model

Learning Objective

In this notebook, you learn how to:
* Create a dataset and import data
* Train an AutoML model
* Get and review evaluations for the model
* Deploy a model to an endpoint
* Get online predictions
* Get batch predictions

Introduction

Thi...
# Setup your dependencies
import os

# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")

# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
    USER_FLAG = ...
Requirement already satisfied: google-cloud-aiplatform in /opt/conda/lib/python3.7/site-packages (1.1.1)
Collecting google-cloud-aiplatform
  Downloading google_cloud_aiplatform-1.3.0-py2.py3-none-any.whl (1.3 MB)
Requirement already satisfied...
Apache-2.0
courses/machine_learning/deepdive2/launching_into_ml/labs/automl_text_classification.ipynb
mohakala/training-data-analyst
Please ignore any incompatibility warnings. **Restart** the kernel before proceeding further (On the Notebook menu - Kernel - Restart Kernel).

Set your project ID

Finally, you must initialize the client library before you can send requests to the Vertex AI service. With the Python SDK, you initialize the client library...
# import necessary libraries
import os
from datetime import datetime

import jsonlines
from google.cloud import aiplatform, storage
from google.protobuf import json_format

PROJECT_ID = "[your-project-id]"
REGION = "us-central1"

# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
    shell_o...
Project ID: qwiklabs-gcp-00-09d98f4803b0
Create a dataset and import your data

The notebook uses the 'Happy Moments' dataset for demonstration purposes. You can change it to another text classification dataset that [conforms to the data preparation requirements](https://cloud.google.com/vertex-ai/docs/datasets/prepare-textclassification). Using the Python SDK,...
# TODO
# Use a timestamp to ensure unique resources
TIMESTAMP = # TODO: Your code goes here

src_uris = "gs://cloud-ml-data/NL-classification/happiness.csv"
display_name = f"e2e-text-dataset-{TIMESTAMP}"

# TODO
# create a dataset and import the dataset
ds = # TODO: Your code goes here(
    display_name=display_name,
    ...
INFO:google.cloud.aiplatform.datasets.dataset:Creating TextDataset
INFO:google.cloud.aiplatform.datasets.dataset:Create TextDataset backing LRO: projects/259224131669/locations/us-central1/datasets/7829200088927830016/operations/2215787784218607616
INFO:google.cloud.aiplatform.datasets.dataset:TextDataset created. Reso...
Train your text classification model

Once your dataset has finished importing data, you are ready to train your model. To do this, you first need the full resource name of your dataset, where the full name has the format `projects/[YOUR_PROJECT]/locations/us-central1/datasets/[YOUR_DATASET_ID]`. If you don't have the r...
# TODO
# list all of the datasets in your project
datasets = # TODO: Your code goes here(filter=f'display_name="{display_name}"')
print(datasets)
[<google.cloud.aiplatform.datasets.text_dataset.TextDataset object at 0x7fe2544d6dd0> resource name: projects/259224131669/locations/us-central1/datasets/7829200088927830016]
When you create a new model, you need a reference to the `TextDataset` object that corresponds to your dataset. You can use the `ds` variable you created previously when you created the dataset or you can also list all of your datasets to get a reference to your dataset. Each item returned from `TextDataset.list()` is ...
# Get the dataset ID if it's not available
dataset_id = "[your-dataset-id]"
if dataset_id == "[your-dataset-id]":
    # Use the reference to the new dataset captured when we created it
    dataset_id = ds.resource_name.split("/")[-1]
    print(f"Dataset ID: {dataset_id}")

text_dataset = aiplatform.TextDataset(dataset...
Dataset ID: 7829200088927830016
Now you can begin training your model. Training the model is a two-part process:
1. **Define the training job.** You must provide a display name and the type of training you want when you define the training job.
2. **Run the training job.** When you run the training job, you need to supply a reference to the dataset to ...
# Define the training job
training_job_display_name = f"e2e-text-training-job-{TIMESTAMP}"

# TODO
# constructs an AutoML Text Training Job
job = # TODO: Your code goes here(
    display_name=training_job_display_name,
    prediction_type="classification",
    multi_label=False,
)

model_display_name = f"e2e-text-classifi...
INFO:google.cloud.aiplatform.training_jobs:View Training: https://console.cloud.google.com/ai/platform/locations/us-central1/training/1280924449289273344?project=259224131669
INFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/128092444928927334...
Get and review model evaluation scores

After your model has finished training, you can review the evaluation scores for it. First, you need to get a reference to the new model. As with datasets, you can either use the reference to the `model` variable you created when you deployed the model, or you can list all of the model...
# TODO
# list the aiplatform model
models = # TODO: Your code goes here(filter=f'display_name="{model_display_name}"')
print(models)
[<google.cloud.aiplatform.models.Model object at 0x7fe24dedda90> resource name: projects/259224131669/locations/us-central1/models/274218199967334400]
Using the model name (in the format `projects/[PROJECT_NAME]/locations/us-central1/models/[MODEL_ID]`), you can get its model evaluations. To get model evaluations, you must use the underlying service client.

Building a service client requires that you provide the name of the regionalized hostname used for your model. I...
# Get the ID of the model
model_name = "[your-model-resource-name]"
if model_name == "[your-model-resource-name]":
    # Use the `resource_name` of the Model instance you created previously
    model_name = model.resource_name
    print(f"Model name: {model_name}")

# Get a reference to the Model Service client
client...
Model name: projects/259224131669/locations/us-central1/models/274218199967334400
Before you can view the model evaluation you must first list all of the evaluations for that model. Each model can have multiple evaluations, although a new model is likely to only have one.
model_evaluations = model_service_client.list_model_evaluations(parent=model_name)
model_evaluation = list(model_evaluations)[0]
Now that you have the model evaluation, you can look at your model's scores. If you have questions about what the scores mean, review the [public documentation](https://cloud.google.com/vertex-ai/docs/training/evaluating-automl-modelstext).

The results returned from the service are formatted as [`google.protobuf.Value`]...
model_eval_dict = json_format.MessageToDict(model_evaluation._pb)
metrics = model_eval_dict["metrics"]
confidence_metrics = metrics["confidenceMetrics"]

print(f'Area under precision-recall curve (AuPRC): {metrics["auPrc"]}')
for confidence_scores in confidence_metrics:
    metrics = confidence_scores.keys()
    print(...
Area under precision-recall curve (AuPRC): 0.9556533
recallAt1: 0.8852321
precisionAt1: 0.8852321
recall: 1.0
precision: 0.14285715
f1ScoreAt1: 0.8852321
f1Score: 0.25
f1Score: 0.82728595
f1ScoreAt1: 0.8852321
recall: 0.96202534
precision: 0.72565246
recallAt1: 0.8852321
precisionAt1: 0.8852321
confide...
Deploy your text classification model

Once your model has completed training, you must deploy it to an _endpoint_ to get online predictions from it. When you deploy the model to an endpoint, a copy of the model is made on the endpoint with a new resource name and display name.

You can deploy multiple models to the same ...
deployed_model_display_name = f"e2e-deployed-text-classification-model-{TIMESTAMP}"

# TODO
# deploy a model
endpoint = # TODO: Your code goes here(
    deployed_model_display_name=deployed_model_display_name,
    sync=True
)
INFO:google.cloud.aiplatform.models:Creating Endpoint
INFO:google.cloud.aiplatform.models:Create Endpoint backing LRO: projects/259224131669/locations/us-central1/endpoints/7980783159979540480/operations/3267096822232907776
INFO:google.cloud.aiplatform.models:Endpoint created. Resource name: projects/259224131669/locat...
In case you didn't record the name of the new endpoint, you can get a list of all your endpoints as you did before with datasets and models. For each endpoint, you can list the models deployed to that endpoint. To get a reference to the model that you just deployed, you can check the `display_name` of each model deploy...
endpoints = aiplatform.Endpoint.list()
endpoint_with_deployed_model = []
for endpoint_ in endpoints:
    for model in endpoint_.list_models():
        if model.display_name.find(deployed_model_display_name) == 0:
            endpoint_with_deployed_model.append(endpoint_)
print(endpoint_with_deployed_model)
[<google.cloud.aiplatform.models.Endpoint object at 0x7fe24de99d10> resource name: projects/259224131669/locations/us-central1/endpoints/7980783159979540480]
Get online predictions from your model

Now that you have your endpoint's resource name, you can get online predictions from the text classification model. To get the online prediction, you send a prediction request to your endpoint.
endpoint_name = "[your-endpoint-name]"
if endpoint_name == "[your-endpoint-name]":
    endpoint_name = endpoint.resource_name
    print(f"Endpoint name: {endpoint_name}")

endpoint = aiplatform.Endpoint(endpoint_name)
content = "I got a high score on my math final!"

# TODO
# send a prediction request to your endpoint
res...
Endpoint name: projects/259224131669/locations/us-central1/endpoints/7980783159979540480
Prediction ID: 5078374828348538880
Prediction display name: affection
Prediction confidence score: 4.789760350831784e-05
Prediction ID: 4213683699893403648
Prediction display name: achievement
Prediction confidence score: 0.9997887...
Get batch predictions from your model

You can get batch predictions from a text classification model without deploying it. You must first format all of your prediction instances (prediction input) in JSONL format, and you must store the JSONL file in a Google Cloud Storage bucket. You must also provide a Google Cloud St...
instances = [
    "We hiked through the woods and up the hill to the ice caves",
    "My kitten is so cute",
]
input_file_name = "batch-prediction-input.jsonl"
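The batch-input file named above is plain JSONL: one JSON object per line. As a format illustration only, here is a minimal stdlib sketch producing one record per instance; the `{"content": ..., "mimeType": ...}` record layout and the `gs://my-bucket/...` URIs are assumptions standing in for the lab's actual upload step:

```python
import json

# Fake instance texts standing in for the `instances` list above
instances = [
    "We hiked through the woods and up the hill to the ice caves",
    "My kitten is so cute",
]

# One JSON object per line; the record layout here is an assumption,
# not necessarily the lab's exact schema
lines = [json.dumps({"content": f"gs://my-bucket/input_{i}.txt",
                     "mimeType": "text/plain"})
         for i, _ in enumerate(instances)]
jsonl = "\n".join(lines)
print(jsonl.splitlines()[0])
```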
For batch prediction, you must supply the following:
+ All of your prediction instances as individual files on Google Cloud Storage, as TXT files for your instances
+ A JSONL file that lists the URIs of all your prediction instances
+ A Google Cloud Storage bucket to hold the output from batch prediction

For this tutorial,...
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")

BUCKET_NAME = "[your-bucket-name]"
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
    BUCKET_NAME = f"automl-text-notebook-{TIMESTAMP}"
BUCKET_URI = f"gs://{BUCKET_NAME}"

! gsutil mb -l $REGION $BUCKET_URI

# Instantiate the Stor...
Now that you have the bucket with the prediction instances ready, you can send a batch prediction request to Vertex AI. When you send a request to the service, you must provide the URI of your JSONL file and your output bucket, including the `gs://` protocols. With the Python SDK, you can create a batch prediction job b...
job_display_name = "e2e-text-classification-batch-prediction-job"
model = aiplatform.Model(model_name=model_name)

# TODO
# create a batch prediction job
batch_prediction_job = # TODO: Your code goes here(
    job_display_name=job_display_name,
    gcs_source=f"{BUCKET_URI}/{input_file_name}",
    gcs_destination_prefi...
INFO:google.cloud.aiplatform.jobs:Creating BatchPredictionJob
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/259224131669/locations/us-central1/batchPredictionJobs/3571004859807170560
INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:
INFO:google....
Once the batch prediction job completes, the Python SDK prints out the resource name of the batch prediction job in the format `projects/[PROJECT_ID]/locations/[LOCATION]/batchPredictionJobs/[BATCH_PREDICTION_JOB_ID]`. You can query the Vertex AI service for the status of the batch prediction job using its ID.

The follo...
from google.cloud.aiplatform import jobs

batch_job = jobs.BatchPredictionJob(batch_prediction_job_name)
print(f"Batch prediction job state: {str(batch_job.state)}")
Batch prediction job state: JobState.JOB_STATE_SUCCEEDED
After the batch job has completed, you can view the results of the job in your output Storage bucket. You might want to first list all of the files in your output bucket to find the URI of the output file.
BUCKET_OUTPUT = f"{BUCKET_URI}/output"
! gsutil ls -a $BUCKET_OUTPUT
gs://qwiklabs-gcp-00-09d98f4803b0/output/prediction-e2e-text-classification-model-20210824122127-2021-08-24T17:42:17.359307Z/
The output from the batch prediction job should be contained in a folder (or _prefix_) that includes the name of the batch prediction job plus a time stamp for when it was created.For example, if your batch prediction job name is `my-job` and your bucket name is `my-bucket`, the URI of the folder containing your output...
RESULTS_DIRECTORY = "prediction_results" RESULTS_DIRECTORY_FULL = f"{RESULTS_DIRECTORY}/output" # Create missing directories os.makedirs(RESULTS_DIRECTORY, exist_ok=True) # Get the Cloud Storage paths for each result ! gsutil -m cp -r $BUCKET_OUTPUT $RESULTS_DIRECTORY # Get most recently modified directory latest_di...
Copying gs://qwiklabs-gcp-00-09d98f4803b0/output/prediction-e2e-text-classification-model-20210824122127-2021-08-24T17:42:17.359307Z/predictions_00001.jsonl... / [1/1 files][ 945.0 B/ 945.0 B] 100% Done Operation completed over 1 objects/945.0 B. ...
Apache-2.0
courses/machine_learning/deepdive2/launching_into_ml/labs/automl_text_classification.ipynb
mohakala/training-data-analyst
With all of the results files downloaded locally, you can open them and read the results. In this tutorial, you use the [`jsonlines`](https://jsonlines.readthedocs.io/en/latest/) library to read the output results.The following cell opens up the JSONL output file and then prints the predictions for each instance.
# Get downloaded results in directory results_files = [] for dirpath, subdirs, files in os.walk(latest_directory): for file in files: if file.find("predictions") >= 0: results_files.append(os.path.join(dirpath, file)) # Consolidate all the results into a list results = [] for results_file in r...
instance: gs://qwiklabs-gcp-00-09d98f4803b0/input_1.txt ids: ['5078374828348538880', '7384217837562232832', '4213683699893403648', '8825369718320791552', '1619610314527997952', '466688809921150976', '2772531819134844928'] displayNames: ['affection', 'enjoy_the_moment', 'achievement', 'nature', 'leisure', 'bonding', ...
Apache-2.0
courses/machine_learning/deepdive2/launching_into_ml/labs/automl_text_classification.ipynb
mohakala/training-data-analyst
Cleaning upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projectsshutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:* ...
if os.getenv("IS_TESTING"): ! gsutil rm -r $BUCKET_URI batch_job.delete() # `force` parameter ensures that models are undeployed before deletion endpoint.delete(force=True) model.delete() text_dataset.delete() # Training job job.delete()
INFO:google.cloud.aiplatform.base:Deleting BatchPredictionJob : projects/259224131669/locations/us-central1/batchPredictionJobs/3571004859807170560 INFO:google.cloud.aiplatform.base:Delete BatchPredictionJob backing LRO: projects/259224131669/locations/us-central1/operations/7219005495250518016 INFO:google.cloud.aipla...
Apache-2.0
courses/machine_learning/deepdive2/launching_into_ml/labs/automl_text_classification.ipynb
mohakala/training-data-analyst
Import everything
import matplotlib import matplotlib.pyplot as plt import numpy as np from keras.models import Sequential from keras.layers import Dense from sklearn.datasets import make_blobs %matplotlib inline matplotlib.rcParams['figure.figsize'] = (10.0, 8.0) #data, label = make_moons(n_samples=500, noise=0.2, random_state=0) #lab...
_____no_output_____
BSD-2-Clause
sklearn.datasets_create_two_datasets_blobs.ipynb
tonyhuang84/notebook_dnn
Using Machine Learning to explain and predict the life expectancy of different countries Data Source:https://www.kaggle.com/kumarajarshi/life-expectancy-who/data Timeframe of the Data:2000 - 2015 Data Preprocessing
# Importing libraries %matplotlib inline import matplotlib.pyplot as plt import numpy as np import pandas as pd # Importing the dataset life_data = pd.read_csv("data/Life_expectancy_data.csv", sep=",") # Dropping the year column as Life expectancy for each country between 1950 - 2015 is analyzed in another model life_d...
_____no_output_____
MIT
charts/.ipynb_checkpoints/life_expectancy-checkpoint.ipynb
NaveenRajuS/lung-disease
Exploratory Data Analysis
life_data.columns # GDP vs. Life Expectancy plt.scatter(life_data['GDP'], life_data['Life expectancy ']) plt.title('GDP vs. Life Expectancy') plt.xlabel('GDP') plt.ylabel('Life Expectancy') # Adult Mortality vs. Life Expectancy plt.scatter(life_data['Adult Mortality'], life_data['Life expectancy ']) plt.title('Adult Mo...
_____no_output_____
MIT
charts/.ipynb_checkpoints/life_expectancy-checkpoint.ipynb
NaveenRajuS/lung-disease
How to: Scrape the Webwith Python + requests + BeautifulSoup Before you replicate the following code, make sure you have Python and all dependencies installed.- To install package manager brew: `/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"` - To install Python3: ...
import requests from bs4 import BeautifulSoup
_____no_output_____
MIT
Simple-Web-Scraper-Python.ipynb
demetriospogkas/Web-Scraping-with-Beautiful-Soup
Grab HTML source code Send GET request
url = 'http://www.imfdb.org/wiki/Category:Movie' headers = { 'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Mobile Safari/537.36', 'Connection' : 'keep-alive' } proxies = { # Include your proxies if needed # 'http':'...'...
_____no_output_____
MIT
Simple-Web-Scraper-Python.ipynb
demetriospogkas/Web-Scraping-with-Beautiful-Soup
Save the response
text = response.text text
_____no_output_____
MIT
Simple-Web-Scraper-Python.ipynb
demetriospogkas/Web-Scraping-with-Beautiful-Soup
Parse the response with BeautifulSoup
souped = BeautifulSoup(text, "html.parser") souped
_____no_output_____
MIT
Simple-Web-Scraper-Python.ipynb
demetriospogkas/Web-Scraping-with-Beautiful-Soup
Find the `<div>` for movie pages
movie_pages = souped.find('div', attrs={'id':'mw-pages'}) movie_pages
_____no_output_____
MIT
Simple-Web-Scraper-Python.ipynb
demetriospogkas/Web-Scraping-with-Beautiful-Soup
Grab all links to movie pages
bullets = movie_pages.find_all('li') bullets urls = [] # Initiate an empty list for bullet in bullets: # simple for loop url = 'http://www.imfdb.org' + bullet.a['href'] # local scope variable print(url) # console.log in JavaScript urls.append(url) urls
_____no_output_____
MIT
Simple-Web-Scraper-Python.ipynb
demetriospogkas/Web-Scraping-with-Beautiful-Soup
Find the link to the next pageConveniently enough, it's the very last `<a>` in the movie_pages `<div>`
movie_pages movie_pages.find_all('a') # This is a list type(movie_pages.find_all('a')) next_page = movie_pages.find_all('a')[-1] next_page next_page.text next_page['href'] next_page_url = 'http://www.imfdb.org' + next_page['href'] next_page_url
_____no_output_____
MIT
Simple-Web-Scraper-Python.ipynb
demetriospogkas/Web-Scraping-with-Beautiful-Soup
Bind that to one piece of codeto extract 5k pages/links
urls = [] def scrape_the_web(url): # Python function with one parameter headers = { 'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Mobile Safari/537.36', 'Connection' : 'keep-alive' } proxies = { # Don't for...
_____no_output_____
MIT
Simple-Web-Scraper-Python.ipynb
demetriospogkas/Web-Scraping-with-Beautiful-Soup
Now that we've got every link, let's extract firearm information from each page
url = 'http://www.imfdb.org/wiki/American_Graffiti' headers = { 'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Mobile Safari/537.36', 'Connection' : 'keep-alive' } proxies = { # Don't forget your proxies if you need any } ...
_____no_output_____
MIT
Simple-Web-Scraper-Python.ipynb
demetriospogkas/Web-Scraping-with-Beautiful-Soup
Let's try with another movie
url = 'http://www.imfdb.org/wiki/And_All_Will_Be_Quiet_(Potem_nastapi_cisza)' headers = { 'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Mobile Safari/537.36', 'Connection' : 'keep-alive' } proxies = { # Don't forget your p...
[' Pistols ', ' Tokarev TT-33 ', 'Luger P08', ' Submachine Guns ', 'PPSh-41', ' MP40 ', ' Machine Guns ', 'Degtyaryov DP-28', ' MG34 ', 'Goryunov SG-43 Machine Gun', ' Maxim ', ' Rifles ', ' Mosin Nagant M44 Carbine ', ' Mosin Nagant M38 Carbine ', ' Karabiner 98k ', ' Hand Grenades ', ' F-1 hand grenade ', ' Model 24 ...
MIT
Simple-Web-Scraper-Python.ipynb
demetriospogkas/Web-Scraping-with-Beautiful-Soup
Remove the extra spaces, or any special characters
print([span.next.next.next.text.strip() for span in souped.find_all('span', attrs={'class':'mw-headline'})])
['Tokarev TT-33', 'Various characters are seen with a Tokarev TT-33 pistol.', 'Some German NCO and officers carry a Luger P08 pistol.', 'PPSh-41', 'Polish infantrymen are mainly armed with PPSh-41 submachine guns.', 'MP40 is submachine gun used by German infantrymen.', 'Degtyaryov DP-28', 'Polish soldiers mainly use De...
MIT
Simple-Web-Scraper-Python.ipynb
demetriospogkas/Web-Scraping-with-Beautiful-Soup
Bind into one piece of code
len(urls) headers = { 'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Mobile Safari/537.36', 'Connection' : 'keep-alive' } proxies = { # Don't forget your proxies if you need any } every_gun_in_every_movie = [] uncaught_movies = [] ...
_____no_output_____
MIT
Simple-Web-Scraper-Python.ipynb
demetriospogkas/Web-Scraping-with-Beautiful-Soup
And since we're at it`pip3 install pandas`
import pandas as pd df = pd.DataFrame(every_gun_in_every_movie) df df.movie_title.value_counts().head(8) df.gun_used.value_counts().head(8) df.to_csv("every_gun_in_every_movie.csv", index=False) from matplotlib import pyplot as plt %matplotlib inline df.movie_title.value_counts().head(8).plot(kind='bar') plt.style.use(...
_____no_output_____
MIT
Simple-Web-Scraper-Python.ipynb
demetriospogkas/Web-Scraping-with-Beautiful-Soup
Fibonacci Numbers A Fibonacci number F(n) is computed as the sum of the two numbers preceding it in a Fibonacci sequence (0), 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ..., for example, F(10) = 55. More formally, we can define a Fibonacci number F(n) as $F(n) = F(n-1) + F(n-2)$, for integers $n > 1$:$$F(n)=\begin{cases} 0...
def fibo_recurse(n): if n <= 1: return n else: return fibo_recurse(n-1) + fibo_recurse(n-2) print(fibo_recurse(0)) print(fibo_recurse(1)) print(fibo_recurse(10))
0 1 55
MIT
algorithms_in_ipython_notebooks/ipython_nbs/efficiency/fibonacci-tree.ipynb
gopala-kr/ds-notebooks
However, it is unfortunately a terribly inefficient algorithm with an exponential running time of $O(2^n)$. The main reason it is so slow is that we recompute the Fibonacci number $F(n) = F(n-1) + F(n-2)$ repeatedly, as shown in the recursive tree below: ![Fibonacci Number Recursive Tree](../../images/efficiency/fibona...
def fibo_dynamic(n): f, f_minus_1 = 0, 1 for i in range(n): f_minus_1, f = f, f + f_minus_1 return f print(fibo_dynamic(0)) print(fibo_dynamic(1)) print(fibo_dynamic(10))
0 1 55
MIT
algorithms_in_ipython_notebooks/ipython_nbs/efficiency/fibonacci-tree.ipynb
gopala-kr/ds-notebooks
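Another way to eliminate the repeated subproblems, while keeping the recursive definition, is memoization. This sketch (not part of the original notebook) uses Python's `functools.lru_cache` so that each F(n) is computed only once:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fibo_memo(n):
    # Same recursion as fibo_recurse, but every result is cached,
    # so the running time drops from O(2^n) to O(n)
    if n <= 1:
        return n
    return fibo_memo(n - 1) + fibo_memo(n - 2)

print(fibo_memo(10))   # 55
print(fibo_memo(100))  # 354224848179261915075
```

Unlike the iterative version, this keeps the code shape of the mathematical definition, at the cost of storing all intermediate results.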
(If you are interested in other approaches, I recommend you take a look at the pages on [Wikipedia](https://en.wikipedia.org/wiki/Fibonacci_number) and [Wolfram](http://mathworld.wolfram.com/FibonacciNumber.html).) To get a rough idea of the running times of each of our implementations, let's use the `%timeit` magic fo...
%timeit -r 3 -n 10 fibo_recurse(n=30) %timeit -r 3 -n 10 fibo_dynamic(n=30)
10 loops, best of 3: 4.05 µs per loop
MIT
algorithms_in_ipython_notebooks/ipython_nbs/efficiency/fibonacci-tree.ipynb
gopala-kr/ds-notebooks
Finally, let's benchmark our implementations for varying sizes of n:
import timeit funcs = ['fibo_recurse', 'fibo_dynamic'] orders_n = list(range(0, 50, 10)) times_n = {f:[] for f in funcs} for n in orders_n: for f in funcs: times_n[f].append(min(timeit.Timer('%s(n)' % f, 'from __main__ import %s, n' % f) .repeat(repeat=3, number=5))) %...
_____no_output_____
MIT
algorithms_in_ipython_notebooks/ipython_nbs/efficiency/fibonacci-tree.ipynb
gopala-kr/ds-notebooks
Homework 3. Pandas Important notes1. *When you open this file on GitHub, copy the address to this file from the address bar of your browser. Now you can go to [Google Colab](https://colab.research.google.com/), click `File -> Open notebook -> GitHub`, paste the copied URL and click the search button (the one with the ...
import pandas as pd df = pd.read_csv("https://github.com/hse-mlwp-2022/assignment3-template/raw/main/data/T100_MARKET_ALL_CARRIER.zip") df.columns = [col.lower() for col in df.columns]
_____no_output_____
MIT
pandas_exercise.ipynb
Tima1117/assignment2-template
2. What columns are in the data? (0.5 point)
columns = df.columns print(columns)
Index(['passengers', 'freight', 'mail', 'distance', 'unique_carrier', 'airline_id', 'unique_carrier_name', 'unique_carrier_entity', 'region', 'carrier', 'carrier_name', 'carrier_group', 'carrier_group_new', 'origin_airport_id', 'origin_airport_seq_id', 'origin_city_market_id', 'origin', 'ori...
MIT
pandas_exercise.ipynb
Tima1117/assignment2-template
3. How many distinct carrier names are in the dataset? (0.5 point)
carrier_names = df['unique_carrier_name'].nunique() print(carrier_names)
318
MIT
pandas_exercise.ipynb
Tima1117/assignment2-template
4. Calculate the totals of the `freight`, `mail`, and `passengers` columns for flights from the United Kingdom to the United States. (1 point)
freight_total = df[(df['origin_country_name'] == 'United Kingdom') & (df['dest_country_name'] == 'United States')]['freight'].sum() mail_total = df[(df['origin_country_name'] == 'United Kingdom') & (df['dest_country_name'] == 'United States')]['mail'].sum() passengers_total = df[(df['origin_country_name'] == 'United Ki...
freight total: 903296879.0 mail total: 29838395.0 passengers total: 10685608.0
MIT
pandas_exercise.ipynb
Tima1117/assignment2-template
5. Which 10 carriers flew the most passengers out of the United States to another country? (1.5 points)The result should be a Python iterable, e.g. a list or a corresponding pandas object
top_10_by_passengers = df[(df['origin_country_name'] == 'United States') & (df['dest_country_name'] != 'United States')].groupby('unique_carrier_name')['passengers'].sum().nlargest(10).index.tolist() print(f"List of top 10 carriers with max number of passengers flown out of US: {top_10_by_passengers}")
List of top 10 carriers with max number of passengers flown out of US: ['American Airlines Inc.', 'United Air Lines Inc.', 'Delta Air Lines Inc.', 'JetBlue Airways', 'British Airways Plc', 'Lufthansa German Airlines', 'Westjet', 'Air Canada', 'Southwest Airlines Co.', 'Virgin Atlantic Airways']
MIT
pandas_exercise.ipynb
Tima1117/assignment2-template
6. Between which two cities were the most passengers flown? Make sure to account for both directions. (1.5 points)
def direction(row): origin = row['origin_city_name'] dest = row['dest_city_name'] if origin == dest: return 'none' elif origin > dest: origin, dest = dest, origin return ' - '.join([origin, dest]) df['direction'] = df[['origin_city_name', 'dest_city_name']].apply(direction, axis=1) top_d...
top route is 'Chicago, IL - New York, NY' with traffic of 4131579.0 passengers
MIT
pandas_exercise.ipynb
Tima1117/assignment2-template
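The key idea in the cell above is to build a direction key that is identical for both travel directions by putting the two city names in a fixed order. A self-contained sketch on toy data (the city-name columns match the notebook; the passenger counts are made up):

```python
import pandas as pd

# Toy stand-in for the real T100 table
toy = pd.DataFrame({
    'origin_city_name': ['Chicago, IL', 'New York, NY', 'Boston, MA'],
    'dest_city_name':   ['New York, NY', 'Chicago, IL', 'Chicago, IL'],
    'passengers':       [100, 150, 40],
})

# sorted() makes the key the same regardless of travel direction
toy['direction'] = toy.apply(
    lambda r: ' - '.join(sorted([r['origin_city_name'],
                                 r['dest_city_name']])), axis=1)

totals = toy.groupby('direction')['passengers'].sum()
print(totals.idxmax(), totals.max())  # Chicago, IL - New York, NY 250
```

Both Chicago→New York and New York→Chicago rows collapse into one `'Chicago, IL - New York, NY'` key, so the sum counts traffic in both directions.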
7. Find the top 3 carriers for the pair of cities found in 6 and calculate the percentage of passengers each accounted for. (2 points)The result should be a pandas dataframe object with two columns: 1. carrier name (string)2. percentage of passengers (float in the range of 0-100)
all_passengers_series = df[df['direction']==top_direction].groupby('unique_carrier_name')['passengers'].sum() all_passengers = all_passengers_series.sum() top_3_carriers_df = (df[df['...
_____no_output_____
MIT
pandas_exercise.ipynb
Tima1117/assignment2-template
8. Find the percentage of international travel per country using total passengers on class F flights. (3 points)
international_travel_per_country = ... # Place your code here instead of '...' international_travel_per_country
_____no_output_____
MIT
pandas_exercise.ipynb
Tima1117/assignment2-template
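One possible shape for the missing answer, shown on toy data: sum class F passengers per origin country, sum the international subset (origin ≠ destination), and divide. The `class` column name is an assumption based on the BTS T-100 schema (it is not visible in the truncated column list above), so treat this as a sketch, not the reference solution:

```python
import pandas as pd

# Toy stand-in for the real table; 'class' == 'F' marks the
# scheduled-passenger service class in the assumed schema
toy = pd.DataFrame({
    'origin_country_name': ['US', 'US', 'UK', 'UK'],
    'dest_country_name':   ['US', 'UK', 'UK', 'US'],
    'class':               ['F', 'F', 'F', 'F'],
    'passengers':          [100, 50, 80, 20],
})

f = toy[toy['class'] == 'F']
total = f.groupby('origin_country_name')['passengers'].sum()
intl = (f[f['origin_country_name'] != f['dest_country_name']]
        .groupby('origin_country_name')['passengers'].sum())
pct = (intl / total * 100).fillna(0)  # countries with no intl flights -> 0
print(pct)
```

With these numbers, the US flies 50 of 150 class F passengers internationally (≈33.3%) and the UK 20 of 100 (20%).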
Modeling and Simulation in PythonChapter 4Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
# Configure Jupyter so figures appear in the notebook %matplotlib inline # Configure Jupyter to display the assigned value after an assignment %config InteractiveShell.ast_node_interactivity='last_expr_or_assign' # import functions from the modsim library from modsim import *
_____no_output_____
MIT
code/chap04-Mine.ipynb
dlrow-olleh/ModSimPy
Returning values Here's a simple function that returns a value:
def add_five(x): return x + 5
_____no_output_____
MIT
code/chap04-Mine.ipynb
dlrow-olleh/ModSimPy
And here's how we call it.
y = add_five(3)
_____no_output_____
MIT
code/chap04-Mine.ipynb
dlrow-olleh/ModSimPy
If you run a function on the last line of a cell, Jupyter displays the result:
add_five(5)
_____no_output_____
MIT
code/chap04-Mine.ipynb
dlrow-olleh/ModSimPy
But that can be a bad habit, because usually if you call a function and don't assign the result to a variable, the result gets discarded.In the following example, Jupyter shows the second result, but the first result just disappears.
add_five(3) add_five(5)
_____no_output_____
MIT
code/chap04-Mine.ipynb
dlrow-olleh/ModSimPy
When you call a function that returns a value, it is generally a good idea to assign the result to a variable.
y1 = add_five(3) y2 = add_five(5) print(y1, y2)
8 10
MIT
code/chap04-Mine.ipynb
dlrow-olleh/ModSimPy
**Exercise:** Write a function called `make_state` that creates a `State` object with the state variables `olin=10` and `wellesley=2`, and then returns the new `State` object.Write a line of code that calls `make_state` and assigns the result to a variable named `init`.
def make_state(): bikeshare = State(olin=10, wellesley=2) return bikeshare init = make_state()
_____no_output_____
MIT
code/chap04-Mine.ipynb
dlrow-olleh/ModSimPy
Running simulations Here's the code from the previous notebook.
def step(state, p1, p2): """Simulate one minute of time. state: bikeshare State object p1: probability of an Olin->Wellesley customer arrival p2: probability of a Wellesley->Olin customer arrival """ if flip(p1): bike_to_wellesley(state) if flip(p2): bike_to_olin(st...
_____no_output_____
MIT
code/chap04-Mine.ipynb
dlrow-olleh/ModSimPy
Here's a modified version of `run_simulation` that creates a `State` object, runs the simulation, and returns the `State` object.
def run_simulation(p1, p2, num_steps): """Simulate the given number of time steps. p1: probability of an Olin->Wellesley customer arrival p2: probability of a Wellesley->Olin customer arrival num_steps: number of time steps """ state = State(olin=10, wellesley=2, olin_emp...
_____no_output_____
MIT
code/chap04-Mine.ipynb
dlrow-olleh/ModSimPy
Now `run_simulation` doesn't plot anything:
state = run_simulation(0.4, 0.2, 60)
_____no_output_____
MIT
code/chap04-Mine.ipynb
dlrow-olleh/ModSimPy
But after the simulation, we can read the metrics from the `State` object.
state.olin_empty
_____no_output_____
MIT
code/chap04-Mine.ipynb
dlrow-olleh/ModSimPy
Now we can run simulations with different values for the parameters. When `p1` is small, we probably don't run out of bikes at Olin.
state = run_simulation(0.2, 0.2, 60) state.olin_empty
_____no_output_____
MIT
code/chap04-Mine.ipynb
dlrow-olleh/ModSimPy
When `p1` is large, we probably do.
state = run_simulation(0.6, 0.2, 60) state.olin_empty
_____no_output_____
MIT
code/chap04-Mine.ipynb
dlrow-olleh/ModSimPy
More for loops `linspace` creates a NumPy array of equally spaced numbers.
p1_array = linspace(0, 1, 5)
_____no_output_____
MIT
code/chap04-Mine.ipynb
dlrow-olleh/ModSimPy
We can use an array in a `for` loop, like this:
for p1 in p1_array: print(p1)
0.0 0.25 0.5 0.75 1.0
MIT
code/chap04-Mine.ipynb
dlrow-olleh/ModSimPy
This will come in handy in the next section.`linspace` is defined in `modsim.py`. You can get the documentation using `help`.
help(linspace)
Help on function linspace in module modsim: linspace(start, stop, num=50, **options) Returns an array of evenly-spaced values in the interval [start, stop]. start: first value stop: last value num: number of values Also accepts the same keyword arguments as np.linspace. See https://d...
MIT
code/chap04-Mine.ipynb
dlrow-olleh/ModSimPy
`linspace` is based on a NumPy function with the same name. [Click here](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html) to read more about how to use it. **Exercise:** Use `linspace` to make an array of 10 equally spaced numbers from 1 to 10 (including both).
linspace(1,10,10)
_____no_output_____
MIT
code/chap04-Mine.ipynb
dlrow-olleh/ModSimPy
**Exercise:** The `modsim` library provides a related function called `linrange`. You can view the documentation by running the following cell:
help(linrange)
Help on function linrange in module modsim: linrange(start=0, stop=None, step=1, **options) Returns an array of evenly-spaced values in the interval [start, stop]. This function works best if the space between start and stop is divisible by step; otherwise the results might be surprising. By ...
MIT
code/chap04-Mine.ipynb
dlrow-olleh/ModSimPy
Use `linrange` to make an array of numbers from 1 to 11 with a step size of 2.
linrange(1,11,2)
_____no_output_____
MIT
code/chap04-Mine.ipynb
dlrow-olleh/ModSimPy
Sweeping parameters `p1_array` contains a range of values for `p1`.
p2 = 0.2 num_steps = 60 p1_array = linspace(0, 1, 11)
_____no_output_____
MIT
code/chap04-Mine.ipynb
dlrow-olleh/ModSimPy
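Putting the pieces together, a parameter sweep just loops over `p1_array`, runs the simulation for each value, and records the metric. The sketch below stands in for the `modsim` helpers with plain Python/NumPy, and its `run_simulation` is a simplified version of the notebook's (it returns `olin_empty` directly instead of a `State` object):

```python
import numpy as np

def flip(p):
    # True with probability p
    return np.random.random() < p

def run_simulation(p1, p2, num_steps):
    # Simplified stand-in for the notebook's State-based version:
    # returns the number of unhappy customers at Olin
    olin, wellesley, olin_empty = 10, 2, 0
    for _ in range(num_steps):
        if flip(p1):
            if olin == 0:
                olin_empty += 1
            else:
                olin -= 1
                wellesley += 1
        if flip(p2) and wellesley > 0:
            wellesley -= 1
            olin += 1
    return olin_empty

p2, num_steps = 0.2, 60
p1_array = np.linspace(0, 1, 11)
metrics = [run_simulation(p1, p2, num_steps) for p1 in p1_array]
print(metrics)  # olin_empty tends to grow as p1 grows
```

Each element of `metrics` corresponds to one value of `p1`, which is exactly the data you would plot to see how the chance of running out of bikes depends on the arrival rate.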