In-Class Coding Lab: Lists

The goals of this lab are to help you understand:

- List indexing and slicing
- List methods such as insert, append, find, delete
- How to iterate over lists with loops

Python Lists Work Like Real-Life Lists

In real life, we make lists all the time. To-Do lists. Shopping lists. Reading lists...
shopping_list = ['Milk', 'Eggs', 'Bread', 'Beer']
item_count = len(shopping_list)
print("List: %s has %d items" % (shopping_list, item_count))
List: ['Milk', 'Eggs', 'Bread', 'Beer'] has 4 items
MIT
content/lessons/09/Class-Coding-Lab/CCL-Lists.ipynb
jvrecca-su/ist256project
Enumerating Your List Items

In real life, we *enumerate* lists all the time. We go through the items on our list one at a time and make a decision, for example: "Did I add that to my shopping cart yet?" In Python we go through the items in our lists with the `for` loop. We use `for` because the number of items is pre-determ...
for item in shopping_list:
    print("I need to buy some %s " % (item))
I need to buy some Milk
I need to buy some Eggs
I need to buy some Bread
I need to buy some Beer
Now You Try It!

Write code in the space below to print each stock on its own line.
stocks = ['IBM', 'AAPL', 'GOOG', 'MSFT', 'TWTR', 'FB']
#TODO: Write code here
for item in stocks:
    print(item)
IBM
AAPL
GOOG
MSFT
TWTR
FB
Indexing Lists

Sometimes we refer to our items by their place in the list. For example, "Milk is the first item on the list" or "Beer is the last item on the list." We can also do this in Python, and it is called *indexing* the list. **IMPORTANT** The first item in a Python list is at index **0**.
print("The first item in the list is:", shopping_list[0])
print("The last item in the list is:", shopping_list[3])
print("This is also the last item in the list:", shopping_list[-1])
print("This is the second to last item in the list:", shopping_list[-2])
The first item in the list is: Milk
The last item in the list is: Beer
This is also the last item in the list: Beer
This is the second to last item in the list: Bread
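The lab goals also mention slicing, which this section's indexing examples don't cover. As a short sketch (the list contents are taken from the lab), a slice `list[start:stop]` returns a new list from `start` up to, but not including, `stop`:

```python
shopping_list = ['Milk', 'Eggs', 'Bread', 'Beer']

# A slice returns a new list; the stop index is excluded.
first_two = shopping_list[0:2]   # ['Milk', 'Eggs']
last_two = shopping_list[-2:]    # ['Bread', 'Beer']
everything = shopping_list[:]    # shallow copy of the whole list

print(first_two, last_two, everything)
```

Note that slicing never raises an `IndexError`: out-of-range bounds are simply clipped to the list's length.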
For Loop with Index

You can also loop through your Python list using an index. In this case we use the `range()` function to determine how many times we should loop:
for i in range(len(shopping_list)):
    print("I need to buy some %s " % (shopping_list[i]))
I need to buy some Milk
I need to buy some Eggs
I need to buy some Bread
I need to buy some Beer
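When you need both the index and the item, Python's built-in `enumerate()` avoids the manual `range(len(...))` pattern. A small sketch (list contents taken from the lab):

```python
shopping_list = ['Milk', 'Eggs', 'Bread', 'Beer']

# enumerate() yields (index, item) pairs, starting at 0 by default.
lines = []
for i, item in enumerate(shopping_list):
    lines.append("%d: %s" % (i, item))

print("\n".join(lines))
```

`enumerate(shopping_list, start=1)` would number the items from 1 instead.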
Now You Try It!

Write code to print the 2nd and 4th stocks in the list variable `stocks`. For example: `AAPL MSFT`
#TODO: Write code here
print("This is the second stock in the list:", stocks[1])
print("This is the fourth stock in the list:", stocks[3])
This is the second stock in the list: AAPL
This is the fourth stock in the list: MSFT
Lists are Mutable

Unlike strings, lists are mutable. This means we can change a value in the list. For example, I want `'Craft Beer'`, not just `'Beer'`:
print(shopping_list)
shopping_list[-1] = 'Craft Beer'
print(shopping_list)
['Milk', 'Eggs', 'Bread', 'Beer']
['Milk', 'Eggs', 'Bread', 'Craft Beer']
List Methods

In your readings and class lecture, you encountered some list methods. These allow us to manipulate the list by adding or removing items.
print("Shopping List: %s" %(shopping_list))
print("Adding 'Cheese' to the end of the list...")
shopping_list.append('Cheese') # add to end of list
print("Shopping List: %s" %(shopping_list))
print("Adding 'Cereal' to position 0 in the list...")
shopping_list.insert(0,'Cereal') # add to the beginning of the list (pos...
Shopping List: ['Milk', 'Eggs', 'Bread', 'Craft Beer']
Adding 'Cheese' to the end of the list...
Shopping List: ['Milk', 'Eggs', 'Bread', 'Craft Beer', 'Cheese']
Adding 'Cereal' to position 0 in the list...
Shopping List: ['Cereal', 'Milk', 'Eggs', 'Bread', 'Craft Beer', 'Cheese']
Removing 'Cheese' from the list...
Sh...
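Beyond `append()` and `insert()`, lists offer several ways to delete items. A short sketch of the usual options (the starting list mirrors the cell above):

```python
shopping_list = ['Cereal', 'Milk', 'Eggs', 'Bread', 'Craft Beer', 'Cheese']

shopping_list.remove('Cheese')  # remove by value (first match); raises ValueError if absent
last = shopping_list.pop()      # remove and return the last item ('Craft Beer')
first = shopping_list.pop(0)    # remove and return the item at index 0 ('Cereal')
del shopping_list[0]            # delete by index without returning the value ('Milk')

print(shopping_list, last, first)
```

Use `remove()` when you know the value, `pop()` when you need the removed value back, and `del` when you only know the position.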
Now You Try It!

Write a program to remove the following stocks: `IBM` and `TWTR`. Then add this stock to the end: `NFLX`, and this stock to the beginning: `TSLA`. Print your list when you are done. It should look like this: `['TSLA', 'AAPL', 'GOOG', 'MSFT', 'FB', 'NFLX']`
# TODO: Write Code here
print("Stocks: %s" % (stocks))
print('Removing Stocks: IBM, TWTR')
stocks.remove('IBM')
stocks.remove('TWTR')
print("Stocks: %s" % (stocks))
print('Adding Stock to End:NFLX')
stocks.append('NFLX')
print("Stocks: %s" % (stocks))
print('Adding Stock to Beginning:TSLA')
stocks.insert(0, 'TSLA...
Stocks: ['IBM', 'AAPL', 'GOOG', 'MSFT', 'TWTR', 'FB']
Removing Stocks: IBM, TWTR
Stocks: ['AAPL', 'GOOG', 'MSFT', 'FB']
Adding Stock to End:NFLX
Stocks: ['AAPL', 'GOOG', 'MSFT', 'FB', 'NFLX']
Adding Stock to Beginning:TSLA
Final Stocks: ['TSLA', 'AAPL', 'GOOG', 'MSFT', 'FB', 'NFLX']
Sorting

Since lists are mutable, you can use the `sort()` method to re-arrange the items in the list alphabetically (or numerically, if it's a list of numbers).
print("Before Sort:", shopping_list)
shopping_list.sort()
print("After Sort:", shopping_list)
Before Sort: ['Milk', 'Eggs', 'Bread', 'Craft Beer']
After Sort: ['Bread', 'Craft Beer', 'Eggs', 'Milk']
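One caveat worth knowing: `sort()` rearranges the list in place and returns `None`. When you want to keep the original order, the built-in `sorted()` returns a new sorted list instead. A short sketch:

```python
shopping_list = ['Milk', 'Eggs', 'Bread', 'Craft Beer']

in_order = sorted(shopping_list)                  # new list; original untouched
reversed_order = sorted(shopping_list, reverse=True)  # descending order

print(in_order)
print(reversed_order)
print(shopping_list)  # still in its original order
```

Both `sort()` and `sorted()` also accept `reverse=True`, so `shopping_list.sort(reverse=True)` would sort in place, descending.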
Putting it all together

Winning lotto numbers. When the lotto numbers are drawn they are in any order; when they are presented they're always sorted. Let's write a program to input 5 numbers, then output them sorted:

```
1. for i in range(5)
2. input a number
3. append the number you input to the lotto_numbers list
4. ...
```
## TODO: Write program here:
lotto_numbers = [] # start with an empty list
for i in range(5):
    number = int(input("Enter a number: "))
    lotto_numbers.append(number)
lotto_numbers.sort()
print("Today's winning lotto numbers are", lotto_numbers)
Enter a number: 12
Enter a number: 15
Enter a number: 22
Enter a number: 9
Enter a number: 4
Today's winning lotto numbers are [4, 9, 12, 15, 22]
Composing a pipeline from reusable, pre-built, and lightweight components

This tutorial describes how to build a Kubeflow pipeline from reusable, pre-built, and lightweight components. The following provides a summary of the steps involved in creating and using a reusable component:

- Write the program that contains you...
import kfp
import kfp.gcp as gcp
import kfp.dsl as dsl
import kfp.compiler as compiler
import kfp.components as comp
import datetime
import kubernetes as k8s

# Required Parameters
PROJECT_ID='<ADD GCP PROJECT HERE>'
GCS_BUCKET='gs://<ADD STORAGE LOCATION HERE>'
Apache-2.0
samples/tutorials/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
eedorenko/pipelines
Create client

If you run this notebook **outside** of a Kubeflow cluster, run the following command:

- `host`: The URL of your Kubeflow Pipelines instance, for example "https://``.endpoints.``.cloud.goog/pipeline"
- `client_id`: The client ID used by Identity-Aware Proxy
- `other_client_id`: The client ID used to obtain t...
# Optional Parameters, but required for running outside Kubeflow cluster
# The host for 'AI Platform Pipelines' ends with 'pipelines.googleusercontent.com'
# The host for pipeline endpoint of 'full Kubeflow deployment' ends with '/pipeline'
# Examples are:
# https://7c021d0340d296aa-dot-us-central2.pipelines.googleuse...
Build reusable components

Writing the program code

The following cell creates a file `app.py` that contains a Python script. The script downloads the MNIST dataset, trains a neural-network-based classification model, writes the training log, and exports the trained model to Google Cloud Storage. Your component can create o...
%%bash
# Create folders if they don't exist.
mkdir -p tmp/reuse_components_pipeline/mnist_training

# Create the Python file for MNIST training.
cat > ./tmp/reuse_components_pipeline/mnist_training/app.py <<HERE
import argparse
from datetime import datetime
import tensorflow as tf

parser = argparse.ArgumentParser()...
Create a Docker container

Create your own container image that includes your program.

Creating a Dockerfile

Now create a container that runs the script. Start by creating a Dockerfile. A Dockerfile contains the instructions to assemble a Docker image. The `FROM` statement specifies the Base Image from which you are b...
%%bash
# Create Dockerfile.
# AI Platform only supports TensorFlow 1.14.
cat > ./tmp/reuse_components_pipeline/mnist_training/Dockerfile <<EOF
FROM tensorflow/tensorflow:1.14.0-py3
WORKDIR /app
COPY . /app
EOF
Build docker image

Now that we have created our Dockerfile, we need to build the image and push it to a registry that will host it. There are three possible options:

- Use the `kfp.containers.build_image_from_working_dir` to build the image and push to the Container Registry (GCR). This ...
IMAGE_NAME="mnist_training_kf_pipeline"
TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)"

GCR_IMAGE="gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{TAG}".format(
    PROJECT_ID=PROJECT_ID,
    IMAGE_NAME=IMAGE_NAME,
    TAG=TAG
)

APP_FOLDER='./tmp/reuse_components_pipeline/mnist_training/'

# In the following, for the purpose of demonstra...
If you want to use docker to build the image

Run the following in a cell:

```bash
%%bash -s "{PROJECT_ID}"

IMAGE_NAME="mnist_training_kf_pipeline"
TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)"

# Create script to build docker image and push it.
cat > ./tmp/components/mnist_training/build_image.sh <<HERE
PROJECT_ID="${1}"
IMAGE_NAME="$...
```
image_name = GCR_IMAGE
Writing your component definition file

To create a component from your containerized program, you must write a component specification in YAML that describes the component for the Kubeflow Pipelines system. For the complete definition of a Kubeflow Pipelines component, see the [component specification](https://www.kubef...
%%bash -s "{image_name}"
GCR_IMAGE="${1}"
echo ${GCR_IMAGE}

# Create Yaml
# the image uri should be changed according to the above docker image push output
cat > mnist_pipeline_component.yaml <<HERE
name: Mnist training
description: Train a mnist model and save to GCS
inputs:
- name: model_path
  description: 'P...
Define deployment operation on AI Platform
mlengine_deploy_op = comp.load_component_from_url(
    'https://raw.githubusercontent.com/kubeflow/pipelines/2df775a28045bda15372d6dd4644f71dcfe41bfe/components/gcp/ml_engine/deploy/component.yaml')

def deploy(
    project_id,
    model_uri,
    model_id,
    runtime_version,
    python_version):
    return mleng...
Kubeflow serving deployment component as an option. **Note that the deployed Endpoint URI is not available as an output of this component.**

```python
kubeflow_deploy_op = comp.load_component_from_url(
    'https://raw.githubusercontent.com/kubeflow/pipelines/2df775a28045bda15372d6dd4644f71dcfe41bfe/components/gcp/ml_engine...
```
def deployment_test(project_id: str, model_name: str, version: str) -> str:
    model_name = model_name.split("/")[-1]
    version = version.split("/")[-1]

    import googleapiclient.discovery

    def predict(project, model, data, version=None):
        """Run predictions on a list of instances.

        Args:
        ...
Create your workflow as a Python function

Define your pipeline as a Python function. `@kfp.dsl.pipeline` is a required decorator, and must include `name` and `description` properties. Then compile the pipeline function. After the compilation is completed, a pipeline file is created.
# Define the pipeline
@dsl.pipeline(
    name='Mnist pipeline',
    description='A toy pipeline that performs mnist model training.'
)
def mnist_reuse_component_deploy_pipeline(
    project_id: str = PROJECT_ID,
    model_path: str = 'mnist_model',
    bucket: str = GCS_BUCKET
):
    train_task = mnist_train_op(
    ...
Submit a pipeline run
pipeline_func = mnist_reuse_component_deploy_pipeline
experiment_name = 'mnist_kubeflow'

arguments = {"model_path": "mnist_model",
             "bucket": GCS_BUCKET}

run_name = pipeline_func.__name__ + ' run'

# Submit pipeline directly from pipeline function
run_result = client.create_run_from_pipeline_func(pipeline_...
Notes for Using Python for Research (HarvardX PH526x)

Part 4: Randomness and Time

1. Simulating Randomness
2. Examples Involving Randomness
3. Using the NumPy Random Module
4. Measuring Time
5. Random Walks (RW)

$$x(t=k) = x(t=0) + \Delta x(t=1) + \ldots + \Delta x(t=k)$$

Code for the Simulating Randomness section
import random

random.choice(["H","T"])
random.choice([0, 1])
random.choice([1,2,3,4,5,6])
random.choice(range(1, 7))
random.choice([range(1,7)])
random.choice(random.choice([range(1, 7), range(1, 9), range(1, 11)]))
MulanPSL-1.0
python_notes/using_python_for_research_ph256x_harvard/week2/4randomness_n_time.ipynb
ZaynChen/notes
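Note the subtle difference in the cell above: `random.choice(range(1, 7))` picks a number from 1 to 6, while `random.choice([range(1, 7)])` picks from a one-element list and therefore returns the `range` object itself. A seeded sketch of the distinction:

```python
import random

random.seed(0)  # seed only for reproducibility

number = random.choice(range(1, 7))       # an int between 1 and 6
container = random.choice([range(1, 7)])  # the list's only element: the range object

print(type(number).__name__, type(container).__name__)
```

This is why the last line of the cell above nests two `choice` calls: the outer one first selects which `range` to roll from, the inner one then rolls it.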
Code for the Examples Involving Randomness section
import random
import matplotlib.pyplot as plt
import numpy as np

rolls = []
for k in range(100000):
    rolls.append(random.choice([1,2,3,4,5,6]))
plt.hist(rolls, bins = np.linspace(0.5, 6.5, 7));

ys = []
for rep in range(100000):
    y = 0
    for k in range(10):
        x = random.choice([1,2,3,4,5,6])
        y = y...
Code for the Using the NumPy Random Module section
import numpy as np

np.random.random()
np.random.random(5)
np.random.random((5, 3))
np.random.normal(0, 1)
np.random.normal(0, 1, 5)
np.random.normal(0, 1, (2, 5))

import matplotlib.pyplot as plt
X = np.random.randint(1, 7, (100000, 10))
Y = np.sum(X, axis=1)
plt.hist(Y);
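The calls above use NumPy's legacy global RNG; newer NumPy code typically uses `np.random.default_rng()`, which returns a `Generator` with equivalent methods (`random`, `normal`, `integers`). A sketch of the same dice experiment with the newer API:

```python
import numpy as np

rng = np.random.default_rng(42)  # seeded Generator for reproducibility

# 100,000 rows of 10 dice each; integers() excludes the upper bound by default
X = rng.integers(1, 7, size=(100_000, 10))
Y = X.sum(axis=1)

print(X.shape, Y.min(), Y.max())
```

Unlike the global functions, a `Generator` carries its own state, so seeding one generator cannot perturb unrelated code.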
Code for the Measuring Time section
import time
import random
import numpy as np

start_time = time.time()
ys = []
for rep in range(1000000):
    y = 0
    for k in range(10):
        x = random.choice([1,2,3,4,5,6])
        y = y + x
    ys.append(y)
end_time = time.time()
print(end_time - start_time)

start_time = time.time()
X = np.random.randint(1, 7, (10...
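One caveat about the timing pattern above: `time.time()` measures wall-clock time and can jump if the system clock is adjusted. For timing code, `time.perf_counter()` is the usual choice, since it is monotonic and has the highest available resolution. A small sketch:

```python
import time

start = time.perf_counter()
total = sum(i * i for i in range(100_000))  # arbitrary workload to time
elapsed = time.perf_counter() - start

print("elapsed seconds:", elapsed)
```

For micro-benchmarks, the standard-library `timeit` module goes further by repeating the statement many times and disabling garbage collection.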
Code for the Random Walks section
import numpy as np
import matplotlib.pyplot as plt

delta_X = np.random.normal(0,1,(2,5))
plt.plot(delta_X[0], delta_X[1], "go")
X = np.cumsum(delta_X, axis=1)
X

X_0 = np.array([[0], [0]])
delta_X = np.random.normal(0, 1, (2, 100))
X = np.concatenate((X_0, np.cumsum(delta_X, axis=1)), axis=1)
plt.plot(X[0], X[1], "ro-")...
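The displacement equation from the section header can be checked numerically without plotting: the cumulative sums of the steps are the positions, so the final position must equal the starting point plus the sum of all steps. A seeded sketch (shapes follow the cell above):

```python
import numpy as np

rng = np.random.default_rng(0)

X_0 = np.zeros((2, 1))                # start at the origin
delta_X = rng.normal(0, 1, (2, 100))  # 100 random steps in 2D
X = np.concatenate((X_0, np.cumsum(delta_X, axis=1)), axis=1)

# final position equals the sum of all steps (starting point is zero)
print(np.allclose(X[:, -1], delta_X.sum(axis=1)))  # prints True
```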
Mask R-CNN Demo

A quick intro to using the pre-trained model to detect and segment objects.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
import sys
import random
import math
import numpy as np
import skimage.io
import matplotlib
import matplotlib.pyplot as plt

# Root directory of the project
ROOT_DIR = os.path.abspath("../")

# Import Mask RCNN
sys.path.append(ROOT_DIR)  # To find local version of the...
Using TensorFlow backend.
MIT
samples/demo.ipynb
jaewon-jun9/Mask_RCNN
Configurations

We'll be using a model trained on the MS-COCO dataset. The configurations of this model are in the ```CocoConfig``` class in ```coco.py```. For inference, modify the configurations a bit to fit the task. To do so, sub-class the ```CocoConfig``` class and override the attributes you need to change.
class InferenceConfig(coco.CocoConfig):
    # Set batch size to 1 since we'll be running inference on
    # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

config = InferenceConfig()
config.display()
Configurations:
BACKBONE                       resnet101
BACKBONE_STRIDES               [4, 8, 16, 32, 64]
BATCH_SIZE                     1
BBOX_STD_DEV                   [0.1 0.1 0.2 0.2]
COMPUTE_BACKBONE_SHAPE         None
DETECTION_MAX_INSTANCES        100
DETECTION_MIN_CONFIDENCE       0.7
DETECTION_NMS_THRESHOLD        ...
Create Model and Load Trained Weights
# Create model object in inference mode.
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)

# Load weights trained on MS-COCO
model.load_weights(COCO_MODEL_PATH, by_name=True)
Class Names

The model classifies objects and returns class IDs, which are integer values that identify each class. Some datasets assign integer values to their classes and some don't. For example, in the MS-COCO dataset, the 'person' class is 1 and 'teddy bear' is 88. The IDs are often sequential, but not always. The CO...
# COCO Class names
# Index of the class in the list is its ID. For example, to get ID of
# the teddy bear class, use: class_names.index('teddy bear')
class_names = ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane',
               'bus', 'train', 'truck', 'boat', 'traffic light', 'fire hydrant',...
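Because the model only returns numeric class IDs, the `class_names` list acts as the ID-to-name lookup table: an item's position in the list is its ID. A minimal sketch with the first few COCO names from the cell above:

```python
# First few entries of the COCO class list ('BG' is background, ID 0)
class_names = ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane']

person_id = class_names.index('person')  # name -> ID
name_for_3 = class_names[3]              # ID -> name

print(person_id, name_for_3)
```

This is why the list above starts with `'BG'`: it pads index 0 so that 'person' lands at ID 1, matching the model's output.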
Run Object Detection
# Load a random image from the images folder
file_names = next(os.walk(IMAGE_DIR))[2]
image = skimage.io.imread(os.path.join(IMAGE_DIR, random.choice(file_names)))

# Run detection
results = model.detect([image], verbose=1)

# Visualize results
r = results[0]
visualize.display_instances(image, r['rois'], r['masks'], r['c...
Classes and subclasses In this notebook, I will show you the basics of classes and subclasses in Python. As you've seen in the lectures from this week, `Trax` uses layer classes as building blocks for deep learning models, so it is important to understand how classes and subclasses behave in order to be able to build ...
class My_Class: # Definition of My_Class
    x = None
MIT
NLP/Sequences/1/NLP_C3_W1_lecture_nb_02_classes.ipynb
verneh/DataSci
`My_Class` has one parameter `x` without any value. You can think of parameters as the variables that every object assigned to a class will have. So, at this point, any object of class `My_Class` would have a variable `x` equal to `None`. To check this, I'll create two instances of that class and get the value of `x`...
instance_a = My_Class() # To create an instance from class "My_Class" you have to call "My_Class"
instance_b = My_Class()
print('Parameter x of instance_a: ' + str(instance_a.x)) # To get a parameter 'x' from an instance 'a', write 'a.x'
print('Parameter x of instance_b: ' + str(instance_b.x))
Parameter x of instance_a: None
Parameter x of instance_b: None
For an existing instance you can assign new values for any of its parameters. In the next cell, assign a value of `5` to the parameter `x` of `instance_a`.
### START CODE HERE (1 line) ###
instance_a.x = 5
### END CODE HERE ###
print('Parameter x of instance_a: ' + str(instance_a.x))
Parameter x of instance_a: 5
1.1 The `__init__` method

When you want to assign values to the parameters of your class when an instance is created, it is necessary to define a special method: `__init__`. The `__init__` method is called when you create an instance of a class. It can have multiple arguments to initialize the parameters of your inst...
class My_Class:
    def __init__(self, y): # The __init__ method takes as input the instance to be initialized and a variable y
        self.x = y        # Sets parameter x to be equal to y
In this case, the parameter `x` of an instance from `My_Class` would take the value of an argument `y`. The argument `self` is used to pass information from the instance being created to the method `__init__`. In the next cell, create an instance `instance_c`, with `x` equal to `10`.
### START CODE HERE (1 line) ###
instance_c = My_Class(10)
### END CODE HERE ###
print('Parameter x of instance_c: ' + str(instance_c.x))
Parameter x of instance_c: 10
Note that in this case, you had to pass the argument `y` from the `__init__` method to create an instance of `My_Class`. 1.2 The `__call__` method Another important method is the `__call__` method. It is performed whenever you call an initialized instance of a class. It can have multiple arguments and you can define i...
class My_Class:
    def __init__(self, y): # The __init__ method takes as input the instance to be initialized and a variable y
        self.x = y        # Sets parameter x to be equal to y
    def __call__(self, z): # __call__ method with self and z as arguments
        self.x += z       # Adds z to parameter x whe...
Let’s create `instance_d` with `x` equal to 5.
instance_d = My_Class(5)
And now, see what happens when `instance_d` is called with argument `10`.
instance_d(10)
15
Now, you are ready to complete the following cell so any instance from `My_Class`:- Is initialized taking two arguments `y` and `z` and assigns them to `x_1` and `x_2`, respectively. And, - When called, takes the values of the parameters `x_1` and `x_2`, sums them, prints and returns the result.
class My_Class:
    def __init__(self, y, z): # Initialization of x_1 and x_2 with arguments y and z
        ### START CODE HERE (2 lines) ###
        self.x_1 = y
        self.x_2 = z
        ### END CODE HERE ###
    def __call__(self): # When called, adds the values of parameters x_1 and x_2, prints and return...
Run the next cell to check your implementation. If everything is correct, you shouldn't get any errors.
instance_e = My_Class(10,15)

def test_class_definition():
    assert instance_e.x_1 == 10, "Check the value assigned to x_1"
    assert instance_e.x_2 == 15, "Check the value assigned to x_2"
    assert instance_e() == 25, "Check the __call__ method"
    print("\033[92mAll tests passed!")

test_class_defi...
Addition of 10 and 15 is 25
All tests passed!
1.3 Custom methods In addition to the `__init__` and `__call__` methods, your classes can have custom-built methods to do whatever you want when called. To define a custom method, you have to indicate its input arguments, the instructions that you want it to perform and the values to return (if any). In the next cell,...
class My_Class:
    def __init__(self, y, z): # Initialization of x_1 and x_2 with arguments y and z
        self.x_1 = y
        self.x_2 = z
    def __call__(self): # Performs an operation with x_1 and x_2, and returns the result
        a = self.x_1 - 2*self.x_2
        return a
    def my_method(self, w): #...
Create an instance `instance_f` of `My_Class` with any integer values that you want for `x_1` and `x_2`. For that instance, see the result of calling `my_method` with an argument `w` equal to `16`.
### START CODE HERE (1 line) ###
instance_f = My_Class(1,10)
### END CODE HERE ###
print("Output of my_method:",instance_f.my_method(16))
Output of my_method: 26
As you can corroborate in the previous cell, to call a custom method `m`, with arguments `args`, for an instance `i` you must write `i.m(args)`. With that in mind, methods can call others within a class. In the following cell, try to define `new_method` which calls `my_method` with `v` as input argument. Try to do this...
class My_Class:
    def __init__(self, y, z): # Initialization of x_1 and x_2 with arguments y and z
        self.x_1 = None
        self.x_2 = None
    def __call__(self): # Performs an operation with x_1 and x_2, and returns the result
        a = None
        return a
    def my_method(self, w)...
SPOILER ALERT Solution:
# hidden-cell
class My_Class:
    def __init__(self, y, z): # Initialization of x_1 and x_2 with arguments y and z
        self.x_1 = y
        self.x_2 = z
    def __call__(self): # Performs an operation with x_1 and x_2, and returns the result
        a = self.x_1 - 2*self.x_2
        return a
    def...
Output of my_method: 26
Output of new_method: 26
Part 2: Subclasses and Inheritance `Trax` uses classes and subclasses to define layers. The base class in `Trax` is `layer`, which means that every layer from a deep learning model is defined as a subclass of the `layer` class. In this part of the notebook, you are going to see how subclasses work. To define a subclas...
class sub_c(My_Class): # Subclass sub_c from My_Class
    def additional_method(self): # Prints the value of parameter x_1
        print(self.x_1)
2.1 Inheritance

When you define a subclass `sub`, every method and parameter is inherited from the `super` class, including the `__init__` and `__call__` methods. This means that any instance of `sub` can use the methods defined in `super`. Run the following cell and see for yourself.
instance_sub_a = sub_c(1,10)
print('Parameter x_1 of instance_sub_a: ' + str(instance_sub_a.x_1))
print('Parameter x_2 of instance_sub_a: ' + str(instance_sub_a.x_2))
print("Output of my_method of instance_sub_a:",instance_sub_a.my_method(16))
Parameter x_1 of instance_sub_a: 1
Parameter x_2 of instance_sub_a: 10
Output of my_method of instance_sub_a: 26
As you can see, `sub_c` does not have an initialization method `__init__`; it is inherited from `My_Class`. However, you can overwrite any method you want by defining it again in the subclass. For instance, in the next cell define a class `sub_c` with a redefined `my_method` that multiplies `x_1` and `x_2` but does not...
class sub_c(My_Class): # Subclass sub_c from My_Class
    def my_method(self): # Multiplies x_1 and x_2 and returns the result
        ### START CODE HERE (1 line) ###
        b = self.x_1*self.x_2
        ### END CODE HERE ###
        return b
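You can also override `__init__` in a subclass and still reuse the parent's initializer via `super()`. A short sketch extending the notebook's `My_Class` pattern (the extra `label` parameter is purely illustrative):

```python
class My_Class:
    def __init__(self, y, z):  # parent initializer, as in the notebook
        self.x_1 = y
        self.x_2 = z

class sub_c(My_Class):
    def __init__(self, y, z, label):
        super().__init__(y, z)  # run the parent __init__ first
        self.label = label      # then add subclass-specific state

instance = sub_c(1, 10, "demo")
print(instance.x_1, instance.x_2, instance.label)
```

Calling `super().__init__` keeps the parent's setup logic in one place, so a change to `My_Class` automatically propagates to every subclass.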
To check your implementation run the following cell.
test = sub_c(3,10)
assert test.my_method() == 30, "The method my_method should return the product between x_1 and x_2"

print("Output of overridden my_method of test:",test.my_method()) # notice we didn't pass any parameter to call my_method
#print("Output of overridden my_method of test:",test.my_method(16)) # try to se...
Output of overridden my_method of test: 30
In the next cell, two instances are created, one of `My_Class` and another one of `sub_c`. The instances are initialized with equal `x_1` and `x_2` parameters.
y,z = 1,10
instance_sub_a = sub_c(y,z)
instance_a = My_Class(y,z)
print('My_method for an instance of sub_c returns: ' + str(instance_sub_a.my_method()))
print('My_method for an instance of My_Class returns: ' + str(instance_a.my_method(10)))
My_method for an instance of sub_c returns: 10
My_method for an instance of My_Class returns: 20
Interactive Data Exploration

This notebook demonstrates how the functions and techniques we covered in the first notebook can be combined to build interactive data exploration tools. The code in the cells below will generate two interactive panels. The first panel enables comparison of LIS output, SNODAS, and SNOTE...
import numpy as np
import pandas as pd
import geopandas
import xarray as xr
import fsspec
import s3fs
from datetime import datetime as dt
from scipy.spatial import distance

import holoviews as hv, geoviews as gv
from geoviews import opts
from geoviews import tile_sources as gvts
from datashader.colors import viridis ...
MIT
book/tutorials/lis/2_interactive_data_exploration.ipynb
zachghiaccio/website
Load Data

SNOTEL Sites info
# create dictionary linking state names and abbreviations
snotel = {"AZ" : "arizona",
          "CO" : "colorado",
          "ID" : "idaho",
          "MT" : "montana",
          "NM" : "newmexico",
          "UT" : "utah",
          "WY" : "wyoming"}

# load SNOTEL site metadata for sites in the given state
def load_s...
SNOTEL Depth & SWE
def load_snotel_txt(state, var):
    # define path to file
    key = f"SNOTEL/snotel_{state}{var}_20162020.txt"

    # open text file
    fh = s3.open(f"{bucket}/{key}")

    # read each line and note those that begin with '#'
    lines = fh.readlines()
    skips = sum(1 for ln in lines if ln.decode('ascii').s...
SNODAS Depth & SWE

Like the LIS output we have been working with, a sample of SNODAS data is available on our S3 bucket in Zarr format. We can therefore load the SNODAS data just as we load the LIS data.
# load snodas depth data
key = "SNODAS/snodas_snowdepth_20161001_20200930.zarr"
snodas_depth = xr.open_zarr(s3.get_mapper(f"{bucket}/{key}"), consolidated=True)

# load snodas swe data
key = "SNODAS/snodas_swe_20161001_20200930.zarr"
snodas_swe = xr.open_zarr(s3.get_mapper(f"{bucket}/{key}"), consolidated=True)
LIS Outputs

Next we'll load the LIS outputs. First, we'll define the helper function we saw in the previous notebook that adds `lat` and `lon` as coordinate variables. We'll use this immediately upon loading the data.
def add_latlon_coords(dataset: xr.Dataset) -> xr.Dataset:
    """Adds lat/lon as dimensions and coordinates to an xarray.Dataset object."""

    # get attributes from dataset
    attrs = dataset.attrs

    # get x, y resolutions
    dx = round(float(attrs['DX']), 3)
    dy = round(float(attrs['DY']), 3)
    ...
Load the LIS data and apply `add_latlon_coords()`:
# LIS surfacemodel DA_10km key = "DA_SNODAS/SURFACEMODEL/LIS_HIST.d01.zarr" lis_sf = xr.open_zarr(s3.get_mapper(f"{bucket}/{key}"), consolidated=True) # (optional for 10km simulation?) lis_sf = add_latlon_coords(lis_sf) # drop off irrelevant variables drop_vars = ['_history', '_eis_source_path', 'orig_lat', 'orig_lo...
_____no_output_____
MIT
book/tutorials/lis/2_interactive_data_exploration.ipynb
zachghiaccio/website
Working with the full LIS output dataset can be slow and consume lots of memory. Here we temporally subset the data to a shorter window of time. The full dataset contains daily values from 10/1/2016 to 9/30/2018. Feel free to explore the full dataset by modifying the `time_range` variable below and re-running all cells...
# subset LIS data to a shorter window (seven months; the full dataset spans two water years)
time_range = slice('2016-10-01', '2017-04-30')
lis_sf = lis_sf.sel(time=time_range)
_____no_output_____
MIT
book/tutorials/lis/2_interactive_data_exploration.ipynb
zachghiaccio/website
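xarray's `.sel(time=slice(...))` follows the same label-based, end-inclusive slicing convention as pandas. A minimal sketch of the idea using a plain pandas Series as a stand-in for the LIS time dimension (not the actual dataset):

```python
import pandas as pd

# Toy daily series standing in for the LIS time dimension
idx = pd.date_range('2016-10-01', '2018-09-30', freq='D')
s = pd.Series(range(len(idx)), index=idx)

# Label-based slicing on a DatetimeIndex is inclusive on both ends,
# mirroring xarray's .sel(time=slice('2016-10-01', '2017-04-30'))
subset = s.loc['2016-10-01':'2017-04-30']
print(len(subset))  # 212 days
```

Because both endpoints are included, the subset covers October through April inclusive.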
In the next cell, we extract the data variable names and timesteps from the LIS outputs. These will be used to define the widget options.
# gather metadata from LIS

# get variable names (strings)
vnames = list(lis_sf.data_vars)
print(vnames)

# get time stamps (strings)
tstamps = list(np.datetime_as_string(lis_sf.time.values, 'D'))
print(len(tstamps), tstamps[0], tstamps[-1])
_____no_output_____
MIT
book/tutorials/lis/2_interactive_data_exploration.ipynb
zachghiaccio/website
By default, the `holoviews` plotting library automatically adjusts the range of plot colorbars based on the range of values in the data being plotted. This may not be ideal when comparing data on different timesteps. In the next cell we extract the upper and lower bounds for each data variable which we'll later use to ...
%%time
# pre-load min/max range for LIS variables
def get_cmap_range(vns):
    vals = [(lis_sf[x].sel(time='2016-12').min(skipna=True).values.item(),
             lis_sf[x].sel(time='2016-12').max(skipna=True).values.item()) for x in vns]
    return dict(zip(vns, vals))

cmap_lims = get_cmap_range(vnames)
_____no_output_____
MIT
book/tutorials/lis/2_interactive_data_exploration.ipynb
zachghiaccio/website
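The idea behind `get_cmap_range` — precomputing a fixed (min, max) pair per variable so colorbar limits don't jump between timesteps — can be sketched with plain NumPy arrays. The variable names and values below are made up for illustration:

```python
import numpy as np

# Hypothetical stand-in for the dataset: variable name -> array of values
data = {
    'SnowDepth_tavg': np.array([0.0, 0.5, 1.2, np.nan]),
    'SWE_tavg': np.array([0.0, 0.1, 0.4]),
}

# Fixed (min, max) colorbar limits per variable, ignoring NaNs
def get_cmap_range(vns):
    vals = [(np.nanmin(data[x]), np.nanmax(data[x])) for x in vns]
    return dict(zip(vns, vals))

cmap_lims = get_cmap_range(list(data))
print(cmap_lims['SnowDepth_tavg'])  # (0.0, 1.2)
```

Passing these fixed limits to the plotting call keeps colors comparable across timesteps.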
Interactive Widgets: SNOTEL Site Map and Timeseries. The two cells that follow will create an interactive panel for comparing LIS, SNODAS, and SNOTEL snow depth and snow water equivalent. The SNOTEL site locations are plotted as points on an interactive map. Hover over the sites to view metadata and click on a site to ge...
# get snotel depth def get_depth(state, site, ts, te): df = snotel_depth[state] # subset between time range mask = (df['Date'] >= ts) & (df['Date'] <= te) df = df.loc[mask] # extract timeseries for the site return pd.concat([df.Date, df.filter(like=site)], axis=1).set_index('Date') # ...
_____no_output_____
MIT
book/tutorials/lis/2_interactive_data_exploration.ipynb
zachghiaccio/website
Interactive LIS Output Explorer. The cell below creates a `panel` layout for exploring LIS output rasters. Select a variable using the drop down and then use the date slider to scrub back and forth in time!
# date widget (slider & key in) # start and end dates date_fmt = '%Y-%m-%d' b = dt.strptime('2016-10-01', date_fmt) e = dt.strptime('2017-04-30', date_fmt) # define date widgets date_slider = pn.widgets.DateSlider(start=b, end=e, value=b, name="LIS Model Date") dt_input = pn.widgets.DatetimeInput(name='LIS Model Date ...
_____no_output_____
MIT
book/tutorials/lis/2_interactive_data_exploration.ipynb
zachghiaccio/website
Data Science Unit 2 Sprint Challenge 4 — Model Validation. Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood Donations. Our dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion S...
# all imports here import pandas as pd from sklearn.metrics import accuracy_score import numpy as np from sklearn.model_selection import train_test_split from sklearn.feature_selection import f_regression, SelectKBest from sklearn.linear_model import LogisticRegression from sklearn.preprocessing import StandardScaler f...
_____no_output_____
MIT
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb
donw385/DS-Unit-2-Sprint-4-Model-Validation
Part 1.1 — Begin with baselines. What **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
# determine the majority class
df['made_donation_in_march_2007'].value_counts(normalize=True)

# guess the majority class for every prediction
majority_class = 0
y_pred = [majority_class] * len(df['made_donation_in_march_2007'])

# accuracy equals the majority-class frequency, because the dataset is not split yet
accuracy_score(df['made_...
_____no_output_____
MIT
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb
donw385/DS-Unit-2-Sprint-4-Model-Validation
What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.)
# Recall asks: when the actual label is positive, how often do we predict positive?
# recall = true_positive / actual_positive
# The majority-class baseline always predicts 0 (no donation), so true_positive = 0
# and recall = 0.
_____no_output_____
MIT
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb
donw385/DS-Unit-2-Sprint-4-Model-Validation
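The zero-recall claim can be checked with a few lines of plain Python on a toy label vector (the labels below are made up for illustration):

```python
# Toy labels: 1 = made a donation. The majority-class baseline predicts 0 for everyone.
y_true = [1, 0, 0, 1, 0, 0, 0, 1]
y_pred = [0] * len(y_true)

# recall = true_positive / actual_positive
true_positive = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
actual_positive = sum(y_true)
recall = true_positive / actual_positive

print(recall)  # 0.0
```

No positive is ever predicted, so the numerator is zero regardless of the data.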
Part 1.2 — Split data. In this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol. First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
# split data
X = df.drop(columns='made_donation_in_march_2007')
y = df['made_donation_in_march_2007']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# validate 75% in train set
X_train.shape

# validate 25% in test set
X_test.shape
_____no_output_____
MIT
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb
donw385/DS-Unit-2-Sprint-4-Model-Validation
Part 2.1 — Make a pipeline. Make a **pipeline** which includes:
- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.preprocessing)
- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectK...
# make pipeline with the 3 required steps: scaler, feature selection, classifier
# note: SelectKBest defaults to f_classif, which suits this classification target
# (f_regression, used in an earlier draft, is meant for regression targets)
pipe = make_pipeline(RobustScaler(), SelectKBest(), LogisticRegression(solver='lbfgs'))
_____no_output_____
MIT
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb
donw385/DS-Unit-2-Sprint-4-Model-Validation
Part 2.2 — Do Grid Search Cross-Validation. Do [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**. Include these **parameters for your grid:**
`SelectKBest`
- `k : 1, 2, 3, 4`
`LogisticRegression`
- `class_...
param_grid = {
    'selectkbest__k': [1, 2, 3, 4],
    'logisticregression__class_weight': [None, 'balanced'],
    'logisticregression__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.0, 1000.0, 10000.0],
}

gs = GridSearchCV(pipe, param_grid, cv=5, scoring='recall')
gs.fit(X_train, y_train)

# grid_search = GridSearchCV(pipeline, { 'lr__class_weigh...
/usr/local/lib/python3.6/dist-packages/sklearn/model_selection/_search.py:841: DeprecationWarning: The default of the `iid` parameter will change from True to False in version 0.22 and will be removed in 0.24. This will change numeric results when test-set sizes are unequal. DeprecationWarning)
MIT
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb
donw385/DS-Unit-2-Sprint-4-Model-Validation
Part 3 — Show best score and parameters. Display your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search. (You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. ...
validation_score = gs.best_score_
print()
print('Cross-Validation Score:', validation_score)  # recall is non-negative; no need to negate it
print()
print('Best estimator:', gs.best_estimator_)
print()
gs.best_estimator_

# Cross-Validation Score: 0.784519402166461
# best parameters: k=1, C=0.0001, class_weight='balanced'
_____no_output_____
MIT
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb
donw385/DS-Unit-2-Sprint-4-Model-Validation
Part 4 — Calculate classification metrics from a confusion matrix. Suppose this is the confusion matrix for your binary classification model:

|                 | Predicted Negative | Predicted Positive |
|-----------------|--------------------|--------------------|
| Actual Negative | 85                 | 58                 |
| Actual Positive | 8                  | 36                 |
true_negative = 85
false_positive = 58
false_negative = 8
true_positive = 36

predicted_positive = false_positive + true_positive  # 58 + 36
actual_positive = false_negative + true_positive     # 8 + 36
_____no_output_____
MIT
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb
donw385/DS-Unit-2-Sprint-4-Model-Validation
Calculate accuracy
accuracy = (true_negative + true_positive) / (true_negative + false_positive + false_negative + true_positive)
print('Accuracy:', accuracy)
Accuracy: 0.6470588235294118
MIT
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb
donw385/DS-Unit-2-Sprint-4-Model-Validation
Calculate precision
precision = true_positive / predicted_positive
print('Precision:', precision)
Precision: 0.3829787234042553
MIT
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb
donw385/DS-Unit-2-Sprint-4-Model-Validation
Calculate recall
recall = true_positive / actual_positive
print('Recall:', recall)
Recall: 0.8181818181818182
MIT
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb
donw385/DS-Unit-2-Sprint-4-Model-Validation
BONUS — How you can earn a score of 3.
Part 1: Do feature engineering, to try improving your cross-validation score.
Part 2: Add transformations in your pipeline and parameters in your grid, to try improving your cross-validation score.
Part 3: Show names of selected features. Then do a final evaluation on the test set — wha...
# # Which features were selected? selector = gs.best_estimator_.named_steps['selectkbest'] all_names = X_train.columns selected_mask = selector.get_support() selected_names = all_names[selected_mask] unselected_names = all_names[~selected_mask] print('Features selected:') for name in selected_names: print(name) p...
_____no_output_____
MIT
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb
donw385/DS-Unit-2-Sprint-4-Model-Validation
Visualizing Chipotle's Data. This time we are going to pull data directly from the internet. Special thanks to: https://github.com/justmarkham for sharing the dataset and materials. Step 1. Import the necessary libraries
import pandas as pd
from collections import Counter
import matplotlib.pyplot as plt

# set this so the graphs open internally
%matplotlib inline
_____no_output_____
BSD-3-Clause
07_Visualization/Chipotle/Exercises.ipynb
duongv/pandas_exercises
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv). Step 3. Assign it to a variable called chipo.
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv'
chipo = pd.read_csv(url, sep='\t')
_____no_output_____
BSD-3-Clause
07_Visualization/Chipotle/Exercises.ipynb
duongv/pandas_exercises
Step 4. See the first 10 entries
chipo.head(10)
_____no_output_____
BSD-3-Clause
07_Visualization/Chipotle/Exercises.ipynb
duongv/pandas_exercises
Step 5. Create a histogram of the top 5 items bought
# Create a Series of Item_name x = chipo.item_name # use the Counter to count frequency with keys and frequency. letter_counts = Counter(x) new_data = pd.DataFrame.from_dict(letter_counts, orient='index') data = new_data.sort_values(0,ascending=False)[0:5] data.plot(kind='bar') plt.xlabel('Item') plt.ylabel ('The numb...
_____no_output_____
BSD-3-Clause
07_Visualization/Chipotle/Exercises.ipynb
duongv/pandas_exercises
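An alternative to the `Counter` approach, assuming pandas: `value_counts()` already returns items sorted by frequency, so the top 5 fall out directly. The item names below are made up for illustration:

```python
import pandas as pd

# Toy stand-in for chipo.item_name
items = pd.Series(['Chicken Bowl', 'Chicken Bowl', 'Steak Burrito',
                   'Chicken Bowl', 'Steak Burrito', 'Chips'])

# value_counts() sorts by frequency, so head(5) gives the top 5 items directly;
# .plot(kind='bar') on the result draws the same chart as the Counter approach
top5 = items.value_counts().head(5)
print(top5.index[0], top5.iloc[0])  # Chicken Bowl 3
```

This avoids the round trip through `Counter` and `DataFrame.from_dict`.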
Figure 2: Illustration of graphical method for finding best adaptation strategy in uncorrelated environments. Goal: illustration of the steps of the graphical method
import numpy as np import scipy.spatial %matplotlib inline import matplotlib.pyplot as plt plt.style.use(['transitions.mplstyle']) import matplotlib colors = matplotlib.rcParams['axes.prop_cycle'].by_key()['color'] from matplotlib import patches import sys sys.path.append('lib/') import evolimmune, plotting def paret...
_____no_output_____
MIT
graphicalmethod.ipynb
andim/transitions-paper
Test SKNW for Cahn-Hillard dataset.
#os.chdir(r'/Users/devyanijivani/git/pygraspi/notebooks/data') dest = "/Users/devyanijivani/git/pygraspi/notebooks/junctions" myFiles = glob.glob('*.txt') myFiles.sort() for i, file in enumerate(myFiles): morph = np.array(pandas.read_csv(file, delimiter=' ', header=None)).swapaxes(0, 1) skel, distance = medial...
_____no_output_____
MIT
notebooks/junctions_testcases.ipynb
devyanijivani/pygraspi
Exploring tabular data with pandas. In this notebook, we will explore a time series of water levels at the Point Atkinson lighthouse using pandas. This is a basic introduction to pandas and we touch on the following topics:
* Reading a csv file
* Simple plots
* Indexing and subsetting
* DatetimeIndex
* Grouping
* Time series m...
import pandas as pd
import matplotlib.pyplot as plt
import datetime
import numpy as np

%matplotlib inline
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Read the data It is helpful to understand the structure of your dataset before attempting to read it with pandas.
!head 7795-01-JAN-2000_slev.csv
Station_Name,Point Atkinson, B.C. Station_Number,7795 Latitude_Decimal_Degrees,49.337 Longitude_Decimal_Degrees,123.253 Datum,CD Time_zone,UTC SLEV=Observed Water Level Obs_date,SLEV(metres) 2000/01/01 08:00,2.95, 2000/01/01 09:00,3.34,
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
This dataset contains comma separated values. It has a few rows of metadata (station name, longitude, latitude, etc.). The actual data begins with timestamps and water level records at row 9. We can read this data with the pandas function read_csv(). read_csv() has many arguments to help customize the reading of many differ...
data = pd.read_csv('7795-01-JAN-2000_slev.csv', skiprows = 8, index_col=False, parse_dates=[0], names=['date','wlev'])
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
data is a DataFrame object
type(data)
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Let's take a quick peek at the dataset.
data.head()
data.tail()
data.describe()
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Notice that pandas did not apply the summary statistics to the date column. Simple Plots: pandas has support for some simple plotting features, like line plots, scatter plots, box plots, etc. For a full overview of plots visit http://pandas.pydata.org/pandas-docs/stable/visualization.html. Plotting is really easy. pandas e...
data.plot('date', 'wlev')
data.plot(kind='hist')
data.plot(kind='box')
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Indexing and Subsetting. We can index and subset the data in different ways. By row number: for example, grab the first two rows.
data[0:2]
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Note that accessing a single row by the row number doesn't work!
data[0]
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
In that case, I would recommend using `.iloc` or a slice to access a single row.
data.iloc[0]
data[0:1]
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
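The practical difference between the two forms above is the return type: `.iloc[0]` gives a Series, while a one-row slice gives a DataFrame. A minimal sketch on a toy frame:

```python
import pandas as pd

df = pd.DataFrame({'wlev': [2.95, 3.34]})

# .iloc[0] returns the row as a Series; a one-row slice returns a DataFrame
row_as_series = df.iloc[0]
row_as_frame = df[0:1]
print(type(row_as_series).__name__, type(row_as_frame).__name__)  # Series DataFrame
```

Which form to use depends on whether downstream code expects a Series or a DataFrame.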
By column: for example, print the first few lines of the wlev column.
data['wlev'].head()
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
By a condition: for example, subset the data with date greater than Jan 1, 2008. We pass our condition into the square brackets of data.
data_20082009 = data[data['date'] > datetime.datetime(2008, 1, 1)]
data_20082009.plot('date', 'wlev')
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Multiple conditions: for example, look for extreme water level events, that is, instances where the water level is above 5 m or below 0 m. Don't forget to put brackets () around each part of the condition.
data_extreme = data[(data['wlev'] > 5) | (data['wlev'] < 0)]
data_extreme.head()
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Exercise: What was the maximum water level in 2006? Bonus: When? Solution: Isolate the year 2006. Use describe to look up the max water level.
data_2006 = data[(data['date'] >= datetime.datetime(2006, 1, 1)) & (data['date'] < datetime.datetime(2007, 1, 1))]
data_2006.describe()
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
The max water level is 5.49m. Use a condition to determine the date.
date_max = data_2006[data_2006['wlev'] == 5.49]['date']
print(date_max)
53399 2006-02-04 17:00:00 Name: date, dtype: datetime64[ns]
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Manipulating dates In the above example, it would have been convenient if we could access only the year part of the time stamp. But this doesn't work:
data['date'].year
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
We can use the pandas DatetimeIndex class to make this work. The DatetimeIndex allows us to easily access properties, like year, month, and day of each timestamp. We will use this to add new Year, Month, Day, Hour and DayOfYear columns to the dataframe.
date_index = pd.DatetimeIndex(data['date'])
print(date_index)

data['Day'] = date_index.day
data['Month'] = date_index.month
data['Year'] = date_index.year
data['Hour'] = date_index.hour
data['DayOfYear'] = date_index.dayofyear

data.head()
data.describe()
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
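In more recent pandas versions, the `.dt` accessor on a datetime column exposes the same properties without constructing a separate `DatetimeIndex`. A minimal sketch on a toy frame (the dates are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({'date': pd.to_datetime(['2000-01-01 08:00', '2006-02-04 17:00'])})

# .dt exposes year/month/day/hour/dayofyear directly on the datetime column
df['Year'] = df['date'].dt.year
df['Month'] = df['date'].dt.month
print(df['Year'].tolist())  # [2000, 2006]
```

Either approach works; `.dt` keeps everything on the column itself.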
Notice that now pandas applies the describe function to these new columns because it sees them as numerical data. Now, we can access a single year with a simpler conditional.
data_2006 = data[data['Year'] == 2006]
data_2006.head()
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Grouping. Sometimes, it is convenient to group data with similar characteristics. We can do this with the groupby() method. For example, we might want to group by year.
data_annual = data.groupby(['Year'])
data_annual['wlev'].describe().head(20)
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Now the data is organized into groups based on the year of the observation. Aggregating: once the data is grouped, we may want to summarize it in some way. We can do this with the apply() function. The argument of apply() is a function that we want to apply to each group. For example, we may want to calculate the mean sea...
annual_means = data_annual['wlev'].apply(np.mean)
print(annual_means)
Year 2000 3.067434 2001 3.057653 2002 3.078112 2003 3.112990 2004 3.104097 2005 3.127036 2006 3.142052 2007 3.095614 2008 3.070757 2009 3.080533 Name: wlev, dtype: float64
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
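For common aggregations like the mean, pandas also provides built-in shortcuts on the grouped object, equivalent to the `apply(np.mean)` call above. A minimal sketch on a toy frame (the values are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({'Year': [2000, 2000, 2001, 2001],
                   'wlev': [3.0, 3.2, 3.4, 3.0]})

# .mean() on the grouped column is equivalent to .apply(np.mean)
annual_means = df.groupby('Year')['wlev'].mean()
print(annual_means.loc[2000])  # 3.1
```

The shortcut form is usually faster and reads more clearly than passing a function to apply().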
It is also really easy to plot the aggregated data.
annual_means.plot()
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson