4. Visualize the results

Let's take a look:
for i in range(1, len(average_precisions)):
    print("{:<14}{:<6}{}".format(classes[i], 'AP', round(average_precisions[i], 3)))
print()
print("{:<14}{:<6}{}".format('', 'mAP', round(mean_average_precision, 3)))

m = max((n_classes + 1) // 2, 2)
n = 2
fig, cells = plt.subplots(m, n, figsize=(n*8, m*8))
for i in range(m):...
_____no_output_____
Apache-2.0
ssd300_evaluation.ipynb
rogeryan/ssd_keras
5. Advanced use

`Evaluator` objects maintain copies of all relevant intermediate results (predictions, precisions, recalls, etc.), so if you want to experiment with different parameters, e.g. different IoU overlaps, there is no need to compute the predictions all over again every time you make a change to a ...
evaluator.get_num_gt_per_class(ignore_neutral_boxes=True, verbose=False, ret=False)
evaluator.match_predictions(ignore_neutral_boxes=True, matching_iou_threshold=0.5, border_pixels='include', ...
aeroplane AP 0.822 bicycle AP 0.874 bird AP 0.787 boat AP 0.713 bottle AP 0.505 bus AP 0.899 car AP 0.89 cat AP 0.923 chair AP 0.61 cow AP 0.845 diningtable AP 0.79 dog AP 0.899 horse ...
Apache-2.0
ssd300_evaluation.ipynb
rogeryan/ssd_keras
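The mAP printed above is simply the unweighted mean of the per-class average precisions. A minimal sketch of that aggregation, using made-up AP values rather than the real evaluation results:

```python
# Hypothetical per-class average precisions (illustration only, not the real results)
average_precisions = {"aeroplane": 0.822, "bicycle": 0.874, "bird": 0.787}

# mAP is the unweighted mean over classes
mean_average_precision = sum(average_precisions.values()) / len(average_precisions)
print(round(mean_average_precision, 3))
```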
Roboschool simulations of physical robotics with Amazon SageMaker

---

Introduction

Roboschool is an [open source](https://github.com/openai/roboschool/tree/master/roboschool) physics simulator that is commonly used to train RL policies for simulated robotic systems. Roboschool provides 3D visualization of physical syst...
# Uncomment the problem to work on
roboschool_problem = "reacher"
# roboschool_problem = 'hopper'
# roboschool_problem = 'humanoid'
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Pre-requisites

Imports

To get started, we'll import the Python libraries we need and set up the environment with a few prerequisites for permissions and configurations.
import sagemaker
import boto3
import sys
import os
import glob
import re
import subprocess
import numpy as np
from IPython.display import HTML
import time
from time import gmtime, strftime

sys.path.append("common")
from misc import get_execution_role, wait_for_s3_object
from docker_utils import build_and_push_docker_i...
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Setup S3 bucket

Set up the linkage and authentication to the S3 bucket that you want to use for checkpoints and metadata.
sage_session = sagemaker.session.Session()
s3_bucket = sage_session.default_bucket()
s3_output_path = "s3://{}/".format(s3_bucket)
print("S3 bucket path: {}".format(s3_output_path))
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Define Variables

We define variables such as the job prefix for the training jobs *and the image path for the container (only when this is BYOC).*
# create a descriptive job name
job_name_prefix = "rl-roboschool-" + roboschool_problem
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Configure where training happens

You can train your RL training jobs using the SageMaker notebook instance or local notebook instance. In both of these scenarios, you can run the following in either local or SageMaker modes. The local mode uses the SageMaker Python SDK to run your code in a local container before deplo...
# run in local_mode on this machine, or as a SageMaker TrainingJob?
local_mode = False

if local_mode:
    instance_type = "local"
else:
    # If on SageMaker, pick the instance type
    instance_type = "ml.c5.2xlarge"
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Create an IAM role

Either get the execution role when running from a SageMaker notebook instance (`role = sagemaker.get_execution_role()`) or, when running from a local notebook instance, use the utils method `role = get_execution_role()` to create an execution role.
try:
    role = sagemaker.get_execution_role()
except:
    role = get_execution_role()

print("Using IAM role arn: {}".format(role))
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Install docker for `local` mode

In order to work in `local` mode, you need to have docker installed. When running from your local machine, please make sure that you have docker and docker-compose (for local CPU machines) and nvidia-docker (for local GPU machines) installed. Alternatively, when running from a SageMaker n...
# only run from SageMaker notebook instance
if local_mode:
    !/bin/bash ./common/setup.sh
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Build docker container

We must build a custom docker container with Roboschool installed. This takes care of everything:

1. Fetching base container image
2. Installing Roboschool and its dependencies
3. Uploading the new container image to ECR

This step can take a long time if you are running on a machine with a slow inte...
%%time

cpu_or_gpu = "gpu" if instance_type.startswith("ml.p") else "cpu"
repository_short_name = "sagemaker-roboschool-ray-%s" % cpu_or_gpu
docker_build_args = {
    "CPU_OR_GPU": cpu_or_gpu,
    "AWS_REGION": boto3.Session().region_name,
}
custom_image_name = build_and_push_docker_image(repository_short_name, build_a...
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Write the Training Code

The training code is written in the `train-<roboschool_problem>.py` files in the `/src` directory. It first imports the environment files and the preset files, and then defines the main() function.
!pygmentize src/train-{roboschool_problem}.py
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Train the RL model using the Python SDK Script mode

If you are using local mode, the training will run on the notebook instance. When using SageMaker for training, you can select a GPU or CPU instance. The RLEstimator is used for training RL jobs.

1. Specify the source directory where the environment, presets and train...
%%time

metric_definitions = RLEstimator.default_metric_definitions(RLToolkit.RAY)

estimator = RLEstimator(
    entry_point="train-%s.py" % roboschool_problem,
    source_dir="src",
    dependencies=["common/sagemaker_rl"],
    image_uri=custom_image_name,
    role=role,
    instance_type=instance_type,
    instance_c...
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Visualization

RL training can take a long time, so while it's running there are a variety of ways we can track the progress of the running training job. Some intermediate output gets saved to S3 during training, so we'll set up to capture that.
print("Job name: {}".format(job_name))

s3_url = "s3://{}/{}".format(s3_bucket, job_name)
intermediate_folder_key = "{}/output/intermediate/".format(job_name)
intermediate_url = "s3://{}/{}".format(s3_bucket, intermediate_folder_key)

print("S3 job path: {}".format(s3_url))
print("Intermediate folder path: {}".format(...
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Fetch videos of training rollouts

Videos of certain rollouts get written to S3 during training. Here we fetch the last 10 videos from S3, and render the last one.
recent_videos = wait_for_s3_object(
    s3_bucket,
    intermediate_folder_key,
    tmp_dir,
    fetch_only=(lambda obj: obj.key.endswith(".mp4") and obj.size > 0),
    limit=10,
    training_job_name=job_name,
)
last_video = sorted(recent_videos)[-1]  # Pick which video to watch
os.system("mkdir -p ./src/tmp_render/ &...
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Plot metrics for training job

We can see the reward metric of the training as it's running, using algorithm metrics that are recorded in CloudWatch metrics. We can plot this to see the performance of the model over time.
%matplotlib inline
from sagemaker.analytics import TrainingJobAnalytics

if not local_mode:
    df = TrainingJobAnalytics(job_name, ["episode_reward_mean"]).dataframe()
    num_metrics = len(df)
    if num_metrics == 0:
        print("No algorithm metrics found in CloudWatch")
    else:
        plt = df.plot(x="timesta...
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Monitor training progress

You can repeatedly run the visualization cells to get the latest videos or see the latest metrics as the training job proceeds.

Evaluation of RL models

We use the last checkpointed model to run evaluation for the RL Agent.

Load checkpointed model

Checkpointed data from the previously trained m...
if local_mode:
    model_tar_key = "{}/model.tar.gz".format(job_name)
else:
    model_tar_key = "{}/output/model.tar.gz".format(job_name)

local_checkpoint_dir = "{}/model".format(tmp_dir)

wait_for_s3_object(s3_bucket, model_tar_key, tmp_dir, training_job_name=job_name)

if not os.path.isfile("{}/model.tar.gz".format(...
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Visualize the output

Optionally, you can run the steps defined earlier to visualize the output.

Model deployment

Now let us deploy the RL policy so that we can get the optimal action, given an environment observation.
from sagemaker.tensorflow.model import TensorFlowModel

model = TensorFlowModel(model_data=estimator.model_data, framework_version="2.1.0", role=role)
predictor = model.deploy(initial_instance_count=1, instance_type=instance_type)

# Mapping of environments to observation space
observation_space_mapping = {"reacher": 9...
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Now let us predict the actions using a dummy observation.
# ray 0.8.2 requires all the following inputs
# 'prev_action', 'is_training', 'prev_reward' and 'seq_lens' are placeholders for this example
# they won't affect prediction results
input = {
    "inputs": {
        "observations": np.ones(shape=(1, observation_space_mapping[roboschool_problem])).tolist(),
        "prev...
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Clean up endpoint
predictor.delete_endpoint()
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Plotting Asteroids
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
_____no_output_____
MIT
HW_Plotting.ipynb
UWashington-Astro300/Astro300-A21
Imports
import pandas as pd
import numpy as np
pd.set_option('display.max_colwidth', None)

import re
from wordcloud import WordCloud
import contractions

import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('ggplot')
plt.rcParams['font.size'] = 15

import nltk
from nltk.stem.porter import PorterStemmer
from n...
_____no_output_____
MIT
Week 1/3_Natural_Disaster_Tweets_EDA_Visuals.ipynb
theaslo/NLP_Workshop_WAIA_Sept_2021
Data Load
df_train = pd.read_csv('../Datasets/disaster_tweet/train.csv')
df_train.head(20)
df_train.tail(20)
_____no_output_____
MIT
Week 1/3_Natural_Disaster_Tweets_EDA_Visuals.ipynb
theaslo/NLP_Workshop_WAIA_Sept_2021
Observation

1. Mixed case
2. Contractions
3. Hashtags and mentions
4. Incorrect spellings
5. Punctuations
6. Websites and URLs

Functions
all_text = ' '.join(list(df_train['text']))

def check_texts(check_item, all_text):
    return check_item in all_text

print(check_texts('<a', all_text))
print(check_texts('<div', all_text))
print(check_texts('<p', all_text))
print(check_texts('#x', all_text))
print(check_texts(':)', all_text))
print(check_texts('<3', a...
_____no_output_____
MIT
Week 1/3_Natural_Disaster_Tweets_EDA_Visuals.ipynb
theaslo/NLP_Workshop_WAIA_Sept_2021
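The length-analysis cells below plot helper columns such as `words_per_tweet` and `punctuation_count` that are not shown being created. A minimal sketch of how such columns could be derived (the column names and the toy data are assumptions):

```python
import string
import pandas as pd

# Toy stand-in for df_train
df = pd.DataFrame({"text": ["Forest fire near La Ronge Sask. Canada", "I love fruits!!"]})

# number of whitespace-separated words per tweet
df["words_per_tweet"] = df["text"].str.split().str.len()

# number of punctuation characters per tweet
df["punctuation_count"] = df["text"].apply(lambda t: sum(ch in string.punctuation for ch in t))

print(df[["words_per_tweet", "punctuation_count"]])
```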
Tweet Length Analysis
sns.histplot(x='words_per_tweet', hue='target', data=df_train, kde=True)
plt.show()
_____no_output_____
MIT
Week 1/3_Natural_Disaster_Tweets_EDA_Visuals.ipynb
theaslo/NLP_Workshop_WAIA_Sept_2021
Punctuation Analysis
sns.countplot(x='target', hue='punctuation_count', data=df_train)
plt.legend([])
plt.show()
_____no_output_____
MIT
Week 1/3_Natural_Disaster_Tweets_EDA_Visuals.ipynb
theaslo/NLP_Workshop_WAIA_Sept_2021
Tweet Text Analysis using WordCloud
real_disaster_tweets = ' '.join(list(df_train[df_train['target'] == 1]['text']))
real_disaster_tweets

non_real_disaster_tweets = ' '.join(list(df_train[df_train['target'] == 0]['text']))

wc = WordCloud(background_color="black",
              max_words=100,
              width=1000,
              height=600, ...
_____no_output_____
MIT
Week 1/3_Natural_Disaster_Tweets_EDA_Visuals.ipynb
theaslo/NLP_Workshop_WAIA_Sept_2021
For both parties, the proportion of negative tweets is slightly greater than that of positive tweets. Let's create a Word Cloud to identify which words occur frequently in the tweets and try to derive their significance.
bjp_tweets = bjp_df['clean_text']
bjp_string = []
for t in bjp_tweets:
    bjp_string.append(t)
bjp_string = pd.Series(bjp_string).str.cat(sep=' ')

from wordcloud import WordCloud
wordcloud = WordCloud(width=1600, height=800, max_font_size=200).generate(bjp_string)
plt.figure(figsize=(12,10))
plt.title('BJP Word Cloud')...
_____no_output_____
MIT
Exploratory Data Analysis.ipynb
abhishek291994/Twitter-Sentiment-Analysis-for-Indian-Elections
Words like 'JhaSanjay', 'ArvindKejriwal', 'Delhi', 'Govindraj' occur frequently in our corpus and are highlighted by our Word Cloud. In the context of the BJP, Arvind Kejriwal is a staunch opponent of the BJP government. Delhi is the capital of India and also a state in which the BJP has suffered heavy losses in the pr...
cong_tweets = congress_df['clean_text']
cong_string = []
for t in cong_tweets:
    cong_string.append(t)
cong_string = pd.Series(cong_string).str.cat(sep=' ')

from wordcloud import WordCloud
wordcloud = WordCloud(width=1600, height=800, max_font_size=200).generate(cong_string)
plt.figure(figsize=(12,10))
plt.title('Cong...
_____no_output_____
MIT
Exploratory Data Analysis.ipynb
abhishek291994/Twitter-Sentiment-Analysis-for-Indian-Elections
rocket functions> ROCKET (RandOm Convolutional KErnel Transform) functions for univariate and multivariate time series using GPU.
#export
from tsai.imports import *
from tsai.data.external import *

#export
from sklearn.linear_model import RidgeClassifierCV
from numba import njit, prange

#export
# Angus Dempster, Francois Petitjean, Geoff Webb
# Dempster A, Petitjean F, Webb GI (2019) ROCKET: Exceptionally fast and
# accurate time series classifi...
_____no_output_____
Apache-2.0
nbs/010_rocket_functions.ipynb
williamsdoug/timeseriesAI
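As a rough, library-free illustration of what a single ROCKET kernel does (a sketch only, not the optimized numba/GPU implementation above): a dilated convolution is slid over the series and two features are pooled from the activation map, the proportion of positive values (PPV) and the maximum.

```python
import numpy as np

def apply_kernel(x, weights, bias, dilation):
    """Dilated 'valid' convolution over a 1D series x; returns (ppv, max)."""
    klen = len(weights)
    span = (klen - 1) * dilation
    activations = []
    for i in range(len(x) - span):
        s = bias + sum(weights[j] * x[i + j * dilation] for j in range(klen))
        activations.append(s)
    activations = np.array(activations)
    return (activations > 0).mean(), activations.max()

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 6, 100))   # toy univariate series
weights = rng.normal(size=9)          # ROCKET samples kernel weights from N(0, 1)
ppv, mx = apply_kernel(x, weights, bias=0.0, dilation=2)
print(ppv, mx)
```

ROCKET generates thousands of such kernels with random lengths, dilations, and biases, then trains a ridge classifier on the pooled features.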
Dictionaries

We've been learning about *sequences* in Python but now we're going to switch gears and learn about *mappings* in Python. If you're familiar with other languages you can think of these Dictionaries as hash tables. This section will serve as a brief introduction to dictionaries and consist of:

1.) Constr...
# Make a dictionary with {} and : to signify a key and a value
my_dict = {'key1':'value1','key2':'value2'}

# Call values by their key
my_dict['key2']
_____no_output_____
MIT
Languages/Python/00 - Python Object and Data Structure/05-Dictionaries.ipynb
Tikam02/ReBuild
It's important to note that dictionaries are very flexible in the data types they can hold. For example:
my_dict = {'key1':123,'key2':[12,23,33],'key3':['item0','item1','item2']}

# Let's call items from the dictionary
my_dict['key3']

# Can call an index on that value
my_dict['key3'][0]

# Can then even call methods on that value
my_dict['key3'][0].upper()
_____no_output_____
MIT
Languages/Python/00 - Python Object and Data Structure/05-Dictionaries.ipynb
Tikam02/ReBuild
We can affect the values of a key as well. For instance:
my_dict['key1']

# Subtract 123 from the value
my_dict['key1'] = my_dict['key1'] - 123

# Check
my_dict['key1']
_____no_output_____
MIT
Languages/Python/00 - Python Object and Data Structure/05-Dictionaries.ipynb
Tikam02/ReBuild
A quick note, Python has a built-in method of doing a self subtraction or addition (or multiplication or division). We could have also used += or -= for the above statement. For example:
# Set the object equal to itself minus 123
my_dict['key1'] -= 123
my_dict['key1']
_____no_output_____
MIT
Languages/Python/00 - Python Object and Data Structure/05-Dictionaries.ipynb
Tikam02/ReBuild
We can also create keys by assignment. For instance if we started off with an empty dictionary, we could continually add to it:
# Create a new dictionary
d = {}

# Create a new key through assignment
d['animal'] = 'Dog'

# Can do this with any object
d['answer'] = 42

# Show
d
_____no_output_____
MIT
Languages/Python/00 - Python Object and Data Structure/05-Dictionaries.ipynb
Tikam02/ReBuild
Nesting with Dictionaries

Hopefully you're starting to see how powerful Python is with its flexibility of nesting objects and calling methods on them. Let's see a dictionary nested inside a dictionary:
# Dictionary nested inside a dictionary nested inside a dictionary
d = {'key1':{'nestkey':{'subnestkey':'value'}}}
_____no_output_____
MIT
Languages/Python/00 - Python Object and Data Structure/05-Dictionaries.ipynb
Tikam02/ReBuild
Wow! That's quite the inception of dictionaries! Let's see how we can grab that value:
# Keep calling the keys
d['key1']['nestkey']['subnestkey']
_____no_output_____
MIT
Languages/Python/00 - Python Object and Data Structure/05-Dictionaries.ipynb
Tikam02/ReBuild
A few Dictionary Methods

There are a few methods we can call on a dictionary. Let's get a quick introduction to a few of them:
# Create a typical dictionary
d = {'key1':1,'key2':2,'key3':3}

# Method to return a list of all keys
d.keys()

# Method to grab all values
d.values()

# Method to return tuples of all items (we'll learn about tuples soon)
d.items()
_____no_output_____
MIT
Languages/Python/00 - Python Object and Data Structure/05-Dictionaries.ipynb
Tikam02/ReBuild
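One detail worth knowing (a general Python 3 behavior, not specific to this lesson): `keys()`, `values()` and `items()` return dynamic *view* objects rather than lists, so they reflect later changes to the dictionary; wrap them in `list()` when an actual list is needed.

```python
d = {'key1': 1, 'key2': 2, 'key3': 3}

# In Python 3 this is a dynamic view, not a list
keys = d.keys()
d['key4'] = 4            # the view sees the new key immediately

print(list(keys))        # wrap in list() to get a plain list
print(list(d.items()))   # same for items()
```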
Measures of Income Mobility

**Author: Wei Kang, Serge Rey**

Income mobility could be viewed as a reranking phenomenon where regions switch income positions, while it could also be considered to be happening as long as regions move away from their previous income levels. The former is named absolute mobility and the lat...
from giddy import markov, mobility
mobility.markov_mobility?
_____no_output_____
BSD-3-Clause
notebooks/Mobility measures.ipynb
mikiec84/giddy
US income mobility exampleSimilar to [Markov Based Methods notebook](Markov Based Methods.ipynb), we will demonstrate the usage of the mobility methods by an application to data on per capita incomes observed annually from 1929 to 2009 for the lower 48 US states.
import libpysal
import numpy as np
import mapclassify as mc

income_path = libpysal.examples.get_path("usjoin.csv")
f = libpysal.io.open(income_path)
pci = np.array([f.by_col[str(y)] for y in range(1929, 2010)])  # each column represents a state's income time series 1929-2010
q5 = np.array([mc.Quantiles(y).yb for y in pc...
/Users/weikang/anaconda3/lib/python3.6/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in ...
BSD-3-Clause
notebooks/Mobility measures.ipynb
mikiec84/giddy
After acquiring the estimate of the transition probability matrix, we can call the method `markov_mobility` to estimate any of the five Markov-based summary mobility indices.

1. Shorrocks1's mobility measure

$$M_{P} = \frac{m-\sum_{i=1}^m P_{ii}}{m-1}$$

```python
measure = "P"
```
mobility.markov_mobility(m.p, measure="P")
_____no_output_____
BSD-3-Clause
notebooks/Mobility measures.ipynb
mikiec84/giddy
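Under the hood this index needs only the trace of the transition matrix. A hedged numpy sketch, using a made-up 3x3 row-stochastic matrix rather than the fitted US income estimate `m.p`:

```python
import numpy as np

# hypothetical transition probability matrix (rows sum to 1)
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.1, 0.8]])

m = P.shape[0]
M_P = (m - np.trace(P)) / (m - 1)  # 0 = perfectly immobile; larger = more mobile
print(M_P)
```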
2. Shorrocks2's mobility measure

$$M_{D} = 1 - |\det(P)|$$

```python
measure = "D"
```
mobility.markov_mobility(m.p, measure="D")
_____no_output_____
BSD-3-Clause
notebooks/Mobility measures.ipynb
mikiec84/giddy
3. Sommers and Conlisk's mobility measure

$$M_{L2} = 1 - |\lambda_2|$$

```python
measure = "L2"
```
mobility.markov_mobility(m.p, measure = "L2")
_____no_output_____
BSD-3-Clause
notebooks/Mobility measures.ipynb
mikiec84/giddy
4. Bartholomew1's mobility measure

$$M_{B1} = \frac{m-m \sum_{i=1}^m \pi_i P_{ii}}{m-1}$$

$\pi$: the initial income distribution

```python
measure = "B1"
```
pi = np.array([0.1, 0.2, 0.2, 0.4, 0.1])
mobility.markov_mobility(m.p, measure="B1", ini=pi)
_____no_output_____
BSD-3-Clause
notebooks/Mobility measures.ipynb
mikiec84/giddy
5. Bartholomew2's mobility measure

$$M_{B2} = \frac{1}{m-1} \sum_{i=1}^m \sum_{j=1}^m \pi_i P_{ij} |i-j|$$

$\pi$: the initial income distribution

```python
measure = "B2"
```
pi = np.array([0.1, 0.2, 0.2, 0.4, 0.1])
mobility.markov_mobility(m.p, measure="B2", ini=pi)
_____no_output_____
BSD-3-Clause
notebooks/Mobility measures.ipynb
mikiec84/giddy
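The Bartholomew index above can likewise be computed directly from its formula. A sketch with a toy 3x3 transition matrix and a hypothetical initial distribution (assumptions, not the fitted `m.p` or the notebook's `pi`):

```python
import numpy as np

P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.1, 0.8]])   # hypothetical transition matrix
pi = np.array([0.3, 0.4, 0.3])      # hypothetical initial income distribution

m = P.shape[0]
# M_B2 = (1/(m-1)) * sum_i sum_j pi_i * P_ij * |i-j|
M_B2 = sum(pi[i] * P[i, j] * abs(i - j)
           for i in range(m) for j in range(m)) / (m - 1)
print(M_B2)
```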
FaceNet Keras Demo

This notebook demos the usage of the FaceNet model, and shows how to preprocess images before feeding them into the model.
%load_ext autoreload
%autoreload 2

import sys
sys.path.append('../')

import os
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
from PIL import Image
from skimage.transform import resize
from skimage.util import img_as_ubyte, img_as_float
from sklearn.metrics import p...
_____no_output_____
MIT
notebooks/demos/demo_embeddings.ipynb
psturmfels/adversarial_faces
Loading the Model

First you need to download the Keras weights from https://github.com/nyoki-mtl/keras-facenet and put the downloaded weights file in the parent directory.
model_path = '../facenet_keras.h5'
model = tf.keras.models.load_model(model_path)
WARNING:tensorflow:No training configuration found in save file: the model was *not* compiled. Compile it manually.
MIT
notebooks/demos/demo_embeddings.ipynb
psturmfels/adversarial_faces
Preprocessing the Input

This next cell preprocesses the input using Pillow and skimage, both of which can be installed using pip. We center-crop the image to avoid scaling issues, resize it to 160 x 160, and then standardize the images using the utils module in this repository.
images = []
images_whitened = []
image_path = '../images/'
image_files = os.listdir(image_path)
image_files = [image_file for image_file in image_files if image_file.endswith('.png')]

for image_file in image_files:
    image = np.array(Image.open(os.path.join(image_path, image_file)))
    image = image[:, :, :3]
    im...
_____no_output_____
MIT
notebooks/demos/demo_embeddings.ipynb
psturmfels/adversarial_faces
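The standardization step is often called prewhitening in FaceNet code bases: scale each image to zero mean and unit standard deviation. The sketch below is an assumed implementation of what the repository's utils module does, not its actual code:

```python
import numpy as np

def prewhiten(x):
    """Scale an image to zero mean and unit standard deviation."""
    mean = x.mean()
    std = x.std()
    # guard against flat images (a common trick in FaceNet implementations)
    std_adj = np.maximum(std, 1.0 / np.sqrt(x.size))
    return (x - mean) / std_adj

image = np.random.rand(160, 160, 3)   # stand-in for a cropped, resized face
whitened = prewhiten(image)
print(whitened.mean(), whitened.std())
```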
Computing Embeddings

Finally, we compute the embeddings and pairwise distances of the images. We can see that the model is able to distinguish the same identity from different identities!
image_batch = tf.convert_to_tensor(np.array(images_whitened))
embedding_batch = model.predict(image_batch)
normalized_embedding_batch = l2_normalize(embedding_batch)

np.sqrt(np.sum(np.square(normalized_embedding_batch[0] - normalized_embedding_batch[1])))
pairwise_distance_matrix = pairwise_distances(normalized_embeddi...
_____no_output_____
MIT
notebooks/demos/demo_embeddings.ipynb
psturmfels/adversarial_faces
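The `l2_normalize` helper used above presumably scales each embedding to unit length, after which Euclidean distances fall in [0, 2] and are a monotone function of cosine similarity. A hedged sketch (an assumed implementation, with random vectors standing in for model embeddings):

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-10):
    """Scale each row of x to unit L2 norm."""
    return x / np.sqrt(np.maximum(np.sum(np.square(x), axis=axis, keepdims=True), eps))

emb = np.random.randn(4, 128)            # stand-in for FaceNet embeddings
emb_n = l2_normalize(emb)
norms = np.linalg.norm(emb_n, axis=1)    # all ~1 after normalization
dist_01 = np.linalg.norm(emb_n[0] - emb_n[1])
print(norms, dist_01)
```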
Temporal-Difference Methods

In this notebook, you will write your own implementations of many Temporal-Difference (TD) methods. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.

---

Part 0: Explore CliffWalkingEnv

We begin by importing the necessary packages.
import sys
import gym
import numpy as np
import random as rn
from collections import defaultdict, deque
import matplotlib.pyplot as plt
%matplotlib inline

import check_test
from plot_utils import plot_values
_____no_output_____
MIT
temporal-difference/Temporal_Difference.ipynb
kdliaokueida/Deep_Reinforcement_Learning
Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment.
env = gym.make('CliffWalking-v0')
_____no_output_____
MIT
temporal-difference/Temporal_Difference.ipynb
kdliaokueida/Deep_Reinforcement_Learning
The agent moves through a $4\times 12$ gridworld, with states numbered as follows:

```
[[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
 [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
 [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35],
 [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]
```

At the start of any episode, sta...
print(env.action_space)
print(env.observation_space)
Discrete(4) Discrete(48)
MIT
temporal-difference/Temporal_Difference.ipynb
kdliaokueida/Deep_Reinforcement_Learning
In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cli...
# define the optimal state-value function
V_opt = np.zeros((4, 12))
V_opt[0] = -np.arange(3, 15)[::-1]
V_opt[1] = -np.arange(3, 15)[::-1] + 1
V_opt[2] = -np.arange(3, 15)[::-1] + 2
V_opt[3][0] = -13

plot_values(V_opt)
_____no_output_____
MIT
temporal-difference/Temporal_Difference.ipynb
kdliaokueida/Deep_Reinforcement_Learning
Part 1: TD Control: Sarsa

In this section, you will write your own implementation of the Sarsa control algorithm. Your algorithm has four arguments:

- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `alpha`...
def greedy_eps_action(Q, state, nA, eps):
    if rn.random() > eps:
        return np.argmax(Q[state])
    else:
        return rn.choice(np.arange(nA))

def sarsa(env, num_episodes, alpha, gamma=1.0, eps_start=1, eps_decay=.95, eps_min=1e-2):
    # initialize action-value function (empty dictionary of arrays)
    ...
_____no_output_____
MIT
temporal-difference/Temporal_Difference.ipynb
kdliaokueida/Deep_Reinforcement_Learning
Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd ...
# obtain the estimated optimal policy and corresponding action-value function
Q_sarsa = sarsa(env, 5000, .01)

# print the estimated optimal policy
policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12)
check_test.run_check('td_control_check', policy_sarsa)
p...
Episode 5000/5000
MIT
temporal-difference/Temporal_Difference.ipynb
kdliaokueida/Deep_Reinforcement_Learning
Part 2: TD Control: Q-learning

In this section, you will write your own implementation of the Q-learning control algorithm. Your algorithm has four arguments:

- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction...
def q_learning(env, num_episodes, alpha, gamma=1.0, eps_start=1, eps_decay=.95, eps_min=1e-2):
    # initialize empty dictionary of arrays
    nA = env.action_space.n
    Q = defaultdict(lambda: np.zeros(env.nA))
    eps = eps_start
    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        #...
_____no_output_____
MIT
temporal-difference/Temporal_Difference.ipynb
kdliaokueida/Deep_Reinforcement_Learning
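The heart of the (truncated) `q_learning` cell above is the one-step sarsamax backup. A minimal standalone sketch of that update, with a toy two-state Q-table (the helper name and values are assumptions for illustration):

```python
import numpy as np

def q_learning_update(Q, state, action, reward, next_state, alpha, gamma=1.0):
    """One sarsamax backup: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    target = reward + gamma * np.max(Q[next_state])
    Q[state][action] += alpha * (target - Q[state][action])
    return Q

# toy Q-table: two states, four actions each
Q = {0: np.zeros(4), 1: np.array([0.0, 2.0, 0.0, 0.0])}
Q = q_learning_update(Q, state=0, action=1, reward=-1.0, next_state=1, alpha=0.5)
print(Q[0][1])   # target = -1 + max(Q[1]) = 1, so Q[0][1] moves halfway to 1
```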
Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd l...
# obtain the estimated optimal policy and corresponding action-value function
Q_sarsamax = q_learning(env, 5000, .01)

# print the estimated optimal policy
policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12))
check_test.run_check('td_control_chec...
Episode 5000/5000
MIT
temporal-difference/Temporal_Difference.ipynb
kdliaokueida/Deep_Reinforcement_Learning
Part 3: TD Control: Expected Sarsa

In this section, you will write your own implementation of the Expected Sarsa control algorithm. Your algorithm has four arguments:

- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment int...
def expected_sarsa(env, num_episodes, alpha, gamma=1.0, eps_start=1, eps_decay=.9, eps_min=1e-2):
    # initialize empty dictionary of arrays
    nA = env.action_space.n
    Q = defaultdict(lambda: np.zeros(env.nA))
    eps = eps_start
    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        ...
_____no_output_____
MIT
temporal-difference/Temporal_Difference.ipynb
kdliaokueida/Deep_Reinforcement_Learning
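The piece that distinguishes Expected Sarsa from Q-learning is the target: instead of `max` over next actions, it takes the expectation of Q(s', .) under the epsilon-greedy policy. A hedged sketch of computing that expectation (helper name and toy values are assumptions):

```python
import numpy as np

def expected_sarsa_target(q_next, reward, eps, gamma=1.0):
    """r + gamma * E_pi[Q(s', .)] under an epsilon-greedy policy."""
    nA = len(q_next)
    probs = np.ones(nA) * eps / nA          # explore: eps spread uniformly
    probs[np.argmax(q_next)] += 1 - eps      # exploit: remaining mass on the greedy action
    return reward + gamma * np.dot(probs, q_next)

target = expected_sarsa_target(np.array([1.0, 3.0, 0.0]), reward=-1.0, eps=0.1)
print(target)
```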
Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd ...
# obtain the estimated optimal policy and corresponding action-value function
Q_expsarsa = expected_sarsa(env, 10000, 1)

# print the estimated optimal policy
policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12)
check_test.run_check('td_control_che...
Episode 10000/10000
MIT
temporal-difference/Temporal_Difference.ipynb
kdliaokueida/Deep_Reinforcement_Learning
_____no_output_____
Apache-2.0
w1d1_Primer_Notebook_de_Jupyter_en_GoogleColab_120521.ipynb
talitacardenas/The_Bridge_School_DataScience_PT
Heading

Heading type 2

This is called ***Markdown***

1. List item
2. List item

In Jupyter, to run a cell we press

- Ctrl + Enter

Or, if we want to add a new line of code and run at the same time:

- Alt + Enter
# To write a comment in a cell, we use the # symbol
# The same goes for any additional comment lines
_____no_output_____
Apache-2.0
w1d1_Primer_Notebook_de_Jupyter_en_GoogleColab_120521.ipynb
talitacardenas/The_Bridge_School_DataScience_PT
suma = 1 + 1
# This raises a TypeError: we cannot concatenate a string and an integer
print("The result of the operation is: " + suma)
_____no_output_____
Apache-2.0
w1d1_Primer_Notebook_de_Jupyter_en_GoogleColab_120521.ipynb
talitacardenas/The_Bridge_School_DataScience_PT
We will see that this returns an error because we are trying to print a text string together with a numeric value. The solution would be to transform our numeric value into a string.
print("The result of this operation is: " + str(suma))
_____no_output_____
Apache-2.0
w1d1_Primer_Notebook_de_Jupyter_en_GoogleColab_120521.ipynb
talitacardenas/The_Bridge_School_DataScience_PT
Exercise

In the Titanic dataset (Titanic_train.csv), today we focus on the correlations between variables. First drop the observations with missing values, then answer the following questions:

* Q1: Using a numerical method, is Age correlated with Survived?
* Q2: Using a numerical method, is Sex correlated with Survived?
* Q3: Using a numerical method, is Age correlated with Fare?

Hints:
1. Create a new variable Survived_cate and convert its data type to categorical.
2. Use Survived_cate in place of Survived in the analysis.
3. First inspect the data types of these variables, then decide which method to use to judge each pairwise correlation.
...
# import library
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy import stats
import math
import statistics
import seaborn as sns
from IPython.display import display
import pingouin as pg
import researchpy
%matplotlib inline
_____no_output_____
MIT
Solution/Day_38_Solution.ipynb
YenLinWu/DataScienceMarathon
Read in the data
df_train = pd.read_csv("Titanic_train.csv") print(df_train.info()) ## Here we make one adjustment: turn Survived into a categorical variable Survived_cate df_train['Survived_cate']=df_train['Survived'] df_train['Survived_cate']=df_train['Survived_cate'].astype('object') print(df_train.info()) display(df_train.head(5))
_____no_output_____
MIT
Solution/Day_38_Solution.ipynb
YenLinWu/DataScienceMarathon
Q1: Using a numerical method, determine whether Age and Survived are correlated.
## Age is continuous and Survived_cate is categorical, so we use Eta Squared
_____no_output_____
MIT
Solution/Day_38_Solution.ipynb
YenLinWu/DataScienceMarathon
Correlation coefficients cannot be computed with missing values present, so we must first impute them or drop the rows with missing values.
## After extracting the columns, drop the rows with missing values complete_data=df_train[['Age','Survived_cate']].dropna() display(complete_data) aov = pg.anova(dv='Age', between='Survived_cate', data=complete_data, detailed=True) aov etaSq = aov.SS[0] / (aov.SS[0] + aov.SS[1]) etaSq def judgment_etaSq(etaSq): if etaSq < .01: qual = 'Negligible' elif e...
_____no_output_____
MIT
Solution/Day_38_Solution.ipynb
YenLinWu/DataScienceMarathon
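The eta-squared statistic computed above is just SS_between / SS_total from the one-way ANOVA table. A pure-Python sketch on toy data (the names and numbers are illustrative, not the Titanic values):

```python
def eta_squared(values, groups):
    """Eta squared = SS_between / SS_total for a one-way design."""
    grand_mean = sum(values) / len(values)
    ss_total = sum((v - grand_mean) ** 2 for v in values)
    ss_between = 0.0
    for g in set(groups):
        group_vals = [v for v, gr in zip(values, groups) if gr == g]
        group_mean = sum(group_vals) / len(group_vals)
        ss_between += len(group_vals) * (group_mean - grand_mean) ** 2
    return ss_between / ss_total

# two clearly separated groups -> eta squared close to 1
vals = [1.0, 1.1, 0.9, 10.0, 10.1, 9.9]
grps = [0, 0, 0, 1, 1, 1]
print(eta_squared(vals, grps))
```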
Conclusion: age and survival are not correlated (on complete_data). Consider whether this feature should go into the model, or whether it needs closer inspection and possibly a feature transformation. Q2: Using a numerical method, determine whether Sex and Survived are correlated.
## Sex is categorical and Survived_cate is categorical, so we use Cramér's V contTable = pd.crosstab(df_train['Sex'], df_train['Survived_cate']) contTable df = min(contTable.shape[0], contTable.shape[1]) - 1 df crosstab, res = researchpy.crosstab(df_train['Survived_cate'], df_train['Sex'], test='chi-square') #print(res) print("Cramer's value is",res...
_____no_output_____
MIT
Solution/Day_38_Solution.ipynb
YenLinWu/DataScienceMarathon
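Cramér's V, which `researchpy` reports above, is derived from the chi-square statistic of the contingency table: V = sqrt(chi2 / (n · (min(rows, cols) − 1))). A self-contained sketch (toy tables, not the Titanic counts):

```python
import math

def cramers_v(table):
    """Cramér's V from a 2-D contingency table given as a list of rows."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_tot[i] * col_tot[j] / n
            chi2 += (observed - expected) ** 2 / expected
    k = min(len(table), len(table[0])) - 1  # degrees-of-freedom factor
    return math.sqrt(chi2 / (n * k))

print(cramers_v([[10, 0], [0, 10]]))  # perfect association -> 1.0
print(cramers_v([[5, 5], [5, 5]]))    # independence -> 0.0
```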
Both numerically and graphically, survival and sex are highly correlated; to predict survival, sex must be included. Q3: Using a numerical method, determine whether Age and Fare are correlated.
## Age is continuous and Fare is continuous, so use the Pearson correlation coefficient ## After extracting the columns, drop the rows with missing values complete_data=df_train[['Age','Fare']].dropna() display(complete_data) # pearsonr returns two values; we only need the first one, the correlation coefficient corr, _=stats.pearsonr(complete_data['Age'],complete_data['Fare']) print(corr) # the coefficient tells us how strongly Age and Fare are linearly related g = sns.regplot(x="Age", y="Fare", color="g",data=complete_data)...
_____no_output_____
MIT
Solution/Day_38_Solution.ipynb
YenLinWu/DataScienceMarathon
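For reference, the Pearson coefficient that `stats.pearsonr` returns is the covariance divided by the product of the standard deviations. A pure-Python sketch on toy data (not the Titanic columns):

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation: covariance over the product of std devs."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # exactly linear -> 1.0
```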
Imports
import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score, validation_curve from sklearn.naive_bayes import MultinomialNB from sklearn.metrics import accuracy_score, confusion_matrix, classification...
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
Loading the data
df = pd.read_csv('../input/heart-disease-uci/heart.csv') df.head() df.shape
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
As we can see, this dataset has 303 rows and 14 columns. Exploring our dataset
df.sex.value_counts(normalize=True)
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
This means we have more males than females. Let's plot only the people who have the disease, broken down by sex.
df.sex[df.target==1].value_counts().plot(kind="bar") # commenting the plot plt.title("people who got disease by sex") plt.xlabel("sex") plt.ylabel("effected"); plt.xticks(rotation = 0); df.target.value_counts(normalize=True)
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
The two classes are almost balanced. Plotting Heart Disease by Age / Max Heart Rate
sns.scatterplot(x=df.age, y=df.thalach, hue = df.target); # commenting the plot plt.title("Heart Disease by Age / Max Heart Rate") plt.xlabel("Age") plt.legend(["Disease", "No Disease"]) plt.ylabel("Max Heart Rate");
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
Correlation matrix
corr = df.corr() f, ax = plt.subplots(figsize=(12, 10)) sns.heatmap(corr, annot=True, fmt='.2f', ax=ax); df.head()
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
Modeling
df.head()
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
Features / Label
X = df.drop('target', axis=1) X.head() y = df.target y.head()
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
Splitting our dataset, with 20% held out for testing
np.random.seed(42) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) y_train.head()
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
Evaluation metrics A function for getting the scores (F1 and accuracy) and plotting the confusion matrix
def getScore(model, X_test, y_test): y_pred = model.predict(X_test) print('f1_score') print(f1_score(y_test,y_pred,average='binary')) print('accuracy') acc = accuracy_score(y_test,y_pred, normalize=True) print(acc) print('Confusion Matrix :') plot_confusion_matrix(model, X_test, y_test) ...
f1_score 0.8524590163934426 accuracy 0.8524590163934426 Confusion Matrix :
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
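The F1 score printed by the helper above can be recovered directly from confusion-matrix counts, without sklearn. A minimal sketch on toy labels (not the notebook's data):

```python
def binary_f1(y_true, y_pred):
    """F1 for the positive class, computed from confusion-matrix counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# toy labels: 2 true positives, 1 false positive, 1 false negative
print(binary_f1([1, 1, 1, 0, 0], [1, 1, 0, 0, 1]))
```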
Classification report
print(classification_report(y_test, clf.predict(X_test)))
precision recall f1-score support 0 0.81 0.90 0.85 29 1 0.90 0.81 0.85 32 accuracy 0.85 61 macro avg 0.85 0.85 0.85 61 weighted avg 0.86 0.85 0.85 ...
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
ROC curve
plot_roc_curve(clf, X_test, y_test);
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
Feature importance
clf.coef_ f_dict = dict(zip(X.columns , clf.coef_[0])) f_data = pd.DataFrame(f_dict, index=[0]) f_data.T.plot.bar(title="Feature Importance", legend=False, figsize=(10,4)); plt.xticks(rotation = 0);
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
From this plot we can see which features carry importance and which do not. Cross-validation
cv_precision = np.mean(cross_val_score(MultinomialNB(), X, y, cv=5)) cv_precision
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
GridSearchCV
np.random.seed(42) param_grid = { 'alpha': [0.01, 0.1, 0.5, 1.0, 10.0] } grid_search = GridSearchCV(estimator = MultinomialNB(), param_grid = param_grid, cv = 10, n_jobs = -1, verbose = 2) grid_search.fit(X_train, y_train) best_grid = grid_search.best_params_ print('best grid = ', ...
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
Comparing results
model_scores = {'MNB': clf_accuracy, 'grid_searche': grid_accuracy} model_compare = pd.DataFrame(model_scores, index=['accuracy']) model_compare.T.plot.bar(); plt.xticks(rotation = 0);
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
Data preparation
clean_df = pd.DataFrame(columns=df.columns, index=df.index) residual_df = pd.DataFrame(columns=df.columns, index=df.index) for col in df.columns: residual, clean = remove_periodic(df[col].tolist(), df.index, frequency_threshold=0.01e12) clean_df[col] = clean.tolist() residual_df[col] = residual.tolist() tr...
_____no_output_____
MIT
notebooks/180418 - FCM SpatioTemporal.ipynb
cseveriano/spatio-temporal-forecasting
Fuzzy C-Means
from pyFTS.partitioners import FCM
_____no_output_____
MIT
notebooks/180418 - FCM SpatioTemporal.ipynb
cseveriano/spatio-temporal-forecasting
Adagrad from scratch
from mxnet import ndarray as nd # Adagrad. def adagrad(params, sqrs, lr, batch_size): eps_stable = 1e-7 for param, sqr in zip(params, sqrs): g = param.grad / batch_size sqr[:] += nd.square(g) div = lr * g / nd.sqrt(sqr + eps_stable) param[:] -= div import mxnet as mx from mxnet...
Batch size 10, Learning rate 0.900000, Epoch 1, loss 5.3231e-05 Batch size 10, Learning rate 0.900000, Epoch 2, loss 4.9388e-05 Batch size 10, Learning rate 0.900000, Epoch 3, loss 4.9256e-05 w: [[ 1.99946415 -3.39996123]] b: 4.19967
Apache-2.0
chapter06_optimization/adagrad-scratch.ipynb
sgeos/mxnet_the_straight_dope
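The same update rule can be sketched without MXNet. Here is a scalar pure-Python version minimizing a toy quadratic (the objective, learning rate, and iteration count are illustrative, not the chapter's regression setup):

```python
import math

def adagrad_step(w, grad, sqr, lr, eps=1e-7):
    """One Adagrad update on a scalar parameter."""
    sqr += grad ** 2                       # accumulate squared gradients
    w -= lr * grad / math.sqrt(sqr + eps)  # per-parameter adaptive step size
    return w, sqr

# minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3)
w, sqr = 0.0, 0.0
for _ in range(500):
    grad = 2.0 * (w - 3.0)
    w, sqr = adagrad_step(w, grad, sqr, lr=1.0)
print(w)  # converges close to 3
```

Because the accumulated `sqr` only grows, the effective step size shrinks over time, which is Adagrad's defining behaviour.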
Flatten all battles, campaigns, etc.
def _flatten_battles(battles, root=None): battles_to_run = copy(battles) records = [] for name, data in battles.items(): if 'children' in data: children = data.pop('children') records.extend(_flatten_battles(children, root=name)) else: data['level'] =...
_____no_output_____
MIT
Chapter11/_json_to_table.ipynb
Drtaylor1701/Learn-Python-by-Building-Data-Science-Applications
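The recursive flattening logic can be illustrated on a tiny nested dict (hypothetical data that mirrors the `children` convention used above):

```python
def flatten(tree, root=None):
    """Flatten nested {'name': {'children': {...}}} dicts into leaf records."""
    records = []
    for name, data in tree.items():
        if 'children' in data:
            # recurse into the subtree, remembering the parent name
            records.extend(flatten(data['children'], root=name))
        else:
            record = dict(data)  # copy, so the input tree is not mutated
            record['name'] = name
            record['parent'] = root
            records.append(record)
    return records

tree = {'Eastern Front': {'children': {
    'Moscow': {'result': 'win'},
    'Stalingrad': {'result': 'win'}}}}
rows = flatten(tree)
print(rows)
```

Unlike the original, this sketch copies each leaf rather than popping `'children'` in place, so the input dict is left intact.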
Store as CSV
for front, data in records.items(): data.to_csv(f'./data/{front}.csv', index=None)
_____no_output_____
MIT
Chapter11/_json_to_table.ipynb
Drtaylor1701/Learn-Python-by-Building-Data-Science-Applications
Stanza Dependency Parsing Navigation:* [General Info](info)* [Setting up Stanza for training](setup)* [Preparing Dataset for DEPPARSE](prepare)* [Training a Dependency Parser with Stanza](depparse)* [Using Trained Model for Prediction](predict)* [Prediction and Saving to CONLL-U](save) General Info [`Link to Manual`...
#!pip install stanza # !pip install -U stanza
_____no_output_____
CC0-1.0
stanza/stanza_depparse.ipynb
steysie/parse-xplore
Run in terminal. 1. Clone the Stanza GitHub repository ```$ git clone https://github.com/stanfordnlp/stanza``` 2. Move to the cloned git repository & download the embeddings ({lang}.vectors.xz format) (run in a screen; this takes several hours, depending on Internet speed). Make sure the vectors are in the `/extern_data/word2vec` folder....
import stanza stanza.download('ru')
Downloading https://raw.githubusercontent.com/stanfordnlp/stanza-resources/master/resources_1.0.0.json: 115kB [00:00, 2.47MB/s] 2020-07-24 11:27:31 INFO: Downloading default packages for language: ru (Russian)... 2020-07-24 11:27:32 INFO: File exists: /home/steysie/stanza_resources/ru/default.zip. 2...
CC0-1.0
stanza/stanza_depparse.ipynb
steysie/parse-xplore
Preparing Dataset for DEPPARSE
from corpuscula.corpus_utils import syntagrus, download_ud, Conllu from corpuscula import corpus_utils import junky import corpuscula.corpus_utils as cu import stanza # cu.set_root_dir('.') # !pip install -U junky corpus_utils.download_syntagrus(root_dir=corpus_utils.get_root_dir(), overwrite=True) junky.clear_tqdm() ...
Load corpus
CC0-1.0
stanza/stanza_depparse.ipynb
steysie/parse-xplore
Training a Dependency Parser with Stanza **`STEP 1`**`Input files for DEPPARSE model training should be placed here:` **`{UDBASE}/{corpus}/{corpus_short}-ud-{train,dev,test}.conllu`**, where * **`{UDBASE}`** is `./stanza/udbase/` (specified in `config.sh`), * **`{corpus}`** is full corpus name (e.g. `UD_Russian-SynTag...
nlp = stanza.Pipeline('ru', processors='tokenize,pos,lemma,ner,depparse', depparse_model_path='stanza/saved_models/depparse/ru_syntagrus_parser.pt', tokenize_pretokenized=True) doc = nlp(' '.join(test[0])) print(*[f'id: {word.id}\tword: {word...
_____no_output_____
CC0-1.0
stanza/stanza_depparse.ipynb
steysie/parse-xplore
Prediction and Saving Results to CONLL-U
junky.clear_tqdm() Conllu.save(stanza_parse(test), 'stanza_syntagrus.conllu', fix=True, log_file=None)
2020-07-28 13:24:45 INFO: Loading these models for language: ru (Russian): ======================================= | Processor | Package | --------------------------------------- | tokenize | syntagrus | | pos | syntagrus | | lemma | syntagrus | | dep...
CC0-1.0
stanza/stanza_depparse.ipynb
steysie/parse-xplore
Inference on Test Corpus
# !pip install mordl from mordl import conll18_ud_eval gold_file = 'corpus/_UD/UD_Russian-SynTagRus/ru_syntagrus-ud-test.conllu' system_file = 'stanza_syntagrus.conllu' conll18_ud_eval(gold_file, system_file, verbose=True, counts=False)
0%| | 0/6491 [34:50<?, ?it/s]
CC0-1.0
stanza/stanza_depparse.ipynb
steysie/parse-xplore
ISFOG 2020 - Pile driving prediction eventData science techniques are rapidly transforming businesses in a broad range of sectors. While marketing and social applications have received most attention to date, geotechnical engineering can also benefit from data science tools that are now readily available. In the conte...
import pandas as pd import numpy as np import matplotlib.pyplot as plt import sklearn %matplotlib inline
_____no_output_____
Apache-2.0
analysis/isfog-2020-linear-model-demo.ipynb
alexandershires/offshore-geo
2. Pile driving dataThe dataset is kindly provided by [Cathie Group](http://www.cathiegroup.com). 2.1. Importing dataThe first step in any data science exercise is to get familiar with the data. The data is provided in a csv file (```training_data.csv```). We can import the data with Pandas and display the first five ...
data = pd.read_csv("/kaggle/input/training_data.csv") # Store the contents of the csv file in the variable 'data' data.head()
_____no_output_____
Apache-2.0
analysis/isfog-2020-linear-model-demo.ipynb
alexandershires/offshore-geo
The data has 12 columns, containing PCPT data ($ q_c $, $ f_s $ and $ u_2 $), recorded hammer data (blowcount, normalised hammer energy, normalised ENTHRU and total number of blows), pile data (diameter, bottom wall thickness and pile final penetration). A unique ID identifies the location and $ z $ defines the depth b...
data.describe()
_____no_output_____
Apache-2.0
analysis/isfog-2020-linear-model-demo.ipynb
alexandershires/offshore-geo
2.3. PlottingWe can plot the cone tip resistance, blowcount and normalised ENTHRU energy for all locations to show how the data varies with depth. We can generate this plot using the ```Matplotlib``` package.
fig, ((ax1, ax2, ax3)) = plt.subplots(1, 3, sharey=True, figsize=(16,9)) ax1.scatter(data["qc [MPa]"], data["z [m]"], s=5) # Create the cone tip resistance vs depth plot ax2.scatter(data["Blowcount [Blows/m]"], data["z [m]"], s=5) # Create the Blowcount vs depth plot ax3.scatter(data["Normalised ENTRHU [-]"], data[...
_____no_output_____
Apache-2.0
analysis/isfog-2020-linear-model-demo.ipynb
alexandershires/offshore-geo
The cone resistance data shows that the site mainly consists of sand of varying relative density. In certain profiles, clay is present below 10m. There are also locations with very high cone resistance (>70MPa).The blowcount profile shows that blowcount is relatively well clustered around a generally increasing trend w...
# Select the data where the column 'Location ID' is equal to the location name location_data = data[data["Location ID"] == "EK"]
_____no_output_____
Apache-2.0
analysis/isfog-2020-linear-model-demo.ipynb
alexandershires/offshore-geo