markdown | code | output | license | path | repo_name |
|---|---|---|---|---|---|
`enumerate()`: in case you also need to know the index: | for indice, valor in enumerate(mi_lista):
    print('indice: {}, valor: {}'.format(indice, valor)) | _____no_output_____ | MIT | notebooks/beginner/notebooks/for_loops.ipynb | mateodif/learn-python3 |
Iterating over dictionaries | mi_dicc = {'hacker': True, 'edad': 72, 'nombre': 'John Doe'}
for valor in mi_dicc:
print(valor)
for llave, valor in mi_dicc.items():
print('{}={}'.format(llave, valor)) | _____no_output_____ | MIT | notebooks/beginner/notebooks/for_loops.ipynb | mateodif/learn-python3 |
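Note that iterating over a dictionary directly yields its keys; the `.keys()` and `.values()` views make the intent explicit (a small illustrative addition, reusing `mi_dicc` from above):

```python
for llave in mi_dicc.keys():    # iterate over the keys explicitly
    print(llave)

for valor in mi_dicc.values():  # iterate over the values only
    print(valor)
```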
`range()` | for numero in range(5):
print(numero)
for numero in range(2, 5):
print(numero)
for numero in range(0, 10, 2): # the last argument is the step
print(numero) | _____no_output_____ | MIT | notebooks/beginner/notebooks/for_loops.ipynb | mateodif/learn-python3 |
Pos-Tagging & Feature Extraction. Following normalisation, we can now proceed to pos-tagging and feature extraction. Let's start with pos-tagging. POS-tagging: part-of-speech tagging is one of the most important text analysis tasks, used to classify words into their parts of speech and label them according t... | import pandas as pd
df0 = pd.read_csv("../../data/interim/001_normalised_keyed_reviews.csv", sep="\t", low_memory=False)
df0.head()
# For monitoring duration of pandas processes
from tqdm import tqdm, tqdm_pandas
# To avoid RuntimeError: Set changed size during iteration
tqdm.monitor_interval = 0
# Register `pandas.p... | Progress:: 100%|██████████| 582711/582711 [00:12<00:00, 48007.55it/s]
| MIT | notebooks/test/002_pos_tagging-Copy1.ipynb | VictorQuintana91/Thesis |
Unfortunately, this tagger, though much more accurate, takes a lot of time: processing the above data set would need close to 3 days of running. Follow this link for more info on the tagger: https://nlp.stanford.edu/software/tagger.shtml | from nltk.tag import StanfordPOSTagger
from nltk import word_tokenize
# import os
# os.getcwd()
# Add the jar and model via their path (instead of setting environment variables):
jar = '../../models/stanford-postagger-full-2017-06-09/stanford-postagger.jar'
model = '../../models/stanford-postagger-full-2017-06-09/mod... | _____no_output_____ | MIT | notebooks/test/002_pos_tagging-Copy1.ipynb | VictorQuintana91/Thesis |
Thankfully, `nltk` provides documentation for each tag, which can be queried using the tag itself, e.g., `nltk.help.upenn_tagset('RB')`, or a regular expression. `nltk` also provides a batch pos-tagging method for document pos-tagging: | tagged_df['reviewText'][8] | _____no_output_____ | MIT | notebooks/test/002_pos_tagging-Copy1.ipynb | VictorQuintana91/Thesis |
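As an illustration of the tag-documentation queries just mentioned (a small sketch; it assumes the `tagsets` and default tagger resources are downloadable in your environment):

```python
import nltk
nltk.download('tagsets', quiet=True)                     # documentation for the tag sets
nltk.download('averaged_perceptron_tagger', quiet=True)  # default tagger model
nltk.help.upenn_tagset('RB')    # describe a single tag
nltk.help.upenn_tagset('NN.*')  # regular-expression query over the noun tags
# Batch pos-tagging over several pre-tokenized sentences:
nltk.pos_tag_sents([['The', 'cat', 'sat'], ['Dogs', 'bark']])
```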
The list of all possible tags appears below:

| Tag | Description |
|------|------------------------------------------|
| CC | Coordinating conjunction |
| CD | Cardinal number |
| DT | Determiner |
| EX | Existential there |
... | ## Join with Original Key and Persist Locally to avoid re-processing
uniqueKey_series_df = df0[['uniqueKey']]
uniqueKey_series_df.head()
pos_tagged_keyed_reviews = pd.concat([uniqueKey_series_df, tagged_df], axis=1);
pos_tagged_keyed_reviews.head()
pos_tagged_keyed_reviews.to_csv("../data/interim/002_pos_tagged_keyed_r... | _____no_output_____ | MIT | notebooks/test/002_pos_tagging-Copy1.ipynb | VictorQuintana91/Thesis |
Nouns. Nouns generally refer to people, places, things, or concepts, e.g.: woman, Scotland, book, intelligence. Nouns can appear after determiners and adjectives, and can be the subject or object of the verb. The simplified noun tags are `N` for common nouns like book, and `NP` for proper nouns like Scotland. | def noun_collector(word_tag_list):
    if len(word_tag_list) > 0:
        return [word for (word, tag) in word_tag_list if tag in {'NN', 'NNS', 'NNP', 'NNPS'}]
    return []  # return an empty list (rather than an implicit None) for empty inputs
nouns_df = pd.DataFrame(tagged_df['reviewText'].progress_apply(lambda review: noun_collector(review)))
nouns_df.head()
keyed_nouns_df = pd.concat([uniqueKey_seri... | _____no_output_____ | MIT | notebooks/test/002_pos_tagging-Copy1.ipynb | VictorQuintana91/Thesis |
CNTK 201B: Hands-On Lab, Image Recognition. This hands-on lab shows how to implement an image-recognition task using a [convolution network][] with the CNTK v2 Python API. You will start with a basic feedforward CNN architecture to classify the CIFAR dataset, then keep adding advanced features to your network. Fina... | # Figure 1
Image(url="https://cntk.ai/jup/201/cifar-10.png", width=500, height=500) | _____no_output_____ | RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
The above image is from: https://www.cs.toronto.edu/~kriz/cifar.html Convolutional Neural Network (CNN). A convolutional neural network (CNN) is a feedforward network composed of a stack of layers, arranged so that the output of one layer is fed to the next (there are more complex architectures that skip layers; we will ... | # Figure 2
Image(url="https://cntk.ai/jup/201/Conv2D.png") | _____no_output_____ | RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
The stack of output feature maps is the input to the next layer. | # Figure 3
Image(url="https://cntk.ai/jup/201/Conv2DFeatures.png") | _____no_output_____ | RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
> Gradient-Based Learning Applied to Document Recognition, Proceedings of the IEEE, 86(11):2278-2324, November 1998
> Y. LeCun, L. Bottou, Y. Bengio and P. Haffner

In CNTK, here is the [convolution][] layer in Python:

```python
def Convolution(filter_shape,  # e.g. (3,3)
                num_filters,   # e.g. 64
                ...
```
... | # Figure 4
Image(url="https://cntk.ai/jup/201/MaxPooling.png", width=400, height=400) | _____no_output_____ | RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
In CNTK, here is the [pooling][] layer in Python:

```python
# Max pooling
def MaxPooling(filter_shape,   # e.g. (3,3)
               strides,        # e.g. (2,2)
               pad)            # True or False

# Average pooling
def AveragePooling(filter_shape,   # e.g. (3,3)
                   strides,        # e.g. (2,2)
                   pad)            # True or False
```
... | from __future__ import print_function
import os
import numpy as np
import matplotlib.pyplot as plt
import math
from cntk.layers import default_options, Convolution, MaxPooling, AveragePooling, Dropout, BatchNormalization, Dense, Sequential, For
from cntk.io import MinibatchSource, ImageDeserializer, StreamDef, StreamD... | _____no_output_____ | RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
Now that we have imported the needed modules, let's implement our first CNN, as shown in Figure 5 above, using the CNTK layer API: | def create_basic_model(input, out_dims):
net = Convolution((5,5), 32, init=glorot_uniform(), activation=relu, pad=True)(input)
net = MaxPooling((3,3), strides=(2,2))(net)
net = Convolution((5,5), 32, init=glorot_uniform(), activation=relu, pad=True)(net)
net = MaxPooling((3,3), strides=(2,2))(net)... | _____no_output_____ | RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
To train the above model we need two things: * Read the training images and their corresponding labels. * Define a cost function, compute the cost for each mini-batch, and update the model weights according to the cost value. To read the data in CNTK, we will use CNTK readers, which handle data augmentation and can fetch da... | # model dimensions
image_height = 32
image_width = 32
num_channels = 3
num_classes = 10
#
# Define the reader for both training and evaluation action.
#
def create_reader(map_file, mean_file, train):
if not os.path.exists(map_file) or not os.path.exists(mean_file):
        raise RuntimeError("This tutorial depe... | _____no_output_____ | RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
Now let us write the training and validation loop. | #
# Train and evaluate the network.
#
def train_and_evaluate(reader_train, reader_test, max_epochs, model_func):
# Input variables denoting the features and label data
input_var = input_variable((num_channels, image_height, image_width))
label_var = input_variable((num_classes))
# Normalize the input
... | Training 116906 parameters in 10 parameter tensors.
Finished Epoch[1 of 300]: [Training] loss = 2.062444 * 50000, metric = 75.3% * 50000 13.316s (3754.8 samples per second);
Finished Epoch[2 of 300]: [Training] loss = 1.675133 * 50000, metric = 61.7% * 50000 13.772s (3630.5 samples per second);
Finished Epoch[3 of 300... | RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
Although this model is very simple, it still has too much code; we can do better. Here is the same model in a more terse format: | def create_basic_model_terse(input, out_dims):
with default_options(activation=relu):
model = Sequential([
For(range(3), lambda i: [
Convolution((5,5), [32,32,64][i], init=glorot_uniform(), pad=True),
MaxPooling((3,3), strides=(2,2))
]),
D... | Training 116906 parameters in 10 parameter tensors.
Finished Epoch[1 of 300]: [Training] loss = 2.054147 * 50000, metric = 75.0% * 50000 13.674s (3656.6 samples per second);
Finished Epoch[2 of 300]: [Training] loss = 1.695077 * 50000, metric = 62.6% * 50000 14.271s (3503.7 samples per second);
Finished Epoch[3 of 300... | RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
Now that we have a trained model, let's classify the following image: | # Figure 6
Image(url="https://cntk.ai/jup/201/00014.png", width=64, height=64)
import PIL
def eval(pred_op, image_path):
label_lookup = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"]
image_mean = 133.0
image_data = np.array(PIL.Image.open(image_path), dtype=n... | Top 3 predictions:
Label: truck , confidence: 98.95%
Label: ship , confidence: 0.46%
Label: automobile, confidence: 0.26%
| RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
Adding a dropout layer, with a drop rate of 0.25, before the last dense layer: | def create_basic_model_with_dropout(input, out_dims):
with default_options(activation=relu):
model = Sequential([
For(range(3), lambda i: [
Convolution((5,5), [32,32,64][i], init=glorot_uniform(), pad=True),
MaxPooling((3,3), strides=(2,2))
]),
... | Training 116906 parameters in 10 parameter tensors.
Finished Epoch[1 of 300]: [Training] loss = 2.123667 * 50000, metric = 78.7% * 50000 16.391s (3050.5 samples per second);
Finished Epoch[2 of 300]: [Training] loss = 1.817045 * 50000, metric = 67.9% * 50000 16.894s (2959.5 samples per second);
Finished Epoch[3 of 300... | RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
Add batch normalization after each convolution and before the last dense layer: | def create_basic_model_with_batch_normalization(input, out_dims):
with default_options(activation=relu):
model = Sequential([
For(range(3), lambda i: [
Convolution((5,5), [32,32,64][i], init=glorot_uniform(), pad=True),
BatchNormalization(map_rank=1),
... | Training 117290 parameters in 18 parameter tensors.
Finished Epoch[1 of 300]: [Training] loss = 1.512835 * 50000, metric = 54.1% * 50000 15.499s (3226.1 samples per second);
Finished Epoch[2 of 300]: [Training] loss = 1.206524 * 50000, metric = 42.8% * 50000 16.071s (3111.2 samples per second);
Finished Epoch[3 of 300... | RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
Let's implement a VGG-inspired network using the layer API; here is the architecture:

| VGG9 |
| ------------- |
| conv3-64 |
| conv3-64 |
| max3 |
| |
| conv3-96 |
| conv3-96 |
| max3 |
| |
| conv3-128 |
| conv3-128 |
| max3 |
... | def create_vgg9_model(input, out_dims):
with default_options(activation=relu):
model = Sequential([
For(range(3), lambda i: [
Convolution((3,3), [64,96,128][i], init=glorot_uniform(), pad=True),
Convolution((3,3), [64,96,128][i], init=glorot_uniform(), pad=True),
... | Training 2675978 parameters in 18 parameter tensors.
Finished Epoch[1 of 300]: [Training] loss = 2.253115 * 50000, metric = 83.6% * 50000 46.007s (1086.8 samples per second);
Finished Epoch[2 of 300]: [Training] loss = 1.931100 * 50000, metric = 71.8% * 50000 46.236s (1081.4 samples per second);
Finished Epoch[3 of 30... | RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
Residual Network (ResNet). One of the main problems of a deep neural network is how to propagate the error all the way back to the first layer. For a deep network, the gradient keeps getting smaller until it has no effect on the network weights. [ResNet](https://arxiv.org/abs/1512.03385) was designed to overcome this problem, ... | # Figure 7
Image(url="https://cntk.ai/jup/201/ResNetBlock2.png") | _____no_output_____ | RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
The idea of the above block is twofold: * During backpropagation the gradient has a path that doesn't affect its magnitude. * The network only needs to learn the residual mapping (the delta to x). So let's implement ResNet blocks using CNTK: ResNetNode ResNetNodeInc | ... | from cntk.ops import combine, times, element_times, AVG_POOLING
def convolution_bn(input, filter_size, num_filters, strides=(1,1), init=he_normal(), activation=relu):
if activation is None:
activation = lambda x: x
r = Convolution(filter_size, num_filters, strides=strides, init=init, activatio... | _____no_output_____ | RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
Let's write the full model: | def create_resnet_model(input, out_dims):
conv = convolution_bn(input, (3,3), 16)
r1_1 = resnet_basic_stack(conv, 16, 3)
r2_1 = resnet_basic_inc(r1_1, 32)
r2_2 = resnet_basic_stack(r2_1, 32, 2)
r3_1 = resnet_basic_inc(r2_2, 64)
r3_2 = resnet_basic_stack(r3_1, 64, 2)
# Global average pooli... | Training 272474 parameters in 65 parameter tensors.
Finished Epoch[1 of 300]: [Training] loss = 1.859668 * 50000, metric = 69.3% * 50000 47.499s (1052.7 samples per second);
Finished Epoch[2 of 300]: [Training] loss = 1.583096 * 50000, metric = 58.7% * 50000 48.541s (1030.0 samples per second);
Finished Epoch[3 of 300... | RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
Objective * 2019-08-15: * Given stock returns for the last N days, we predict days N+1 through N+H, where H is the forecast horizon * We use double exponential smoothing for the prediction | %matplotlib inline
import math
import matplotlib
import numpy as np
import pandas as pd
import seaborn as sns
import time
from collections import defaultdict
from datetime import date, datetime, time, timedelta
from matplotlib import pyplot as plt
from pylab import rcParams
from sklearn.metrics import mean_squared_er... | We will start forecasting on day 1009
| Apache-2.0 | StockReturnsPrediction_fh21/StockReturnsPrediction_v2_DExpSmoothing.ipynb | clairvoyant/Stocks |
Common functions | def get_smape(y_true, y_pred):
"""
Compute symmetric mean absolute percentage error
"""
y_true, y_pred = np.array(y_true), np.array(y_pred)
return 100/len(y_true) * np.sum(2 * np.abs(y_pred - y_true) / (np.abs(y_true) + np.abs(y_pred)))
def get_mape(y_true, y_pred):
"""
Compute mean absolu... | _____no_output_____ | Apache-2.0 | StockReturnsPrediction_fh21/StockReturnsPrediction_v2_DExpSmoothing.ipynb | clairvoyant/Stocks |
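The `get_mape` definition is cut off above; the standard mean absolute percentage error it computes is (a sketch, assuming `y_true` contains no zeros):

```python
def get_mape(y_true, y_pred):
    """Compute mean absolute percentage error (MAPE)."""
    y_true, y_pred = np.array(y_true), np.array(y_pred)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
```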
Load data | df = pd.read_csv(stk_path, sep = ",")
# Convert Date column to datetime
df.loc[:, 'Date'] = pd.to_datetime(df['Date'],format='%Y-%m-%d')
# Change all column headings to be lower case, and remove spacing
df.columns = [str(x).lower().replace(' ', '_') for x in df.columns]
# Sort by datetime
df.sort_values(by='date', i... | _____no_output_____ | Apache-2.0 | StockReturnsPrediction_fh21/StockReturnsPrediction_v2_DExpSmoothing.ipynb | clairvoyant/Stocks |
Get Stock Returns | df['returns'] = df['adj_close'].pct_change() * 100
df.loc[0, 'returns'] = 0 # set the first value of returns to be 0 for simplicity
df.head()
# Plot returns over time
rcParams['figure.figsize'] = 10, 8 # width 10, height 8
ax = df.plot(x='date', y='returns', style='b-', grid=True)
ax.set_xlabel("date")
ax.set_ylabel("r... | _____no_output_____ | Apache-2.0 | StockReturnsPrediction_fh21/StockReturnsPrediction_v2_DExpSmoothing.ipynb | clairvoyant/Stocks |
Predict for a specific H (forecast horizon) and a specific date | i = train_val_size # Predict for day i, for the next H-1 days. Note indexing of days start from 0.
print("Predicting on day %d, date %s, with forecast horizon H = %d" % (i, df.iloc[i]['date'], H))
# Predict
preds_list = double_exponential_smoothing(df['returns'][i-train_val_size:i].values, H)
print("For forecast horizo... | _____no_output_____ | Apache-2.0 | StockReturnsPrediction_fh21/StockReturnsPrediction_v2_DExpSmoothing.ipynb | clairvoyant/Stocks |
Predict for a specific H (forecast horizon) and a specific date, with hyperparameter tuning - alpha, beta | i = train_val_size # Predict for day i, for the next H-1 days. Note indexing of days start from 0.
print("Predicting on day %d, date %s, with forecast horizon H = %d" % (i, df.iloc[i]['date'], H))
# Get optimum hyperparams
alpha_opt, beta_opt, err_df = hyperpram_tune_alpha_beta(df['returns'][i-train_val_size:i].values,... | _____no_output_____ | Apache-2.0 | StockReturnsPrediction_fh21/StockReturnsPrediction_v2_DExpSmoothing.ipynb | clairvoyant/Stocks |
Predict for a specific H (forecast horizon), and various dates, using the model trained in the previous step | print("alpha_opt = " + str(alpha_opt))
print("beta_opt = " + str(beta_opt))
# Predict and compute error metrics also
rmse = [] # root mean square error
mape = [] # mean absolute percentage error
mae = [] # mean absolute error
smape = [] # symmetric mean absolute percentage error
preds_dict = {}
i_list = range(train_va... | _____no_output_____ | Apache-2.0 | StockReturnsPrediction_fh21/StockReturnsPrediction_v2_DExpSmoothing.ipynb | clairvoyant/Stocks |
Predict for a specific H (forecast horizon), and various dates, tuning the model for every prediction | # Predict and compute error metrics also
preds_dict = {}
results_final = defaultdict(list)
i_list = range(train_val_size, train_val_size+84*5+42+1, 42)
for i in i_list:
print("Predicting on day %d, date %s" % (i, df.iloc[i]['date']))
# Get optimum hyperparams
alpha_opt, beta_opt, err_df = hyperpram_tun... | _____no_output_____ | Apache-2.0 | StockReturnsPrediction_fh21/StockReturnsPrediction_v2_DExpSmoothing.ipynb | clairvoyant/Stocks |
This notebook generates plots of activation functions. Figures generated include: Fig. 1a and Supp. Fig. 7. | import os
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
x = np.linspace(-5,5,50)
x_neg = np.linspace(-5,0,50)
x_pos = np.linspace(0, 5,50)
x_elu = np.concatenate([x_neg, x_pos])
elu = np.concatenate([.5*(np.exp(x_neg)-1), x_pos])
fig = plt.figure(figsize=(5,5))
ax = plt.subplot(111)
plt.plo... | _____no_output_____ | MIT | code/plot_activation_functions.ipynb | p-koo/exponential_activations |
Developing an AI application. Going forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smartphone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall applicat... | # Imports here
import numpy as np
import torch
import data_utils
import train_f as train
from utils import get_saved_model, get_device, get_checkpoints_path, evaluate_model
import predict_f as predict
import matplotlib.pyplot as plt | _____no_output_____ | MIT | Image Classifier Project.ipynb | stoykostanchev/aipnd-project |
Load the data. Here you'll use `torchvision` to load the data ([documentation](http://pytorch.org/docs/0.3.0/torchvision/index.html)). The data should be included alongside this notebook, otherwise you can [download it here](https://s3.amazonaws.com/content.udacity-data.com/nd089/flower_data.tar.gz). The dataset is spli... | data_dir = 'flowers'
train_dir = data_dir + '/train'
valid_dir = data_dir + '/valid'
test_dir = data_dir + '/test'
# TODO: Define your transforms for the training, validation, and testing sets
dataloaders, image_datasets, data_transforms = data_utils.get_data(data_dir, train_dir, valid_dir, test_dir) | _____no_output_____ | MIT | Image Classifier Project.ipynb | stoykostanchev/aipnd-project |
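The transforms themselves live in the notebook's `data_utils` module; a typical definition for this kind of project looks like the sketch below (illustrative only, not the module's actual code):

```python
from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.RandomRotation(30),        # augmentation for the training set
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],   # ImageNet statistics,
                         [0.229, 0.224, 0.225]),  # expected by pretrained models
])
```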
Label mapping. You'll also need to load in a mapping from category label to category name. You can find this in the file `cat_to_name.json`. It's a JSON object which you can read in with the [`json` module](https://docs.python.org/2/library/json.html). This will give you a dictionary mapping the integer encoded categori... | import json
with open('cat_to_name.json', 'r') as f:
cat_to_name = json.load(f) | _____no_output_____ | MIT | Image Classifier Project.ipynb | stoykostanchev/aipnd-project |
Building and training the classifier. Now that the data is ready, it's time to build and train the classifier. As usual, you should use one of the pretrained models from `torchvision.models` to get the image features. Build and train a new feed-forward classifier using those features. We're going to leave this part up to... | # arch = 'resnet34'
# arch = 'inception_v3' -> Expected tensor for argument #1 'input' to have the same dimension as tensor for 'result'; but 4 does not equal 2 (while checking arguments for cudnn_convolution)
# arch = 'densenet161'
arch = 'vgg16'
train.train(data_dir, cat_to_name, './', max_epochs=1, arch=arch) | - Loaded model from a checkpoint -
- Input features: - 25088
- Continuing training from a previous state -
- Starting from epoch: 0
- End epoch: 1
- Training ... -
Epoch: 1/1 ... Steps: 50 ... Train loss: 1.3225 ... Train accuracy: 65%
Epoch: 1/1 ... Steps: 100 ... Train loss: 1.3011 ... Train accuracy: 66%
Evaluatin... | MIT | Image Classifier Project.ipynb | stoykostanchev/aipnd-project |
Testing your network. It's good practice to test your trained network on test data, images the network has never seen either in training or validation. This will give you a good estimate for the model's performance on completely new images. Run the test images through the network and measure the accuracy, the same way y... | model = get_saved_model(arch=arch)
model.to(get_device())
model.eval()
acc, _ = evaluate_model(dataloaders['test'], model)
print('Accuracy on the test dataset: %d %%' % (acc)) | - Loaded model from a checkpoint -
- Input features: - 25088
- Calculating accuracy and loss ... -
Accuracy on the test dataset: 65 %
| MIT | Image Classifier Project.ipynb | stoykostanchev/aipnd-project |
Save the checkpoint. Now that your network is trained, save the model so you can load it later for making predictions. You probably want to save other things such as the mapping of classes to indices which you get from one of the image datasets: `image_datasets['train'].class_to_idx`. You can attach this to the model as... | # See utils.save_checkpoint
Loading the checkpoint. At this point it's good to write a function that can load a checkpoint and rebuild the model. That way you can come back to this project and keep working on it without having to retrain the network. | # See utils.get_saved_model
Inference for classification. Now you'll write a function to use a trained network for inference. That is, you'll pass an image into the network and predict the class of the flower in the image. Write a function called `predict` that takes an image and a model, then returns the top $K$ most likely classes along with the... | # See predict.process_image
To check your work, the function below converts a PyTorch tensor and displays it in the notebook. If your `process_image` function works, running the output through this function should return the original image (except for the cropped out portions). | def imshow(image, ax=None, title=None):
"""Imshow for Tensor."""
if ax is None:
fig, ax = plt.subplots()
# PyTorch tensors assume the color channel is the first dimension
# but matplotlib assumes is the third dimension
image = image.numpy().transpose((1, 2, 0))
# Undo preproces... | _____no_output_____ | MIT | Image Classifier Project.ipynb | stoykostanchev/aipnd-project |
Class Prediction. Once you can get images in the correct format, it's time to write a function for making predictions with your model. A common practice is to predict the top 5 or so (usually called top-$K$) most probable classes. You'll want to calculate the class probabilities then find the $K$ largest values. To get t... |
predict.predict('./flowers/valid/59/image_05034.jpg', get_saved_model(arch=arch)) | - Loaded model from a checkpoint -
- Input features: - 25088
| MIT | Image Classifier Project.ipynb | stoykostanchev/aipnd-project |
Sanity Checking. Now that you can use a trained model for predictions, check to make sure it makes sense. Even if the testing accuracy is high, it's always good to check that there aren't obvious bugs. Use `matplotlib` to plot the probabilities for the top 5 classes as a bar graph, along with the input image. It should ... | # TODO: Display an image along with the top 5 classes
model = get_saved_model(arch=arch)
model.eval()
checkpoint = torch.load(get_checkpoints_path(), map_location=str(get_device()))
img_path = './flowers/valid/76/image_02458.jpg'
real_class = checkpoint['cat_to_name'].get(str(img_path.split('/')[3]))
print(real_class)... | - Loaded model from a checkpoint -
- Input features: - 25088
morning glory
| MIT | Image Classifier Project.ipynb | stoykostanchev/aipnd-project |
Example of generating basic metrics with Spark > Rather than following a fixed recipe for generating basic metrics, this example explains the process of starting from the most intuitive approach and improving it continuously. Since this is the first example, to keep the metrics simple we assume the service launched on 2020/10/25 and the metrics are aggregated as of 2020/10/26. * Reading the raw data as-is * Using the DataFrame API * Using spark.sql * Hands-on extraction of basic metrics (DAU, PU) * Adding a date filter * Adding a date filter to the data ... | from pyspark.sql import SparkSession
from pyspark.sql.functions import *
spark = SparkSession \
.builder \
.appName("Data Engineer Basic Day3") \
.config("spark.dataengineer.basic.day3", "tutorial-1") \
.getOrCreate()
spark.read.option("inferSchema", "true").option("header", "true").json("access/202010... | +--------+---------+-------+
|col_name|data_type|comment|
+--------+---------+-------+
| u_id| int| null|
| u_name| string| null|
|u_gender| string| null|
|u_signup| int| null|
+--------+---------+-------+
+--------+---------+-------+
|col_name|data_type|comment|
+--------+---------+-------+
... | Apache-2.0 | day1/notebooks/.ipynb_checkpoints/lgde_basic-checkpoint.ipynb | sw-woo/data-engineer-basic-training |
Task 1. Using the given data, compute the DAU and PU as of 2020/10/25. * DAU: Daily Active Users, the number of users who accessed the service per day; obtained as the unique a_uid values in log_access. * PU: Purchasing Users, the number of purchasers per day; obtained as the unique p_uid values in tbl_purchase. > Before computing the values, we register temp views so that Spark SQL can be used instead of the Spark API: [createOrReplaceTempView](https://spark.apache.org/docs/latest/api/pytho... | # DAU - access
spark.sql("select a_time as a_time, a_uid from access").show()
dau = spark.sql("select count(distinct a_uid) as DAU from access where a_time >= '2020-10-25 00:00:00' and a_time < '2020-10-26 00:00:00'")
dau.show()
# PU - purchase
spark.sql("select p_time, p_uid from purchase").show()
pu = spark.sql("sele... | _____no_output_____ | Apache-2.0 | day1/notebooks/.ipynb_checkpoints/lgde_basic-checkpoint.ipynb | sw-woo/data-engineer-basic-training |
Task 2. Using the given data, compute the ARPU and ARPPU as of 2020/10/25. * ARPU: Average Revenue Per User; the day's total revenue (total purchase amount) / the number of users who accessed the service that day (DAU). * ARPPU: Average Revenue Per Paying User; the day's total revenue (total purchase amount) / the number of purchasing users that day (PU). | # ARPU - total purchase amount, dau
query="select sum(p_amount) / {} from purchase where p_time >= '2020-10-25 00:00:00' and p_time < '2020-10-26 00:00:00'".format(v_dau)
print(query)
total_purchase_amount = spark.sql("select sum(p_amount) as total_purchase_amount from purchase where p_time >= '2020-10-25 00:00:00' a... | | ARPPU | 3000000.0 |
| Apache-2.0 | day1/notebooks/.ipynb_checkpoints/lgde_basic-checkpoint.ipynb | sw-woo/data-engineer-basic-training |
Task 3. Using the given data, compute the cumulative revenue and the cumulative number of unique users as of 2020/10/26. * Cumulative revenue: 10/25 (launch) to the present; either read the full log and sum the purchase amounts, or accumulate per-user revenue, persist it, and reuse it. * Cumulative unique users: 10/25 (launch) to the present; either read the full log and count distinct users, or accumulate per-user access information, persist it, and reuse it. | # cumulative revenue
spark.sql("select sum(p_amount) from purchase ").show()
# cumulative number of unique users
spark.sql("select count(distinct a_uid) from access").show() | +-------------+
|sum(p_amount)|
+-------------+
| 16700000|
+-------------+
+---------------------+
|count(DISTINCT a_uid)|
+---------------------+
| 9|
+---------------------+
| Apache-2.0 | day1/notebooks/.ipynb_checkpoints/lgde_basic-checkpoint.ipynb | sw-woo/data-engineer-basic-training |
Task 4. Design and create a dimension table for accumulating per-user information. User dimension table design:

| Column name | Column type | Description |
| :- | :-: | :- |
| d_uid | int | user id |
| d_name | string | customer name |
| d_pamount | int | cumulative purchase amount |
| d_pcount | int | cumulative purchase count |
| d_acount | int | cumulative access count |

| # For the launch day, exceptionally, we write a separate program
#
# 1. Since the access log has the largest record count, extract each user's access count for that day
#    (we treat the number of 'login' events as the access count; rows with only a 'logout' reflect a lost login or the previous day's session, so such cases are excluded)
spark.sql("describe access").show()
spark.sql("select * from access where a_id = 'login' and a_time >= '2020-10-25 00:00:00' and a_time < '20... | _____no_output_____ | Apache-2.0 | day1/notebooks/.ipynb_checkpoints/lgde_basic-checkpoint.ipynb | sw-woo/data-engineer-basic-training |
Task 5. Use the previous day's dimension data to generate cumulative access and revenue metrics | # keep and reuse each customer's state as of the previous day
yesterday = spark.read.parquet(target)
yesterday.sort(yesterday["d_uid"].asc()).show()
# 5. The next day, generate the same metrics, but accumulated on top of the previous day's information
# build a dataset covering both the customers already in the table and today's new customers
yesterday.show()
# since new users must be added, extract only the uids that make up the full population
uid = yesterday.select("d_uid").join(accs.sel... | _____no_output_____ | Apache-2.0 | day1/notebooks/.ipynb_checkpoints/lgde_basic-checkpoint.ipynb | sw-woo/data-engineer-basic-training |
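The accumulation step itself is truncated above; one way to sketch it, under the schema assumed in Task 4 and the hypothetical `accs` DataFrame above, is a full outer join between yesterday's dimension and today's per-user counts, treating missing sides as zero:

```python
from pyspark.sql.functions import coalesce, lit

merged = (yesterday.join(accs, on="d_uid", how="full_outer")
          .select("d_uid",
                  (coalesce(yesterday["d_acount"], lit(0)) +
                   coalesce(accs["d_acount"], lit(0))).alias("d_acount")))
merged.sort("d_uid").show()
```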
Task 6. Write practice examples for inner, left_outer, right_outer, and full_outer joins | valuesA = [('A',1),('B',2),('C',3),('D',4)]
A = spark.createDataFrame(valuesA,['a_id','a_value'])
valuesB = [('C',10),('D',20),('E',30),('F',40)]
B = spark.createDataFrame(valuesB,['b_id','b_value'])
A.join(B, A.a_id == B.b_id, "inner").sort(A.a_id.asc()).show() # C, D
# A.join(B, A.a_id == B.b_id, "left").sort(A.a_... | +---+
| id|
+---+
| A|
| B|
| C|
| D|
| E|
| F|
+---+
| Apache-2.0 | day1/notebooks/.ipynb_checkpoints/lgde_basic-checkpoint.ipynb | sw-woo/data-engineer-basic-training |
AI for Medicine Course 1 Week 1 lecture exercises. Counting labels. As you saw in the lecture videos, one way to avoid having class imbalance impact the loss function is to weight the losses differently. To choose the weights, you first need to calculate the class frequencies. For this exercise, you'll just get the coun... | # Import the necessary packages
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# Read csv file containing training data
train_df = pd.read_csv("nih/train-small.csv")
# Count up the number of instances of each class (drop non-class columns from the cou... | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
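The counting cell is truncated; a sketch of the class-frequency count it describes (assuming the non-class columns in this CSV are named `Image` and `PatientId`):

```python
class_counts = train_df.drop(['Image', 'PatientId'], axis=1).sum()
for column in class_counts.keys():
    print(f"The class {column} has {class_counts[column]} samples")
class_counts.plot(kind='bar')
plt.title('Distribution of Classes for Training Dataset')
```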
Weighted Loss function. Below is an example of calculating a weighted loss. In the assignment, you will calculate a weighted loss function. This sample code will give you some intuition for what the weighted loss function is doing, and also help you practice some syntax you will use in the graded assignment. For this ex... | # Generate an array of 4 binary label values, 3 positive and 1 negative
y_true = np.array(
[[1],
[1],
[1],
[0]])
print(f"y_true: \n{y_true}") | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
Two models. To better understand the loss function, you will pretend that you have two models. - Model 1 always outputs a 0.9 for any example that it's given. - Model 2 always outputs a 0.1 for any example that it's given. | # Make model predictions that are always 0.9 for all examples
y_pred_1 = 0.9 * np.ones(y_true.shape)
print(f"y_pred_1: \n{y_pred_1}")
print()
y_pred_2 = 0.1 * np.ones(y_true.shape)
print(f"y_pred_2: \n{y_pred_2}") | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
Problems with the regular loss function. The learning goal here is to notice that with a regular loss function (not a weighted loss), the model that always outputs 0.9 has a smaller loss (performs better) than model 2. - This is because there is a class imbalance, where 3 out of the 4 labels are 1. - If the data were perf... | loss_reg_1 = -1 * np.sum(y_true * np.log(y_pred_1)) + \
-1 * np.sum((1 - y_true) * np.log(1 - y_pred_1))
print(f"loss_reg_1: {loss_reg_1:.4f}")
loss_reg_2 = -1 * np.sum(y_true * np.log(y_pred_2)) + \
-1 * np.sum((1 - y_true) * np.log(1 - y_pred_2))
print(f"loss_reg_2: {loss_reg_2:.4f}")
... | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
Notice that the loss function gives a greater loss when the predictions are always 0.1, because the data is imbalanced, and has three labels of `1` but only one label for `0`. Given a class imbalance with more positive labels, the regular loss function implies that the model with the higher prediction of 0.9 performs be... | # calculate the positive weight as the fraction of negative labels
w_p = 1/4
# calculate the negative weight as the fraction of positive labels
w_n = 3/4
print(f"positive weight w_p: {w_p}")
print(f"negative weight w_n {w_n}") | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
Model 1 weighted loss. Run the next two cells to calculate the two loss terms separately. Here, `loss_1_pos` and `loss_1_neg` are calculated using the `y_pred_1` predictions. | # Calculate and print out the first term in the loss function, which we are calling 'loss_pos'
loss_1_pos = -1 * np.sum(w_p * y_true * np.log(y_pred_1 ))
print(f"loss_1_pos: {loss_1_pos:.4f}")
# Calculate and print out the second term in the loss function, which we're calling 'loss_neg'
loss_1_neg = -1 * np.sum(w_n * (... | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
Model 2 weighted loss. Now do the same calculations for when the predictions are from `y_pred_2`. Calculate the two terms of the weighted loss function and add them together. | # Calculate and print out the first term in the loss function, which we are calling 'loss_pos'
loss_2_pos = -1 * np.sum(w_p * y_true * np.log(y_pred_2))
print(f"loss_2_pos: {loss_2_pos:.4f}")
# Calculate and print out the second term in the loss function, which we're calling 'loss_neg'
loss_2_neg = -1 * np.sum(w_n * (1... | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
Compare model 1 and model 2 weighted loss | print(f"When the model always predicts 0.9, the total loss is {loss_1:.4f}")
print(f"When the model always predicts 0.1, the total loss is {loss_2:.4f}") | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
What do you notice? Since you used a weighted loss, the calculated loss is the same whether the model always predicts 0.9 or always predicts 0.1. You may have also noticed that when you calculate each term of the weighted loss separately, there is a bit of symmetry when comparing between the two sets of predictions. | print(f"loss_1_pos: {loss_1_pos:.4f} \t loss_1_neg: {loss_1_neg:.4f}")
print()
print(f"loss_2_pos: {loss_2_pos:.4f} \t loss_2_neg: {loss_2_neg:.4f}") | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
Even though there is a class imbalance, where there are 3 positive labels but only one negative label, the weighted loss accounts for this by giving more weight to the negative label than to the positive label. Weighted Loss for more than one class. In this week's assignment, you will calculate the multi-class weighted ... | # View the labels (true values) that you will practice with
y_true = np.array(
[[1,0],
[1,0],
[1,0],
[1,0],
[0,1]
])
y_true | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
Choosing axis=0 or axis=1. You will use `numpy.sum` to count the number of times column `0` has the value 0. First, notice the difference when you set axis=0 versus axis=1. | # See what happens when you set axis=0
print(f"using axis = 0 {np.sum(y_true,axis=0)}")
# Compare this to what happens when you set axis=1
print(f"using axis = 1 {np.sum(y_true,axis=1)}") | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
Notice that if you choose `axis=0`, the sum is taken for each of the two columns. This is what you want to do in this case. If you set `axis=1`, the sum is taken for each row. Calculate the weights. Previously, you visually inspected the data to calculate the fraction of negative and positive labels. Here, you can do ... | # set the positive weights as the fraction of negative labels (0) for each class (each column)
w_p = np.sum(y_true == 0,axis=0) / y_true.shape[0]
w_p
# set the negative weights as the fraction of positive labels (1) for each class
w_n = np.sum(y_true == 1, axis=0) / y_true.shape[0]
w_n | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
In the assignment, you will train a model to try and make useful predictions. In order to make this example easier to follow, you will pretend that your model always predicts the same value for every example. | # Set model predictions where all predictions are the same
y_pred = np.ones(y_true.shape)
y_pred[:,0] = 0.3 * y_pred[:,0]
y_pred[:,1] = 0.7 * y_pred[:,1]
y_pred | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
As before, calculate the two terms that make up the loss function. Notice that you are working with more than one class (represented by columns). In this case, there are two classes. Start by calculating the loss for class `0`.

$$ loss^{(i)} = loss_{pos}^{(i)} + loss_{neg}^{(i)} $$

$$ loss_{pos}^{(i)} = -1 \times weight_{... | # Print and view column zero of the weight
print(f"w_p[0]: {w_p[0]}")
print(f"y_true[:,0]: {y_true[:,0]}")
print(f"y_pred[:,0]: {y_pred[:,0]}")
# calculate the loss from the positive predictions, for class 0
loss_0_pos = -1 * np.sum(w_p[0] *
y_true[:, 0] *
np.log(y_pred[:, 0])
... | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
View the zero column for the weights, true values, and predictions that you will use to calculate the loss from the negative predictions. | # Print and view column zero of the weight
print(f"w_n[0]: {w_n[0]}")
print(f"y_true[:,0]: {y_true[:,0]}")
print(f"y_pred[:,0]: {y_pred[:,0]}")
# Calculate the loss from the negative predictions, for class 0
loss_0_neg = -1 * np.sum(
w_n[0] *
(1 - y_true[:, 0]) *
np.lo... | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
Now you are familiar with the array slicing that you would use when there are multiple disease classes stored in a two-dimensional array. Now it's your turn!* Can you calculate the loss for class (column) `1`? | # calculate the loss from the positive predictions, for class 1
loss_1_pos = None | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
Expected output:
```
loss_1_pos: 0.2853
```
| # Calculate the loss from the negative predictions, for class 1
loss_1_neg = None | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
Expected output:
```
loss_1_neg: 0.9632
```
| # add the two loss terms to get the total loss for class 1
loss_1 = None | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
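For reference, a solution that mirrors the class-0 computation above, using column index 1 instead of 0 (these formulas reproduce the expected outputs quoted above):

```python
# loss from the positive predictions, for class 1
loss_1_pos = -1 * np.sum(w_p[1] * y_true[:, 1] * np.log(y_pred[:, 1]))

# loss from the negative predictions, for class 1
loss_1_neg = -1 * np.sum(w_n[1] * (1 - y_true[:, 1]) * np.log(1 - y_pred[:, 1]))

# total weighted loss for class 1
loss_1 = loss_1_pos + loss_1_neg
```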
___ ___ NumPy Exercises - Solutions. Now that we've learned about NumPy, let's test your knowledge. We'll start off with a few simple tasks and then you'll be asked some more complicated questions. Import NumPy as np | import numpy as np
Create an array of 10 zeros | np.zeros(10) | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Create an array of 10 ones | np.ones(10) | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Create an array of 10 fives | np.ones(10) * 5 | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Create an array of the integers from 10 to 50 | np.arange(10,51) | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Create an array of all the even integers from 10 to 50 | np.arange(10,51,2) | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Create a 3x3 matrix with values ranging from 0 to 8 | np.arange(9).reshape(3,3) | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Create a 3x3 identity matrix | np.eye(3) | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Use NumPy to generate a random number between 0 and 1 | np.random.rand(1) | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Use NumPy to generate an array of 25 random numbers sampled from a standard normal distribution | np.random.randn(25) | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Create the following matrix: | np.arange(1,101).reshape(10,10) / 100 | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Create an array of 20 linearly spaced points between 0 and 1: | np.linspace(0,1,20) | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Numpy Indexing and Selection. Now you will be given a few matrices, and be asked to replicate the resulting matrix outputs: | mat = np.arange(1,26).reshape(5,5)
mat
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[2:,1:]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWI... | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Now do the following: Get the sum of all the values in mat | np.sum(mat)
Get the standard deviation of the values in mat | np.std(mat) | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Get the sum of all the columns in mat | mat.sum(axis=0) | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Opioids VA - Nolan Reilly | import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
opiodsva = pd.read_csv('OpidsVA.csv') #importing data
opiodsva.head() | _____no_output_____ | CC0-1.0 | graphing/BasicGraphAssignment.ipynb | nolanpreilly/nolanpreilly.github.io |
Do opioid overdoses tend to be associated with less affluent areas, that is, areas where families have lower incomes? | plt.scatter(opiodsva['MedianHouseholdIncome'], opiodsva['FPOO-Rate'])
plt.xlabel('Median Household Income ($)')
plt.ylabel('Opiod Overdoses')
plt.suptitle("Median Household Income vs Opiod Overdoses")
plt.show() | _____no_output_____ | CC0-1.0 | graphing/BasicGraphAssignment.ipynb | nolanpreilly/nolanpreilly.github.io |
I used a scatterplot because it can easily show the relationship between 2 variables based on the grouping of the data points. It appears that as median household income rises, expected overdoses go down. Some people who start with opioid addictions are reported to transition to heroin use. What is the relationsh... | plt.scatter(opiodsva['FFHO-Rate'], opiodsva['FPOO-Rate'])
plt.xlabel('Fentanyl/Heroin Overdoses')
plt.ylabel('Opiod Overdoses')
plt.suptitle('VA Opiod Overdoes vs Fentanyl Overdoses')
plt.show() | _____no_output_____ | CC0-1.0 | graphing/BasicGraphAssignment.ipynb | nolanpreilly/nolanpreilly.github.io |
There is a relationship in which high fentanyl/heroin overdoses increase the number of opioid overdoses. The relationship is not as strong as I expected; I would like to see the reporting methods. Presidents. Which states are associated with the greatest number of United States presidents in terms of the presidents' bir... | presidents = pd.read_csv('presidents.csv')
presidents.head()
presidents['State'].value_counts().plot(kind = 'bar')
plt.xlabel('State')
plt.ylabel('Presidents born')
plt.suptitle('Presidential Birthplaces')
plt.show() | _____no_output_____ | CC0-1.0 | graphing/BasicGraphAssignment.ipynb | nolanpreilly/nolanpreilly.github.io |
A bar chart appropriately shows the values for each state. Virginia and Ohio are the two most common states for a US president to be born in. Total NSA. How have vehicle sales in the United States varied over time? | cars = pd.read_csv('TOTALNSA.csv')
cars.head()
plt.plot(cars['DATE'], cars['TOTALNSA'])
plt.xlabel('Date')
plt.ylabel('Car Sales')
plt.suptitle('Monthly US Car Sales Since Jan 1 1970')
plt.xticks(cars['DATE'])
plt.show() | _____no_output_____ | CC0-1.0 | graphing/BasicGraphAssignment.ipynb | nolanpreilly/nolanpreilly.github.io |
--- | # Read the CSV file from the Resources folder into a Pandas DataFrame
loans_df = pd.read_csv(Path('Resources/lending_data.csv'))
# Review the DataFrame
display(loans_df.head())
display(loans_df.tail())
# Separate the data into labels and features
# Separate the y variable, the labels
y = loans_df['loan_status']
# Sep... | _____no_output_____ | MIT | credit_risk_resampling.ipynb | talibkateeb/Logistic-Regression-Credit-Risk-Analysis |
--- | # Import the LogisticRegression module from SKLearn
from sklearn.linear_model import LogisticRegression
# Instantiate the Logistic Regression model
# Assign a random_state parameter of 1 to the model
logistic_regression_model = LogisticRegression(random_state=1)
# Fit the model using training data
logistic_regression... | pre rec spe f1 geo iba sup
0 1.00 0.99 0.91 1.00 0.95 0.91 18765
1 0.85 0.91 0.99 0.88 0.95 0.90 619
avg / total 0.99 0.99 0.91 0.99 0.95 0... | MIT | credit_risk_resampling.ipynb | talibkateeb/Logistic-Regression-Credit-Risk-Analysis |
**Question:** How well does the logistic regression model predict both the `0` (healthy loan) and `1` (high-risk loan) labels? **Answer:** The model appears to predict both of them really well. It predicts the healthy loan almost perfectly, and predicts the high-risk loan a little less accurately but still very hig... | # Import the RandomOverSampler module from imbalanced-learn
from imblearn.over_sampling import RandomOverSampler
# Instantiate the random oversampler model
# Assign a random_state parameter of 1 to the model
random_oversampler = RandomOverSampler(random_state=1)
# Fit the original training data to the random_oversamp... | pre rec spe f1 geo iba sup
0 1.00 0.99 0.99 1.00 0.99 0.99 18765
1 0.84 0.99 0.99 0.91 0.99 0.99 619
avg / total 0.99 0.99 0.99 0.99 0.99 0... | MIT | credit_risk_resampling.ipynb | talibkateeb/Logistic-Regression-Credit-Risk-Analysis |
Introduction to Python > presented by Loïc Messal. Introduction to control flow. Tests: tests make it possible to execute statements only under certain conditions. | age = 17
if age < 18:
    print("Mineur") # executed if and only if the condition is true
age = 19
if age < 18:
    print("Mineur") # executed if and only if the condition is true
else:
    print("Majeur") # executed if and only if the condition is false
employeur = "JLR"
# employeur = "Jakarto"
# employ... | Richesse d'un employé de JLR : riche
| MIT | 03_Tests_et_boucles.ipynb | Tofull/introduction_python |
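Several mutually exclusive conditions can also be chained with `elif` (a small illustrative addition to the if/else example above):

```python
age = 42
if age < 18:
    print("Mineur")
elif age < 65:
    print("Adulte")   # runs only when the first condition is false
else:
    print("Senior")
```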
Loops. Loops make it possible to iterate over iterables (objects composed of several elements). | un_iterable = []
un_iterable.append({"nom": "Messal", "prénom": "Loïc", "employeur": "Jakarto", "age": 23})
un_iterable.append({"nom": "Lassem", "prénom": "Ciol", "employeur": "Otrakaj", "age": 17})
un_iterable.append({"nom": "Alssem", "prénom": "Icol", "employeur": "Torakaj", "age": 20})
un_iterable
for item in un_ite... | Loïc Messal travaille chez Jakarto.
Ciol Lassem travaille chez Otrakaj.
Icol Alssem travaille chez Torakaj.
| MIT | 03_Tests_et_boucles.ipynb | Tofull/introduction_python |
Sequences can be generated with the `range()` function. | for compteur in range(5): # range(5) generates a sequence from 0 to 5 (exclusive)
    print(compteur)
for compteur in range(1, 5+1): # range(1, 5+1) generates a sequence from 1 to 5+1 (exclusive)
    print(compteur)
for index in range(len(un_iterable)):
    item = un_iterable[index] # access the item by its index
prin... | 0
1
2
3
4
| MIT | 03_Tests_et_boucles.ipynb | Tofull/introduction_python |
Loops can be controlled with certain keywords: `continue` moves on to the next iteration without executing the statements that follow; `break` exits the loop early. | for index, item in enumerate(un_iterable):
    if item["age"] < 18:
        continue # If the condition is true, move on to the next iteration.
print("Item {} : {} {} (majeur) travaille chez {}.".format(index, item["prénom"], item["nom"], item["employeur"]))
for index, item in enumerate(un_iterable):
print("... | Item 0 : Loïc Messal travaille chez Jakarto.
| MIT | 03_Tests_et_boucles.ipynb | Tofull/introduction_python |
Logistic Regression | import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split,KFold
from sklearn.utils import shuffle
from sklearn.metrics import confusion_matrix,accuracy_score,precision_score,\
recall_score,roc_curve,auc
import expectation_reflection as ER
from sklearn.linear_model import LogisticRegr... | _____no_output_____ | MIT | 2020.07.2400_classification/.ipynb_checkpoints/LR_knn9-checkpoint.ipynb | danhtaihoang/classification |
First of all, the processed data are imported. | #data_list = ['1paradox','2peptide','3stigma']
#data_list = np.loadtxt('data_list.txt',dtype='str')
data_list = np.loadtxt('data_list_30sets.txt',dtype='str')
#data_list = ['9coag']
print(data_list)
def read_data(data_id):
data_name = data_list[data_id]
print('data_name:',data_name)
Xy = np.loadtxt('..... | _____no_output_____ | MIT | 2020.07.2400_classification/.ipynb_checkpoints/LR_knn9-checkpoint.ipynb | danhtaihoang/classification |
Data import * One CSV file per campus * Dates: from 2019-02-18 (second week of classes) to 2019-06-28 (next-to-last week of classes) * Granularity: 1 h (power aggregated by the mean) * Weather data obtained from the yr platform * Columns: * active power of phase A (kW) * Temperature (°C) * Pressure (hPa) | raw = pd.read_csv('../../datasets/2019-1 Fpolis.csv', sep=',')
raw.describe()
(ax1, ax2,ax3) = raw.plot(subplots=True)
ax1.legend(loc='upper left')
ax2.legend(loc='upper left')
ax3.legend(loc='upper left')
raw['pa'].plot.kde().set_xlabel("Potência Ativa (KW)")
raw['temp_celsius'].plot.kde().set_xlabel("Temperatura (ºC... | _____no_output_____ | MIT | artificial_intelligence/01 - ConsumptionRegression/All campus/Fpolis.ipynb | LeonardoSanBenitez/LorisWeb |