markdown | code | output | license | path | repo_name
|---|---|---|---|---|---|
--- Padding sequencesTo deal with both short and very long reviews, we'll pad or truncate all our reviews to a specific length. For reviews shorter than some `seq_length`, we'll pad with 0s. For reviews longer than `seq_length`, we can truncate them to the first `seq_length` words. A good `seq_length`, in this case, is... | def pad_features(reviews_ints, seq_length):
''' Return features of review_ints, where each review is padded with 0's
or truncated to the input seq_length.
'''
## implement function
features=None
return features
# Test your implementation!
seq_length = 200
features = pad_features... | _____no_output_____ | MIT | sentiment-rnn/Sentiment_RNN_Exercise.ipynb | MiniMarvin/pytorch-v2 |
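The notebook leaves `pad_features` as an exercise; below is a minimal sketch of one possible solution (assuming `reviews_ints` is a list of lists of word-token integers), left-padding with zeros as the instructions describe:

```
import numpy as np

def pad_features(reviews_ints, seq_length):
    ''' Return features of reviews_ints, where each review is padded with 0's
        or truncated to the input seq_length.
    '''
    features = np.zeros((len(reviews_ints), seq_length), dtype=int)
    for i, row in enumerate(reviews_ints):
        if len(row) == 0:
            continue  # leave all-zero rows for empty reviews
        # left-pad with 0s; keep only the first seq_length tokens
        features[i, -len(row):] = np.array(row)[:seq_length]
    return features
```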
Training, Validation, TestWith our data in nice shape, we'll split it into training, validation, and test sets.> **Exercise:** Create the training, validation, and test sets. * You'll need to create sets for the features and the labels, `train_x` and `train_y`, for example. * Define a split fraction, `split_frac` as t... | split_frac = 0.8
## split data into training, validation, and test data (features and labels, x and y)
## print out the shapes of your resultant feature data
| _____no_output_____ | MIT | sentiment-rnn/Sentiment_RNN_Exercise.ipynb | MiniMarvin/pytorch-v2 |
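One straightforward way to implement the split (a sketch; `features` and `encoded_labels` are assumed to be the padded feature and label arrays from earlier):

```
split_frac = 0.8
split_idx = int(len(features) * split_frac)

train_x, remaining_x = features[:split_idx], features[split_idx:]
train_y, remaining_y = encoded_labels[:split_idx], encoded_labels[split_idx:]

# split the remaining 20% in half: 10% validation, 10% test
test_idx = int(len(remaining_x) * 0.5)
val_x, test_x = remaining_x[:test_idx], remaining_x[test_idx:]
val_y, test_y = remaining_y[:test_idx], remaining_y[test_idx:]

print("Train set: \t{}".format(train_x.shape))
print("Validation set: {}".format(val_x.shape))
print("Test set: \t{}".format(test_x.shape))
```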
**Check your work**With train, validation, and test fractions equal to 0.8, 0.1, 0.1, respectively, the final feature data shapes should look like:

```
Feature Shapes:
Train set:       (20000, 200)
Validation set:  (2500, 200)
Test set:        (2500, 200)
```

--- DataLoaders and BatchingAfter creating traini... | import torch
from torch.utils.data import TensorDataset, DataLoader
# create Tensor datasets
train_data = TensorDataset(torch.from_numpy(train_x), torch.from_numpy(train_y))
valid_data = TensorDataset(torch.from_numpy(val_x), torch.from_numpy(val_y))
test_data = TensorDataset(torch.from_numpy(test_x), torch.from_numpy... | _____no_output_____ | MIT | sentiment-rnn/Sentiment_RNN_Exercise.ipynb | MiniMarvin/pytorch-v2 |
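The truncated cell above ends by wrapping the splits in `TensorDataset`s; the matching `DataLoader`s would then look roughly like this (the batch size is an assumption):

```
batch_size = 50

# shuffling matters here because the reviews were ordered pos-then-neg
train_loader = DataLoader(train_data, shuffle=True, batch_size=batch_size)
valid_loader = DataLoader(valid_data, shuffle=True, batch_size=batch_size)
test_loader = DataLoader(test_data, shuffle=True, batch_size=batch_size)
```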
--- Sentiment Network with PyTorchBelow is where you'll define the network.The layers are as follows:1. An [embedding layer](https://pytorch.org/docs/stable/nn.html#embedding) that converts our word tokens (integers) into embeddings of a specific size.2. An [LSTM layer](https://pytorch.org/docs/stable/nn.html#lstm) defin... | # First checking if GPU is available
train_on_gpu=torch.cuda.is_available()
if(train_on_gpu):
print('Training on GPU.')
else:
print('No GPU available, training on CPU.')
import torch.nn as nn
class SentimentRNN(nn.Module):
"""
The RNN model that will be used to perform Sentiment analysis.
"""
... | _____no_output_____ | MIT | sentiment-rnn/Sentiment_RNN_Exercise.ipynb | MiniMarvin/pytorch-v2 |
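The class body is cut off above; a sketch of the layer stack the markdown describes (embedding → LSTM → dropout → linear → sigmoid) might look like the following. The exact sizes and dropout values are illustrative assumptions, not the author's:

```
import torch.nn as nn

class SentimentRNN(nn.Module):
    """Sketch of the layer stack described above; sizes are illustrative."""
    def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, drop_prob=0.5):
        super(SentimentRNN, self).__init__()
        self.output_size = output_size
        self.n_layers = n_layers
        self.hidden_dim = hidden_dim
        # embedding turns word-token integers into dense vectors
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
                            dropout=drop_prob, batch_first=True)
        self.dropout = nn.Dropout(0.3)
        self.fc = nn.Linear(hidden_dim, output_size)
        self.sig = nn.Sigmoid()

    def forward(self, x, hidden):
        batch_size = x.size(0)
        embeds = self.embedding(x)
        lstm_out, hidden = self.lstm(embeds, hidden)
        lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
        out = self.sig(self.fc(self.dropout(lstm_out)))
        # keep only the sigmoid output from the last time step
        out = out.view(batch_size, -1)[:, -1]
        return out, hidden

    def init_hidden(self, batch_size):
        # two zero tensors: one for the LSTM hidden state, one for the cell state
        weight = next(self.parameters()).data
        return (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
                weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
```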
Instantiate the networkHere, we'll instantiate the network. First up, defining the hyperparameters.* `vocab_size`: Size of our vocabulary or the range of values for our input, word tokens.* `output_size`: Size of our desired output; the number of class scores we want to output (pos/neg).* `embedding_dim`: Number of co... | # Instantiate the model w/ hyperparams
vocab_size =
output_size =
embedding_dim =
hidden_dim =
n_layers =
net = SentimentRNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers)
print(net) | _____no_output_____ | MIT | sentiment-rnn/Sentiment_RNN_Exercise.ipynb | MiniMarvin/pytorch-v2 |
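The hyperparameter values are left blank as an exercise; one plausible set is shown below. The numbers are illustrative, and `vocab_to_int` is an assumed name for the vocabulary mapping built earlier in the exercise:

```
# illustrative values, not the only reasonable choice
vocab_size = len(vocab_to_int) + 1  # +1 for the 0 padding token
output_size = 1                     # single sigmoid output (pos/neg)
embedding_dim = 400
hidden_dim = 256
n_layers = 2
```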
--- TrainingBelow is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. You can also add code to save a model by name.>We'll also be using a new kind of cross entropy loss, which is designed to work with a single Sigmoid output. [BCELoss](https://pyt... | # loss and optimization functions
lr=0.001
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
# training params
epochs = 4 # 3-4 is approx where I noticed the validation loss stop decreasing
counter = 0
print_every = 100
clip=5 # gradient clipping
# move model to GPU, if available
if(tr... | _____no_output_____ | MIT | sentiment-rnn/Sentiment_RNN_Exercise.ipynb | MiniMarvin/pytorch-v2 |
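The training cell is cut off, but given `clip=5` above, the core of each step presumably looks like the standard clipped-gradient update. This is a sketch; `net`, `h`, `inputs`, and `labels` are the usual variable names in this notebook series, assumed here:

```
# typical body of one training step with gradient clipping (a sketch)
h = tuple([each.data for each in h])  # detach hidden state from its history
net.zero_grad()
output, h = net(inputs, h)
loss = criterion(output.squeeze(), labels.float())
loss.backward()
nn.utils.clip_grad_norm_(net.parameters(), clip)  # guard against exploding gradients
optimizer.step()
```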
--- TestingThere are a few ways to test your network.* **Test data performance:** First, we'll see how our trained model performs on all of our defined test_data, above. We'll calculate the average loss and accuracy over the test data.* **Inference on user-generated data:** Second, we'll see if we can input just one ex... | # Get test data loss and accuracy
test_losses = [] # track loss
num_correct = 0
# init hidden state
h = net.init_hidden(batch_size)
net.eval()
# iterate over test data
for inputs, labels in test_loader:
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training hist... | _____no_output_____ | MIT | sentiment-rnn/Sentiment_RNN_Exercise.ipynb | MiniMarvin/pytorch-v2 |
Inference on a test reviewYou can change this test_review to any text that you want. Read it and think: is it pos or neg? Then see if your model predicts correctly! > **Exercise:** Write a `predict` function that takes in a trained net, a plain text_review, and a sequence length, and prints out a custom statement f... | # negative test review
test_review_neg = 'The worst movie I have seen; acting was terrible and I want my money back. This movie had bad acting and the dialogue was slow.'
def predict(net, test_review, sequence_length=200):
''' Prints out whether a given review is predicted to be
positive or negative in sen... | _____no_output_____ | MIT | sentiment-rnn/Sentiment_RNN_Exercise.ipynb | MiniMarvin/pytorch-v2 |
Scene Classification-Test_a 1. Preprocess-KerasFolderClasses- Import pkg- Extract zip file- Preview "scene_classes.csv"- Preview "scene_{0}_annotations_20170922.json"- Test the image and pickle function- Split data into several pickle files This part needs Jupyter Notebook to be started with "jupyter notebook --NotebookApp.iopub... | import numpy as np
import pandas as pd
# import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import seaborn as sns
%matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from keras.utils.np_utils import to_categorical # ... | _____no_output_____ | MIT | SceneClassification2017/1. Preprocess-KerasFolderClasses-Test_a.ipynb | StudyExchange/AIChallenger |
Extract zip file | input_path = 'input'
datasetName = 'test_a'
date = '20170922'
datasetFolder = input_path + '\\data_{0}'.format(datasetName)
zip_path = input_path + '\\ai_challenger_scene_{0}_{1}.zip'.format(datasetName, date)
extract_path = input_path + '\\ai_challenger_scene_{0}_{1}'.format(datasetName, date)
image_path = extract_pa... | _____no_output_____ | MIT | SceneClassification2017/1. Preprocess-KerasFolderClasses-Test_a.ipynb | StudyExchange/AIChallenger |
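The extraction code itself is truncated; one possible way the step proceeds, using only the standard library (an assumption about the original cell):

```
import os
import zipfile

# extract the archive only if it hasn't been extracted already
if not os.path.isdir(extract_path):
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(input_path)
print('Extracted to:', extract_path)
```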
Preview "scene_classes.csv" | scene_classes = pd.read_csv(scene_classes_path, header=None)
display(scene_classes.head())
def get_scene_name(label_number, scene_classes_path):
    scene_classes = pd.read_csv(scene_classes_path, header=None)
    return scene_classes.loc[label_number, 2]
print(get_scene_name(0, scene_classes_path)) | airport_terminal
| MIT | SceneClassification2017/1. Preprocess-KerasFolderClasses-Test_a.ipynb | StudyExchange/AIChallenger |
Copy images to ./input/data_test_a/test | from shutil import copy2
cwd = os.getcwd()
test_folder = os.path.join(cwd, datasetFolder)
test_sub_folder = os.path.join(test_folder, 'test')
if not os.path.isdir(test_folder):
os.mkdir(test_folder)
os.mkdir(test_sub_folder)
print(test_folder)
print(test_sub_folder)
trainDir = test_sub_folder
for image_id in o... | Done!
| MIT | SceneClassification2017/1. Preprocess-KerasFolderClasses-Test_a.ipynb | StudyExchange/AIChallenger |
01 A logged editable table> Traceable editable table in flask | from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'Hello, World!' | _____no_output_____ | MIT | nbs/30_traceable_edit_in_flask.ipynb | raynardj/forgebox |
Run a simple application | # default_exp editable
# export
import pandas as pd
from datetime import datetime
import json
from sqlalchemy import create_engine as ce
from sqlalchemy import text
from jinja2 import Template
# export
from pathlib import Path
def get_static():
import forgebox
return Path(forgebox.__path__[0])/"static"
# export... | _____no_output_____ | MIT | nbs/30_traceable_edit_in_flask.ipynb | raynardj/forgebox |
Create sample data | con = ce("sqlite:///sample.db")
sample_df = pd.DataFrame(dict(name=["Darrow","Virginia","Sevro",]*20,
house =["Andromedus","Augustus","Barca"]*20,
age=[20,18,17]*20))
sample_df.to_sql("sample_table",index_label="id",
index=True,
... | _____no_output_____ | MIT | nbs/30_traceable_edit_in_flask.ipynb | raynardj/forgebox |
Testing editable frontend | app = Flask(__name__)
# Create Editable pages around sample_table
Editable("table1", # route/task name
app, # flask app to wrap around
table_name="sample_table", # target table name
id_col="id", # unique column
con = con,
log_con=con
)
app.run(host="0.0.0.0",port =... | _____no_output_____ | MIT | nbs/30_traceable_edit_in_flask.ipynb | raynardj/forgebox |
Retrieve the log | from forgebox.df import PandasDisplay
with PandasDisplay(max_colwidth = 0,max_rows=100):
display(pd.read_sql('editable_log',con = con)) | _____no_output_____ | MIT | nbs/30_traceable_edit_in_flask.ipynb | raynardj/forgebox |
Creating a Sentiment Analysis Web App Using PyTorch and SageMaker_Deep Learning Nanodegree Program | Deployment_---Now that we have a basic understanding of how SageMaker works we will try to use it to construct a complete project from end to end. Our goal will be to have a simple web page which a user can use to ente... | %mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data | mkdir: cannot create directory ‘../data’: File exists
--2019-12-22 07:15:17-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response...... | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
Step 2: Preparing and Processing the dataAlso, as in the XGBoost notebook, we will be doing some initial data processing. The first few steps are the same as in the XGBoost example. To begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a... | import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[d... | IMDB reviews: train = 12500 pos / 12500 neg, test = 12500 pos / 12500 neg
| MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records. | from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
... | IMDb reviews (combined): train = 25000, test = 25000
| MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loade... | print(train_X[100])
print(train_y[100]) | I was not expecting much going in to this, but still came away disappointed. This was my least favorite Halestorm production I have seen. I thought it was supposed to be a comedy, but I only snickered at 3 or 4 jokes. Is it really a funny gag to see a fat guy eating donuts and falling down over and over? What was up wi... | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
The first step in processing the reviews is to make sure that any html tags that appear should be removed. In addition we wish to tokenize our input, that way words such as *entertained* and *entertaining* are considered the same with regard to sentiment analysis. | import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import *
import re
from bs4 import BeautifulSoup
def review_to_words(review):
nltk.download("stopwords", quiet=True)
stemmer = PorterStemmer()
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.su... | _____no_output_____ | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
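The function is cut off mid-body; the remaining steps implied by the surrounding text (lowercase, strip punctuation, drop stopwords, stem) would plausibly be:

```
    # plausible remainder of review_to_words (a sketch)
    text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())  # strip punctuation, lowercase
    words = text.split()
    words = [w for w in words if w not in stopwords.words("english")]
    words = [stemmer.stem(w) for w in words]  # 'entertaining' -> 'entertain'
    return words
```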
The `review_to_words` method defined above uses `BeautifulSoup` to remove any html tags that appear and uses the `nltk` package to tokenize the reviews. As a check to ensure we know how everything is working, try applying `review_to_words` to one of the reviews in the training set. | # TODO: Apply review_to_words to a review (train_X[100] or any other review)
review_to_words(train_X[100]) | _____no_output_____ | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
**Question:** Above we mentioned that `review_to_words` method removes html formatting and allows us to tokenize the words found in a review, for example, converting *entertained* and *entertaining* into *entertain* so that they are treated as though they are the same word. What else, if anything, does this method do t... | import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl... | Read preprocessed data from cache file: preprocessed_data.pkl
| MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
Transform the dataIn the XGBoost notebook we transformed the data from its word representation to a bag-of-words feature representation. For the model we are going to construct in this notebook we will construct a feature representation which is very similar. To start, we will represent each word as an integer. Of cou... | import numpy as np
def build_dict(data, vocab_size = 5000):
"""Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer."""
# TODO: Determine how often each word appears in `data`. Note that `data` is a list of sentences and that a
# sentence is a... | _____no_output_____ | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
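`build_dict` is left as a TODO; here is a sketch consistent with the later convention that index 0 means "no word" and index 1 means "infrequent word":

```
from collections import Counter

def build_dict(data, vocab_size=5000):
    # count occurrences across all tokenized sentences
    word_count = Counter(word for sentence in data for word in sentence)
    # most frequent first; indices 0 and 1 are reserved (NOWORD / INFREQ)
    sorted_words = [w for w, _ in word_count.most_common(vocab_size - 2)]
    return {word: idx + 2 for idx, word in enumerate(sorted_words)}
```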
**Question:** What are the five most frequently appearing (tokenized) words in the training set? Does it make sense that these words appear frequently in the training set? **Answer:**The five most frequently appearing words in the training set are ['movi', 'film', 'one', 'like', 'time'].Yes it makes sense that these w... | # TODO: Use this space to determine the five most frequently appearing words in the training set.
list(word_dict.keys())[:5] | _____no_output_____ | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
Save `word_dict`Later on when we construct an endpoint which processes a submitted review we will need to make use of the `word_dict` which we have created. As such, we will save it to a file now for future use. | data_dir = '../data/pytorch' # The folder we will use for storing data
if not os.path.exists(data_dir): # Make sure that the folder exists
os.makedirs(data_dir)
with open(os.path.join(data_dir, 'word_dict.pkl'), "wb") as f:
pickle.dump(word_dict, f) | _____no_output_____ | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
Transform the reviewsNow that we have our word dictionary which allows us to transform the words appearing in the reviews into integers, it is time to make use of it and convert our reviews to their integer sequence representation, making sure to pad or truncate to a fixed length, which in our case is `500`. | def convert_and_pad(word_dict, sentence, pad=500):
NOWORD = 0 # We will use 0 to represent the 'no word' category
INFREQ = 1 # and we use 1 to represent the infrequent words, i.e., words not appearing in word_dict
working_sentence = [NOWORD] * pad
for word_index, word in enumerate(sentence[:pa... | _____no_output_____ | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
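The loop body is truncated; given the `NOWORD`/`INFREQ` constants above, it plausibly finishes as:

```
        # plausible body of the loop and return (a sketch)
        working_sentence[word_index] = word_dict.get(word, INFREQ)
    return working_sentence, min(len(sentence), pad)
```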
As a quick check to make sure that things are working as intended, check to see what one of the reviews in the training set looks like after having been processed. Does this look reasonable? What is the length of a review in the training set? | # Use this cell to examine one of the processed reviews to make sure everything is working as intended.
print(train_X[100])
print('Length of train_X[100]: {}'.format(len(train_X[100]))) | [ 443 64 630 1167 254 153 1816 2 174 2 56 47 4 24
84 4 1968 219 24 62 2 14 4 2042 2 1685 60 43
787 150 13 3518 1902 315 1486 224 6 47 858 2 244 1
736 315 13 133 685 2 3643 1 537 68 3707 8 329 267
747 201 121 12 3086 1079 779 ... | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
**Question:** In the cells above we use the `preprocess_data` and `convert_and_pad_data` methods to process both the training and testing set. Might this be a problem? Why or why not? **Answer:**I don't see any problem with the `preprocess_data` method as it cleanses and converts the data to a list of words. The `convert... | import pandas as pd
pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \
.to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) | _____no_output_____ | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
Uploading the training dataNext, we need to upload the training data to the SageMaker default S3 bucket so that we can provide access to it while training our model. | import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/sentiment_rnn'
role = sagemaker.get_execution_role()
input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix) | _____no_output_____ | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
**NOTE:** The cell above uploads the entire contents of our data directory. This includes the `word_dict.pkl` file. This is fortunate as we will need this later on when we create an endpoint that accepts an arbitrary review. For now, we will just take note of the fact that it resides in the data directory (and so also ... | !pygmentize train/model.py | import torch.nn as nn

class LSTMClassifier(nn.Module):
    """
    This is the simple RNN model we will be using to perform Sentiment Analysis.
    """
... | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
The important takeaway from the implementation provided is that there are three parameters that we may wish to tweak to improve the performance of our model. These are the embedding dimension, the hidden dimension and the size of the vocabulary. We will likely want to make these parameters configurable in the training ... | import torch
import torch.utils.data
# Read in only the first 250 rows
train_sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, names=None, nrows=250)
# Turn the input pandas dataframe into tensors
train_sample_y = torch.from_numpy(train_sample[[0]].values).float().squeeze()
train_sample_X = torch... | _____no_output_____ | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
(TODO) Writing the training methodNext we need to write the training code itself. This should be very similar to training methods that you have written before to train PyTorch models. We will leave any difficult aspects such as model saving / loading and parameter loading until a little later. | def train(model, train_loader, epochs, optimizer, loss_fn, device):
for epoch in range(1, epochs + 1):
model.train()
total_loss = 0
for batch in train_loader:
batch_X, batch_y = batch
batch_X = batch_X.to(device)
batch_y = batch_y.to(... | _____no_output_____ | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
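The cell is cut off after moving the batch to the device; a standard completion that matches the per-epoch `BCELoss` printout shown in the next cell would be:

```
            # plausible remainder of the training step (a sketch)
            output = model(batch_X)
            loss = loss_fn(output, batch_y)
            loss.backward()
            optimizer.step()
            total_loss += loss.data.item()
        print("Epoch: {}, BCELoss: {}".format(epoch, total_loss / len(train_loader)))
```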
Supposing we have the training method above, we will test that it is working by writing a bit of code in the notebook that executes our training method on the small sample training set that we loaded earlier. The reason for doing this in the notebook is so that we have an opportunity to fix any errors that arise early ... | import torch.optim as optim
from train.model import LSTMClassifier
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = LSTMClassifier(32, 100, 5000).to(device)
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.BCELoss()
train(model, train_sample_dl, 5, optimizer, loss_fn, device) | Epoch: 1, BCELoss: 0.6940271496772766
Epoch: 2, BCELoss: 0.6846091747283936
Epoch: 3, BCELoss: 0.6763787031173706
Epoch: 4, BCELoss: 0.6675016164779664
Epoch: 5, BCELoss: 0.6570009350776672
| MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
In order to construct a PyTorch model using SageMaker we must provide SageMaker with a training script. We may optionally include a directory which will be copied to the container and from which our training code will be run. When the training container is executed it will check the uploaded directory (if there is one)... | from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point="train.py",
source_dir="train",
role=role,
framework_version='0.4.0',
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
... | 2019-12-22 07:16:12 Starting - Starting the training job...
2019-12-22 07:16:14 Starting - Launching requested ML instances.........
2019-12-22 07:17:47 Starting - Preparing the instances for training......
2019-12-22 07:18:51 Downloading - Downloading input data...
2019-12-22 07:19:36 Training - Downloading the traini... | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
Step 5: Testing the modelAs mentioned at the top of this notebook, we will be testing this model by first deploying it and then sending the testing data to the deployed endpoint. We will do this so that we can make sure that the deployed model is working correctly. Step 6: Deploy the model for testingNow that we have ... | # TODO: Deploy the trained model
predictor = estimator.deploy(initial_instance_count = 1, instance_type = 'ml.p2.xlarge') | ---------------------------------------------------------------------------------------------------------------! | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
Step 7 - Use the model for testingOnce deployed, we can read in the test data and send it off to our deployed model to get some results. Once we collect all of the results we can determine how accurate our model is. | test_X = pd.concat([pd.DataFrame(test_X_len), pd.DataFrame(test_X)], axis=1)
# We split the data into chunks and send each chunk seperately, accumulating the results.
def predict(data, rows=512):
split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
predictions = np.array([])
for array i... | _____no_output_____ | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
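The chunking loop is truncated; it presumably accumulates the endpoint's predictions chunk by chunk, roughly as follows (the accuracy lines are an illustrative addition, not the author's exact code):

```
    # plausible remainder of predict() and its use (a sketch)
    for array in split_array:
        predictions = np.append(predictions, predictor.predict(array))
    return predictions

predictions = predict(test_X.values)

from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions.round())  # round in case raw sigmoid outputs come back
```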
**Question:** How does this model compare to the XGBoost model you created earlier? Why might these two models perform differently on this dataset? Which do *you* think is better for sentiment analysis? **Answer:**- Both the models performed similarly on the IMDB dataset. The XGBoost (eXtreme Gradient Boosting) model w... | test_review = 'The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm.' | _____no_output_____ | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
The question we now need to answer is, how do we send this review to our model?Recall in the first section of this notebook we did a bunch of data processing to the IMDb dataset. In particular, we did two specific things to the provided reviews. - Removed any html tags and stemmed the input - Encoded the review as a se... | # TODO: Convert test_review into a form usable by the model and save the results in test_data
converted, length = convert_and_pad(word_dict, review_to_words(test_review))
test_data = [[length, *converted]] | _____no_output_____ | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
Now that we have processed the review, we can send the resulting array to our model to predict the sentiment of the review. | predictor.predict(test_data) | _____no_output_____ | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
Since the return value of our model is close to `1`, we can be certain that the review we submitted is positive. Delete the endpointOf course, just like in the XGBoost notebook, once we've deployed an endpoint it continues to run until we tell it to shut down. Since we are done using our endpoint for now, we can delet... | estimator.delete_endpoint() | _____no_output_____ | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
Step 6 (again) - Deploy the model for the web appNow that we know that our model is working, it's time to create some custom inference code so that we can send the model a review which has not been processed and have it determine the sentiment of the review.As we saw above, by default the estimator which we created, w... | !pygmentize serve/predict.py | import argparse
import json
import os
import pickle
import sys
import sagemaker_containers
... | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
As mentioned earlier, the `model_fn` method is the same as the one provided in the training code and the `input_fn` and `output_fn` methods are very simple and your task will be to complete the `predict_fn` method. Make sure that you save the completed file as `predict.py` in the `serve` directory.**TODO**: Complete th... | from sagemaker.predictor import RealTimePredictor
from sagemaker.pytorch import PyTorchModel
class StringPredictor(RealTimePredictor):
def __init__(self, endpoint_name, sagemaker_session):
super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')
model = PyTorchMod... | ---------------------------------------------------------------------------------------------------------------! | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
Testing the modelNow that we have deployed our model with the custom inference code, we should test to see if everything is working. Here we test our model by loading the first `250` positive and negative reviews and send them to the endpoint, then collect the results. The reason for only sending some of the data is t... | import glob
def test_reviews(data_dir='../data/aclImdb', stop=250):
results = []
ground = []
# We make sure to test both positive and negative reviews
for sentiment in ['pos', 'neg']:
path = os.path.join(data_dir, 'test', sentiment, '*.txt')
files = glob.glob(path... | _____no_output_____ | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
As an additional test, we can try sending the `test_review` that we looked at earlier. | predictor.predict(test_review) | _____no_output_____ | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
Now that we know our endpoint is working as expected, we can set up the web page that will interact with it. If you don't have time to finish the project now, make sure to skip down to the end of this notebook and shut down your endpoint. You can deploy it again when you come back. Step 7 (again): Use the model for th... | predictor.endpoint | _____no_output_____ | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
Once you have added the endpoint name to the Lambda function, click on **Save**. Your Lambda function is now up and running. Next we need to create a way for our web app to execute the Lambda function. Setting up API GatewayNow that our Lambda function is set up, it is time to create a new API using API Gateway that wi... | predictor.delete_endpoint() | _____no_output_____ | MIT | SageMaker Project.ipynb | praveenbandaru/Sentiment-Analysis-Web-App |
Copyright 2019 The TensorFlow Hub Authors.Licensed under the Apache License, Version 2.0 (the "License"); | # Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by app... | _____no_output_____ | Apache-2.0 | site/en-snapshot/hub/tutorials/tweening_conv3d.ipynb | phoenix-fork-tensorflow/docs-l10n |
Video Inbetweening using 3D Convolutions View on TensorFlow.org Run in Google Colab View on GitHub Download notebook See TF Hub model Yunpeng Li, Dominik Roblek, and Marco Tagliasacchi. From Here to There: Video Inbetweening Using Direct 3D Convolutions, 2019. https://arxiv.org/abs/1... | import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow_hub as hub
import tensorflow_datasets as tfds
from tensorflow_datasets.core import SplitGenerator
from tensorflow_datasets.video.bair_robot_pushing import BairRobotPushingSmall
import tempfile
import pa... | _____no_output_____ | Apache-2.0 | site/en-snapshot/hub/tutorials/tweening_conv3d.ipynb | phoenix-fork-tensorflow/docs-l10n |
BAIR: Demo based on numpy array inputs | # @title Load some example data (BAIR).
batch_size = 16
# If unable to download the dataset automatically due to "not enough disk space", please download manually to Google Drive and
# load using tf.data.TFRecordDataset.
ds = builder.as_dataset(split="test")
test_videos = ds.batch(batch_size)
first_batch = next(iter(t... | _____no_output_____ | Apache-2.0 | site/en-snapshot/hub/tutorials/tweening_conv3d.ipynb | phoenix-fork-tensorflow/docs-l10n |
Load Hub Module | hub_handle = 'https://tfhub.dev/google/tweening_conv3d_bair/1'
module = hub.load(hub_handle).signatures['default'] | _____no_output_____ | Apache-2.0 | site/en-snapshot/hub/tutorials/tweening_conv3d.ipynb | phoenix-fork-tensorflow/docs-l10n |
Generate and show the videos | filled_frames = module(input_frames)['default'] / 255.0
# Show sequences of generated video frames.
# Concatenate start/end frames and the generated filled frames for the new videos.
generated_videos = np.concatenate([input_frames[:, :1] / 255.0, filled_frames, input_frames[:, 1:] / 255.0], axis=1)
for video_id in ra... | _____no_output_____ | Apache-2.0 | site/en-snapshot/hub/tutorials/tweening_conv3d.ipynb | phoenix-fork-tensorflow/docs-l10n |
***This is a simple implementation of a basic backdoor attack on MNIST and CIFAR10*** ***Install all dependencies here*** | !pip3 install torch==1.10.2+cpu torchvision==0.11.3+cpu torchaudio==0.10.2+cpu -f https://download.pytorch.org/whl/cpu/torch_stable.html
!pip3 install ipywidgets
!jupyter nbextension enable --py widgetsnbextension | _____no_output_____ | MIT | Backdoor_attack.ipynb | Junjie-Chu/ML_Leak_and_Badnet |
***Import all dependencies here*** | # to monitor the progress
from tqdm import tqdm
import time
# basic dependency
import numpy as np
import random
# pytorch related
import torch
from torch.utils.data import Dataset
from torchvision import datasets
from torch import nn
from torch.utils.data import DataLoader
from torch import optim
import torch.nn.functi... | Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
| MIT | Backdoor_attack.ipynb | Junjie-Chu/ML_Leak_and_Badnet |
***Define a custom dataset class*** | # to use dataloader, a custom class is needed, since the 'MNIST' or 'CIFAR10' object does not support item assignment
# this class has some mandotory functions
# another way is to use TensorDataset(x,y), to make code more clear, I use custom dataset
class poisoned_dataset(Dataset):
def __init__(self, dataset, fake... | _____no_output_____ | MIT | Backdoor_attack.ipynb | Junjie-Chu/ML_Leak_and_Badnet |
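The `__init__` signature is cut off, but the heart of a BadNets-style dataset is the poisoning step: stamp a trigger pattern onto a fraction of the images and reassign their labels to `fake_label`. A minimal sketch of that step follows; the trigger size, value, and position are assumptions, not the author's choices:

```
import numpy as np

def poison_image(img, trigger_value=255, size=4):
    """Stamp a small square trigger into the bottom-right corner."""
    img = np.array(img).copy()
    # works for HxW (MNIST) and HxWxC (CIFAR10) arrays alike
    img[-size:, -size:] = trigger_value
    return img
```

Inside `__getitem__`, poisoned indices would then return `(poison_image(img), fake_label)` while clean indices return the original pair.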
***Define a Neural Network*** | class BadNet(nn.Module):
def __init__(self,inputchannels,outputclasses):
super().__init__()
self.conv1 = nn.Conv2d(inputchannels, 16, 5)
self.conv2 = nn.Conv2d(16, 32, 5)
self.pool = nn.AvgPool2d(2)
if inputchannels == 3:
inputfeatures = 800
else:
... | _____no_output_____ | MIT | Backdoor_attack.ipynb | Junjie-Chu/ML_Leak_and_Badnet |
***Functions for training and evaluating*** | def train(model, dataloader, criterion, opt):
running_loss = 0
# switch the model to train mode
# (no practical difference here, since the model has no dropout or batch norm)
model.train()
count = 0
for i, data in tqdm(enumerate(dataloader)):
opt.zero_grad()
imgs, labels = data
predict = model(imgs)
loss =... | _____no_output_____ | MIT | Backdoor_attack.ipynb | Junjie-Chu/ML_Leak_and_Badnet |
***Main part*** | # prepare original data
train_data_MNIST = datasets.MNIST(root="./data_1/", train=True,download=True)
test_data_MNIST = datasets.MNIST(root="./data_1/",train=False,download=True)
train_data_CIFAR10 = datasets.CIFAR10(root="./data_1/", train=True,download=True)
test_data_CIFAR10 = datasets.CIFAR10(root="./data_1/",trai... | start training:
| MIT | Backdoor_attack.ipynb | Junjie-Chu/ML_Leak_and_Badnet |
ORCID PeopleMost USGS staff who are publishing authors, data creators, or otherwise contributors to some published works now have ORCID identifiers as a matter of policy. Much more than just a convenient globally unique and persistent identifier, the ORCID system and its evolving schema provides a way for us to get at... | import isaid_helpers
import pandas as pd
pd.read_csv(isaid_helpers.f_graphable_orcid).head()
%%time
with isaid_helpers.graph_driver.session(database=isaid_helpers.graphdb) as session:
session.run("""
LOAD CSV WITH HEADERS FROM '%(source_path)s/%(source_file)s' AS row
WITH row
MATCH (p:Pe... | CPU times: user 7.56 ms, sys: 4.46 ms, total: 12 ms
Wall time: 5min 44s
| Unlicense | isaid/iSAID - Build Graph - ORCID.ipynb | skybristol/pylinkedcmd |
Table of Contents Linear Algebra Tools 1. Operator Matrices - Pauli: I, X, Y, Z - Hadamard: H - Phase: P - Sqrt(X): SX - Sqrt(Z): S - Sqrt(H): SH - 4th root of Z: T - X root: Xrt(s) - H root: Hrt(s) - Rotation Matrices: Rx($\theta$), Ry($\theta$), Rz($\theta$) - U3 Matrix: U3($\theta, \phi, ... | import numpy as np
import sympy as sp
from sympy.solvers.solveset import linsolve
import matplotlib
import matplotlib.pyplot as plt
matplotlib.use('Agg')
from sympy import Matrix, init_printing
import qiskit
from qiskit import *
from qiskit.aqua.circuits import *
# Representing Data
from qiskit.providers.aer impor... | Duplicate key in file '/Users/minhpham/.matplotlib/matplotlibrc' line #2.
Duplicate key in file '/Users/minhpham/.matplotlib/matplotlibrc' line #3.
| Apache-2.0 | frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | mehilagarwal/qchack |
Linear Algebra Tools | # Matrices
I = np.array([[1, 0], [0, 1]])
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
H = 1/np.sqrt(2)*np.array([[1, 1], [1, -1]])
P = lambda theta: np.array([[1, 0], [0, np.exp(1j*theta)]])
# sqrt(X)
SX = 1/2 * np.array([[1+1j, 1-1j], [1-1j, 1+1j]])
# sqrt(Z)
S = ... | _____no_output_____ | Apache-2.0 | frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | mehilagarwal/qchack |
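The cell is truncated before reaching the rotation matrices promised in the table of contents; their textbook definitions, which the cut-off portion presumably contains, are:

```
# standard single-qubit rotation matrices
Rx = lambda theta: np.array([[np.cos(theta/2), -1j*np.sin(theta/2)],
                             [-1j*np.sin(theta/2), np.cos(theta/2)]])
Ry = lambda theta: np.array([[np.cos(theta/2), -np.sin(theta/2)],
                             [np.sin(theta/2),  np.cos(theta/2)]])
Rz = lambda theta: np.array([[np.exp(-1j*theta/2), 0],
                             [0, np.exp(1j*theta/2)]])
```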
Calculate Hermitian Conjugate | def dagger(mat):
# Calculate Hermitian conjugate
mat_dagger = np.conj(mat.T)
# Assert Hermitian identity
aae(np.dot(mat_dagger, mat), np.identity(mat.shape[0]))
return mat_dagger | _____no_output_____ | Apache-2.0 | frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | mehilagarwal/qchack |
CU Matrix | def cu_matrix(no_qubits, control, target, U, little_edian = True):
"""
Manually build the unitary matrix for non-adjacent CX gates
Parameters:
-----------
no_qubits: int
Number of qubits in the circuit
control: int
Index of the control qubit (1st qubit is index 0)
t... | _____no_output_____ | Apache-2.0 | frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | mehilagarwal/qchack |
Angles from Statevector | def angles_from_statevectors(output_statevector):
"""
Calculate correct x, y rotation angles from an arbitrary output statevector
Parameters:
----------
output_statevector: ndarray
Desired output state
Returns:
--------
phi: float
Angle to rotate about t... | _____no_output_____ | Apache-2.0 | frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | mehilagarwal/qchack |
View Matrix | def view(mat, rounding = 10):
display(Matrix(np.round(mat, rounding))) | _____no_output_____ | Apache-2.0 | frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | mehilagarwal/qchack |
Qiskit Tools Short-hand Qiskit Circuit | q = lambda *regs, name=None, global_phase=0: QuantumCircuit(*regs, name=None, global_phase=0) | _____no_output_____ | Apache-2.0 | frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | mehilagarwal/qchack |
Controlled Unitary | def control_unitary(circ, unitary, controls, target):
"""
Compose a multi-controlled gate applying a single-qubit unitary to the target
Parameters:
-----------
circ: QuantumCircuit
Qiskit circuit of appropriate size, no less qubit than the size of the controlled gate
unitary: ndarray of (2, 2)
... | _____no_output_____ | Apache-2.0 | frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | mehilagarwal/qchack |
Controlled Phase | def control_phase(circ, angle, control_bit, target_bit, recip = True, pi_on = True):
"""
Add a controlled-phase gate
Parameters:
-----------
circ: QuantumCircuit
Inputted circuit
angle: float
Phase Angle
control_bit: int
Index of control bit
... | _____no_output_____ | Apache-2.0 | frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | mehilagarwal/qchack |
Draw Circuit | def milk(circ):
return circ.draw('mpl') | _____no_output_____ | Apache-2.0 | frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | mehilagarwal/qchack |
Draw Transpiled Circuit | def dtp(circ, print_details = True, nice = True, return_values = False):
"""
Draw and/or return information about the transpiled circuit
Parameters:
-----------
circ: QuantumCircuit
QuantumCircuit to br transpiled
print_details: bool (True)
Print the number of u3 and cx... | _____no_output_____ | Apache-2.0 | frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | mehilagarwal/qchack |
Get Unitary/StateVector Function | def get(circ, types = 'unitary', nice = True):
"""
This function return the statevector or the unitary of the inputted circuit
Parameters:
-----------
circ: QuantumCircuit
Inputted circuit without measurement gate
types: str ('unitary')
Get 'unitary' or 'statevector' op... | _____no_output_____ | Apache-2.0 | frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | mehilagarwal/qchack |
Displaying Histogram / Bloch / Counts | def sim(circ, visual = 'hist'):
"""
Displaying output of quantum circuit
Parameters:
-----------
circ: QuantumCircuit
QuantumCircuit with or without measurement gates
visual: str ('hist')
'hist' (counts on histogram) or 'bloch' (statevectors on Bloch sphere) or None (ge... | _____no_output_____ | Apache-2.0 | frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | mehilagarwal/qchack |
Unitary Checker | def unitary_check(test_unitary, perfect = False):
"""
Check if the CnX unitary is correct
Parameters:
-----------
test_unitary: ndarray
Unitary generated by the circuit
perfect: ndarray
Account for phase difference
"""
# Get length of unitary
... | _____no_output_____ | Apache-2.0 | frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | mehilagarwal/qchack |
Task: Implementing Improved Multiple Controlled Toffoli Abstract Multiple controlled Toffoli gates are crucial in the implementation of modular exponentiation [4], like that used in Shor's algorithm. In today's practical realm of devices with small numbers of qubits, there is a real need for efficient realization of multip... | circ = q(3)
circ.ccx(0, 1, 2)
milk(circ) | _____no_output_____ | Apache-2.0 | frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | mehilagarwal/qchack |
And for higher $n$, $6$ for example, the circuit would take this form. | circ = q(7)
circ.mct(list(range(6)), 6)
milk(circ) | _____no_output_____ | Apache-2.0 | frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | mehilagarwal/qchack |
The costs for the Qiskit implementation of the $CnX$ gate from $n = 2$ to $n = 11$ are listed below in terms of the basic operations ($CX$ and $U3$). Note that the general cost is defined as $10CX + U3$.

n | CX | U3 | General Cost
--- | --- | --- | ---
2 | 6 | 8 | 68
3 | 20 | 22 | 222
4 | 44 | 46 | 486
5 | 92 | 94 | 1014
6 | 188 | 190 | 2070
7 | 380 | 382 | 4182
8 | 764 | 766 | 8406
9 | 1532 | 1534 | 16854
10 | 3068 | 3070 | 33750
11 | 6140 | 6142 | 67542

... | circ = q(8)
circ = control_unitary(circ, H, [0, 1, 2], 7)
circ = control_unitary(circ, Z, [3, 4, 5, 6], 7)
circ = control_unitary(circ, H, [0, 1, 2], 7)
milk(circ) | _____no_output_____ | Apache-2.0 | frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | mehilagarwal/qchack |
The two outermost gates are $C3H$, and the middle gate is $C4Z$. Together they create $C7X$ with a negative phase in 7 columns of the unitary. In general, the number of negative phases in the unitary has the form $2^a - 1$. Although $a$ can be varied, for each $n$, there exists a unique value of $a$ that is optimal for... | milk(CnX(2))
$n = 3$ | dtp(CnX(3)) | cx: 6
u3: 7
Total cost: 67
| Apache-2.0 | frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | mehilagarwal/qchack |
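Each construction can be checked against the ideal gate (up to phase) with the helpers defined earlier — assuming, as its docstring suggests, that `get(..., nice=False)` returns the raw unitary as an ndarray:

```
# verify the CnX construction up to a phase difference
test_unitary = get(CnX(3), types='unitary', nice=False)
unitary_check(test_unitary)
```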
We sketch the following for the general circuit of $CnX$. We also provide the Qiskit code implementation for the general $CnX$ below. At the end is the list of the best implementations for each $CnX$ gate. To use, simply assign ```best[n]``` to an object and use it like a normal Quantum... | def CnX(n, control_list = None, target = None, circ = None, theta = 1):
"""
Create a CnX modulo phase shift gate
Parameters:
-----------
n: int
Number of control bits
control_list: list
Index of control bits on inputted circuit (if any)
target: int
Index of ... | _____no_output_____ | Apache-2.0 | frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | mehilagarwal/qchack |
CnH / Multi-Hadamard Composition | def CnH(n, control_list = None, target = None, circ = None, theta = 1):
"""
Create a CnH modulo phase shift gate
Parameters:
-----------
n: int
Number of control bits
control_list: list
Index of control bits on inputted circuit (if any)
target: int
Index of ... | _____no_output_____ | Apache-2.0 | frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | mehilagarwal/qchack |
Postulate for Complexity of the General Cost We have two lists below showing the number of $U3$ and $CX$ gates used by the Qiskit technique and by our technique. | ## Qiskit
cx_q = np.array([6, 20, 44, 92, 188, 380, 764, 1532, 3068, 6140])
u3_q = np.array([8, 22, 46, 94, 190, 382, 766, 1534, 3070, 6142])
## Our
cx_o = np.array([3, 6, 20, 34, 50, 70, 102, 146, 222, 310])
u3_o = np.array([4, 7, 25, 53, 72, 101, 143, 196, 286, 395]) | _____no_output_____ | Apache-2.0 | frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | mehilagarwal/qchack |
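Applying the general-cost formula ($10CX + U3$) to both lists makes the gap explicit:

```
# general cost = 10*CX + U3, as defined earlier
general_q = 10 * cx_q + u3_q   # Qiskit implementation
general_o = 10 * cx_o + u3_o   # our implementation
print(general_q)
print(general_o)
```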
We find the common ratios by taking $a_{n+1}/a_n$, and taking the average of these ratios when $n > 3$ to mitigate the impact of the additive factor.
rat_1 = cx_q[1:] / cx_q[:-1]
rat_1 = np.mean(rat_1[3:])
rat_2 = u3_q[1:] / u3_q[:-1]
rat_2 = np.mean(rat_2[3:])
## Our
rat_3 = cx_o[1:] / cx_o[:-1]
rat_3 = np.mean(rat_3[3:])
rat_4 = u3_o[1:] / u3_o[:-1]
rat_4 = np.mean(rat_4[3:])
rat_1, rat_2, rat_3, rat_4 | _____no_output_____ | Apache-2.0 | frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | mehilagarwal/qchack |
We see that the geometric ratio of our technique is superior to Qiskit's. In base $2$, we can roughly see the following complexities:$$CX \approx O(1.446^n) \approx O(2^{\frac{n}{2}})$$$$U3 \approx O(1.380^n) \approx O(2^{\frac{n}{2}})$$ Compare and Contrast with the $O(n^2)$ technique in Corollary 7.6 of [3] Lemm... | circ = q(10)
circ = control_unitary(circ, H, [0, 1], 9)
circ.h(9)
circ.mct([2, 3, 4, 5, 6, 7, 8], 9)
circ.h(9)
circ = control_unitary(circ, H, [0, 1], 9)
milk(circ) | _____no_output_____ | Apache-2.0 | frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | mehilagarwal/qchack |
The $3$ middle gates will have the effect of $C8Z$, and the two gates outside are $C2Z$. This will lead to $C10X$ with a phase difference. Now we make one last modification to the implementation of Lemma 7.5. If we look back to the table from before, we can see that our implementation of $C7X$ has a cost lower than $950$. Bec... | print(1)
dtp(CnH(1), nice = False)
print('\n')
print(2)
dtp(CnH(2), nice = False)
print('\n')
print(3)
dtp(CnH(3), nice = False) | 1
cx: 1
u3: 2
Total cost: 12
2
cx: 8
u3: 16
Total cost: 96
3
cx: 18
u3: 31
Total cost: 211
| Apache-2.0 | frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | mehilagarwal/qchack |
Math Some extra math functions. | %nbdev_export
def smooth( newVal, oldVal, weight) :
"An exponential smoothing function. The weight is the smoothing factor applied to the old value."
return newVal * (1 - weight) + oldVal * weight;
smooth(2, 10, 0.9)
assert smooth(2, 10, 0.9)==9.2
#hide
from nbdev import *
notebook2script() | Converted 00_core.ipynb.
Converted 01_rmath.ipynb.
Converted 02_functions.ipynb.
Converted 03_nodes.ipynb.
Converted 04_hierarchy.ipynb.
Converted index.ipynb.
| Apache-2.0 | nbs/.ipynb_checkpoints/01_rmath-checkpoint.ipynb | perceptualrobots/pct |
Metacartel Ventures

First block: 9484668 (Feb-15-2020 01:32:52 AM +UTC) https://etherscan.io/block/9484668
Other blocks: 10884668 (Sep-18-2020 06:57:11 AM +UTC)
Recent block (for reference): 13316507 (Sep-28-2021 08:58:06 PM +UTC)
summoner: 0x8c8b237bea3c23317d08b73d7137e90cafdf68e6 | # Approximate seconds per block
# use total_seconds(); .seconds alone returns only the leftover seconds component of the timedelta
sec_per_block = (datetime(2021, 9, 28, 8, 58, 6) - datetime(2020, 2, 15, 1, 32, 52)).total_seconds() / (13316507 - 9484668)
print("approx. this many blocks per day:", 86400 / sec_per_block)
# 86400 seconds per day
# >>> from datetime import datetime
# >>> a = datetime(2011,11,24,0,0,0)
# >>>... | _____no_output_____ | MIT | MetacartelVentures/MetacartelVentures DAO Analysis.ipynb | Xqua/dao-research |
Project 1: Trading with Momentum InstructionsEach problem consists of a function to implement and instructions on how to implement the function. The parts of the function that need to be implemented are marked with a `# TODO` comment. After implementing the function, run the cell to test it against the unit tests we'v... | import sys
!{sys.executable} -m pip install -r requirements.txt | Requirement already satisfied: colour==0.1.5 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 1)) (0.1.5)
Collecting cvxpy==1.0.3 (from -r requirements.txt (line 2))
[?25l Downloading https://files.pythonhosted.org/packages/a1/59/2613468ffbbe3a818934d06b81b9f4877fe054afbf4f99d2f43f398a0b34/cv... | Apache-2.0 | P1_Trading_with_Momentum/project_notebook.ipynb | hemang-75/AI_for_Trading |
Load Packages | import pandas as pd
import numpy as np
import helper
import project_helper
import project_tests | _____no_output_____ | Apache-2.0 | P1_Trading_with_Momentum/project_notebook.ipynb | hemang-75/AI_for_Trading |
Market Data Load DataThe data we use for most of the projects is end of day data. This contains data for many stocks, but we'll be looking at stocks in the S&P 500. We also made things a little easier to run by narrowing down our range of time period instead of using all of the data. | df = pd.read_csv('../../data/project_1/eod-quotemedia.csv', parse_dates=['date'], index_col=False)
close = df.reset_index().pivot(index='date', columns='ticker', values='adj_close')
print('Loaded Data') | Loaded Data
| Apache-2.0 | P1_Trading_with_Momentum/project_notebook.ipynb | hemang-75/AI_for_Trading |
View DataRun the cell below to see what the data looks like for `close`. | project_helper.print_dataframe(close) | _____no_output_____ | Apache-2.0 | P1_Trading_with_Momentum/project_notebook.ipynb | hemang-75/AI_for_Trading |
Stock ExampleLet's see what a single stock looks like from the closing prices. For this example and future display examples in this project, we'll use Apple's stock (AAPL). If we tried to graph all the stocks, it would be too much information. | apple_ticker = 'AAPL'
project_helper.plot_stock(close[apple_ticker], '{} Stock'.format(apple_ticker)) | _____no_output_____ | Apache-2.0 | P1_Trading_with_Momentum/project_notebook.ipynb | hemang-75/AI_for_Trading |
Resample Adjusted PricesThe trading signal you'll develop in this project does not need to be based on daily prices, for instance, you can use month-end prices to perform trading once a month. To do this, you must first resample the daily adjusted closing prices into monthly buckets, and select the last observation of... | def resample_prices(close_prices, freq='M'):
"""
Resample close prices for each ticker at specified frequency.
Parameters
----------
close_prices : DataFrame
Close prices for each ticker and date
freq : str
What frequency to sample at
For valid freq choices, see http... | Tests Passed
| Apache-2.0 | P1_Trading_with_Momentum/project_notebook.ipynb | hemang-75/AI_for_Trading |
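The function body is truncated, but pandas can do all the work here; one minimal implementation that satisfies the description would be:

```
def resample_prices(close_prices, freq='M'):
    # keep the last observation within each period bucket
    return close_prices.resample(freq).last()
```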
View DataLet's apply this function to `close` and view the results. | monthly_close = resample_prices(close)
project_helper.plot_resampled_prices(
monthly_close.loc[:, apple_ticker],
close.loc[:, apple_ticker],
'{} Stock - Close Vs Monthly Close'.format(apple_ticker)) | _____no_output_____ | Apache-2.0 | P1_Trading_with_Momentum/project_notebook.ipynb | hemang-75/AI_for_Trading |
Compute Log ReturnsCompute log returns ($R_t$) from prices ($P_t$) as your primary momentum indicator:$$R_t = \log_e(P_t) - \log_e(P_{t-1})$$Implement the `compute_log_returns` function below, such that it accepts a dataframe (like one returned by `resample_prices`), and produces a similar dataframe of log returns. Use ... | def compute_log_returns(prices):
"""
Compute log returns for each ticker.
Parameters
----------
prices : DataFrame
Prices for each ticker and date
Returns
-------
log_returns : DataFrame
Log returns for each ticker and date
"""
# TODO: Implement Function... | Tests Passed
| Apache-2.0 | P1_Trading_with_Momentum/project_notebook.ipynb | hemang-75/AI_for_Trading |
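A direct translation of $R_t = \log_e(P_t) - \log_e(P_{t-1})$ with NumPy (a sketch of the elided TODO):

```
def compute_log_returns(prices):
    # R_t = ln(P_t) - ln(P_{t-1}); the first row becomes NaN
    return np.log(prices) - np.log(prices.shift(1))
```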
View DataUsing the same data returned from `resample_prices`, we'll generate the log returns. | monthly_close_returns = compute_log_returns(monthly_close)
project_helper.plot_returns(
monthly_close_returns.loc[:, apple_ticker],
'Log Returns of {} Stock (Monthly)'.format(apple_ticker)) | _____no_output_____ | Apache-2.0 | P1_Trading_with_Momentum/project_notebook.ipynb | hemang-75/AI_for_Trading |
Shift ReturnsImplement the `shift_returns` function to shift the log returns to the previous or future returns in the time series. For example, the parameter `shift_n` is 2 and `returns` is the following:

```
            Returns
               A      B      C      D
2013-07-08  0.015  0.082 ...
```
| def shift_returns(returns, shift_n):
"""
Generate shifted returns
Parameters
----------
returns : DataFrame
Returns for each ticker and date
shift_n : int
Number of periods to move, can be positive or negative
Returns
-------
shifted_returns : DataFrame
... | Tests Passed
| Apache-2.0 | P1_Trading_with_Momentum/project_notebook.ipynb | hemang-75/AI_for_Trading |
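This one reduces to a single pandas call (a sketch of the elided TODO):

```
def shift_returns(returns, shift_n):
    # positive shift_n looks back in time, negative looks ahead
    return returns.shift(shift_n)
```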
View DataLet's get the previous month's and next month's returns. | monthly_close_returns
prev_returns = shift_returns(monthly_close_returns, 1)
lookahead_returns = shift_returns(monthly_close_returns, -1)
project_helper.plot_shifted_returns(
prev_returns.loc[:, apple_ticker],
monthly_close_returns.loc[:, apple_ticker],
'Previous Returns of {} Stock'.format(apple_ticker))
... | _____no_output_____ | Apache-2.0 | P1_Trading_with_Momentum/project_notebook.ipynb | hemang-75/AI_for_Trading |
Generate Trading SignalA trading signal is a sequence of trading actions, or results that can be used to take trading actions. A common form is to produce a "long" and "short" portfolio of stocks on each date (e.g. end of each month, or whatever frequency you desire to trade at). This signal can be interpreted as reba... | def get_top_n(prev_returns, top_n):
"""
Select the top performing stocks
Parameters
----------
prev_returns : DataFrame
Previous shifted returns for each ticker and date
top_n : int
The number of top performing stocks to get
Returns
-------
top_stocks : Data... | Tests Passed
| Apache-2.0 | P1_Trading_with_Momentum/project_notebook.ipynb | hemang-75/AI_for_Trading |
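One way to fill in the TODO with `Series.nlargest` (a sketch; `top_stocks` uses 1/0 flags, as the surrounding text describes):

```
def get_top_n(prev_returns, top_n):
    top_stocks = pd.DataFrame(0, index=prev_returns.index,
                              columns=prev_returns.columns)
    for date, row in prev_returns.iterrows():
        # mark the top_n best performers for this date with a 1
        top_stocks.loc[date, row.nlargest(top_n).index] = 1
    return top_stocks.astype('int64')
```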
View DataWe want to get the best performing and worst performing stocks. To get the best performing stocks, we'll use the `get_top_n` function. To get the worst performing stocks, we'll also use the `get_top_n` function. However, we pass in `-1*prev_returns` instead of just `prev_returns`. Multiplying by negative one ... | top_bottom_n = 50
df_long = get_top_n(prev_returns, top_bottom_n)
df_short = get_top_n(-1*prev_returns, top_bottom_n)
project_helper.print_top(df_long, 'Longed Stocks')
project_helper.print_top(df_short, 'Shorted Stocks') | 10 Most Longed Stocks:
INCY, AMD, AVGO, NFX, SWKS, NFLX, ILMN, UAL, NVDA, MU
10 Most Shorted Stocks:
RRC, FCX, CHK, MRO, GPS, WYNN, DVN, FTI, SPLS, TRIP
| Apache-2.0 | P1_Trading_with_Momentum/project_notebook.ipynb | hemang-75/AI_for_Trading |