| column    | type   | min length | max length |
|-----------|--------|------------|------------|
| markdown  | string | 0          | 1.02M      |
| code      | string | 0          | 832k       |
| output    | string | 0          | 1.02M      |
| license   | string | 3          | 36         |
| path      | string | 6          | 265        |
| repo_name | string | 6          | 127        |
There's no "title" column in the comments dataframe, so how is the comment tied to the original post?
# View the first entry in the dataframe and see if you can find that answer
# permalink?
blueorigin_comments.iloc[0]
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
In EDA below, we find: "We have empty values in 'body' in many rows. It's likely that all of those are postings, not comments, and we should actually map the postings to the body for those before merging the dataframes."
def strip_and_rep(word):
    if len(str(word).strip().replace(" ", "")) < 1:
        return 'replace_me'
    else:
        return word

blueorigin['selftext'] = blueorigin['selftext'].map(strip_and_rep)
spacex['selftext'] = spacex['selftext'].map(strip_and_rep)
spacex.selftext.isna().sum()
blueorigin.selftext.isna().su...
A quick look at a dataframe: {dataframe}.head(2)
>>>   subreddit                                               body \
0    BlueOrigin  I don't know why they would want to waste prop...
1    BlueOrigin          Haha what if we stole one of his houses?
     permalink ...
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
We have empty values in 'body' in many rows. It's likely that all of those are postings, not comments, and we should actually map the postings to the body for those before merging the dataframes. However, when trying that above, we ended up with more null values. Mapping 'replace_me' into empty fields kept the numb...
space_wars_2.dropna(inplace=True)
space_wars_2.isna().sum()
space_wars.to_csv('./data/betaset.csv', index=False)  # note: this writes space_wars, not the cleaned space_wars_2
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Before we split up the training and testing sets, establish our X and y. If you need to reset the dataframe, run the next cell FIRST. Keyword = RESET
space_wars_2 = pd.read_csv('./data/betaset.csv')
space_wars_2.columns
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
I believe the 'permalink' will be almost as indicative as the 'subreddit' we are trying to predict, so X will include only the words...
space_wars_2.head()
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Convert the target column to binary before moving forward. We want to predict whether a post is from SpaceX (1) or is not (0).
space_wars_2['subreddit'].value_counts()
space_wars_2['subreddit'] = space_wars_2['subreddit'].map({'spacex': 1, 'BlueOrigin': 0})
space_wars_2['subreddit'].value_counts()
X = space_wars_2.body
y = space_wars_2.subreddit
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Calculate our baseline split
space_wars_2.subreddit.value_counts(normalize=True)
base_set = space_wars_2.subreddit.value_counts(normalize=True)
baseline = 0.0
if base_set[0] > base_set[1]:
    baseline = base_set[0]
else:
    baseline = base_set[1]
baseline
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Before we sift out stopwords, etc, let's just run a logistic regression on the words, as well as a decision tree:
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV, train_test_split, cross_val_score
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Before we can fit the models, we need to convert the data to numbers. We can use CountVectorizer or TF-IDF for this.
# from https://stackoverflow.com/questions/5511708/adding-words-to-nltk-stoplist
# add certain words to the stop_words library
import nltk

stopwords = nltk.corpus.stopwords.words('english')
new_words = ('replace_me', 'removed', 'deleted', '0', '1', '2', '3', '4',
             '5', '6', '7', '8', '9', '00', '000')
for i in new_words:
    ...
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
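As a concrete illustration of that conversion step, here is a minimal CountVectorizer sketch, assuming the `X_train`/`X_test` text splits and the custom `stopwords` list from the cell above (variable names mirror those used later in the notebook; this is a sketch, not the notebook's exact cell):

```python
from sklearn.feature_extraction.text import CountVectorizer

# learn the vocabulary on the training text and build a document-term matrix
cntv = CountVectorizer(stop_words=stopwords)
train_data_features = cntv.fit_transform(X_train)  # fit + transform in one step
test_data_features = cntv.transform(X_test)        # reuse the learned vocabulary

print(train_data_features.shape)  # (n_documents, n_vocabulary_terms)
```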
Keyword = CHANGELING
y_test
# This section, next number of cells, borrowed from Noelle's lesson on NLP EDA
# fit_transform() does two things: First, it fits the model and
# learns the vocabulary; second, it transforms our training data
# into feature vectors. The input to fit_transform should be a
# list of strings.
train_data_features...
Logistic Regression without doing anything, really: 0.8318177373618852
Decision Tree without doing anything, really: 0.8375867800919136
********************************************************************************
Logistic Regression Test Score without doing anything, really: 0.7876417676965194
Decision Tree Test Sc...
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
So we see that we are above our baseline of 57% accuracy, which comes from always guessing the majority subreddit without trying to predict. We also see that our initial runs without any GridSearch or hyperparameter tuning give us a fairly overfit model in either case. **Let's see next what happens when we sift through our data with stopwords, etc...
space_wars.shape
space_wars.describe()
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Feature Engineering. Map word-count and character-length functions onto the 'body' column to see the difference in each.
def word_count(string):
    '''
    Returns the number of words or tokens in a string literal, splitting on
    spaces, regardless of word length. This function will count space-separated
    punctuation as a word, such as " : " where the colon would be counted.

    string, a string
    '''
    str_list = stri...
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
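A compact sketch of the two mappings described above, assuming the `space_wars` dataframe with a text `body` column (`char_length` is a hypothetical helper name, not from the notebook):

```python
def word_count(string):
    """Number of space-separated tokens in a string (punctuation tokens count)."""
    return len(str(string).split(" "))

def char_length(string):
    """Number of characters, including spaces and punctuation."""
    return len(str(string))

# map both features onto the 'body' column and compare their distributions
space_wars['word_count'] = space_wars['body'].map(word_count)
space_wars['char_length'] = space_wars['body'].map(char_length)
space_wars[['word_count', 'char_length']].describe()
```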
Text Feature Extraction Follow along in the NLP EDA II video and do some analysis
X_train_df = pd.DataFrame(train_data_features.toarray(), columns=cntv.get_feature_names())
X_train_df
X_train_df['subreddit']

# get count of top-occurring words

# empty dictionary
top_words = {}

# loop through columns
for i in X_train_df.columns:
    # save sum of each column in dictionary
    ...
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Implement Naive Bayes, because it's in the project instructions. Multinomial Naive Bayes often outperforms other models on text, despite the features not being independent.
pipe = Pipeline([
    ('count_v', CountVectorizer()),
    ('nb', MultinomialNB())
])

pipe_params = {
    'count_v__max_features': [2000, 5000, 9000],
    'count_v__stop_words': [stopwords],
    'count_v__min_df': [2, 3, 10],
    'count_v__max_df': [.9, .8, .7],
    'count_v__ngram_range': [(1, 1), (1, 2)]
}

gs = GridSe...
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
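The cell above cuts off at `gs = GridSe...`; a minimal sketch of how such a grid search is typically finished and scored, assuming the `pipe` and `pipe_params` defined above and the existing train/test split (the `cv` and `n_jobs` values are illustrative, not from the notebook):

```python
from sklearn.model_selection import GridSearchCV  # already imported earlier in the notebook

# exhaustively search the parameter grid with cross-validation
gs = GridSearchCV(pipe, param_grid=pipe_params, cv=5, n_jobs=-1)
gs.fit(X_train, y_train)

print(gs.best_params_)
print('Train accuracy:', gs.score(X_train, y_train))
print('Test accuracy: ', gs.score(X_test, y_test))
```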
So far, the Multinomial Naive Bayes algorithm is the top performer at 79.28% accuracy. The confusion matrix below is very similar to that of the other models.
# Get predictions
preds = gs.predict(X_test)

# Save confusion matrix values
tn, fp, fn, tp = confusion_matrix(y_test, preds).ravel()

# View confusion matrix
plot_confusion_matrix(gs, X_test, y_test, cmap='Blues', values_format='d');

# Calculate the specificity
spec = tn / (tn + fp)
print('Specificity:', spec)
Specificity: 0.5670289855072463
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
None of the 1620 different models we tried in this pipeline performed noticeably better than the thrown-together Logistic Regression classifier that we started out with. Let's try TF-IDF, then Random Forest, and finally Support Vector Machines. Our last run brought the best accuracy score to 79.3%. TF-IDF
# Redefine the training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.1,
                                                    stratify=y, random_state=42)
tvec = TfidfVectori...
Specificity: 0.5489130434782609
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
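The TF-IDF cell above is truncated at the vectorizer; a minimal sketch of the TF-IDF variant, assuming the same splits and `stopwords` list (the pipeline shape is illustrative, not the notebook's exact code):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.naive_bayes import MultinomialNB

# same idea as the CountVectorizer pipeline, but weight terms by TF-IDF
tvec_pipe = Pipeline([
    ('tvec', TfidfVectorizer(stop_words=stopwords)),
    ('nb', MultinomialNB()),
])
tvec_pipe.fit(X_train, y_train)
print('Test accuracy:', tvec_pipe.score(X_test, y_test))
```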
Random Forest, Bagging, and Support Vector Machines
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Before we run the decision tree model or RandomForestClassifier(), we need to convert all of the data to numeric data
rf = RandomForestClassifier()
et = ExtraTreesClassifier()
cross_val_score(rf, train_data_features, X_train_df['subreddit']).mean()
cross_val_score(et, train_data_features, X_train_df['subreddit']).mean()
# cross_val_score(rf, test_data_features, y_test).mean()
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Make sure that we are using X and y data that are completely numeric and free of nulls
space_wars.head(1)
space_wars.shape

pipe_rf = Pipeline([
    ('count_v', CountVectorizer()),
    ('rf', RandomForestClassifier()),
])

pipe_ef = Pipeline([
    ('count_v', CountVectorizer()),
    ('ef', ExtraTreesClassifier()),
])

pipe_params = {
    'count_v__max_features': [2000, 5000, 9000],
    'count_v__stop_words':...
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Now run through Gradient Boosting and SVM
from sklearn.ensemble import GradientBoostingClassifier, AdaBoostClassifier, VotingClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Using samples from Riley's Lessons:
AdaBoostClassifier()
GradientBoostingClassifier()
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Use the CountVectorizer to convert the data to numeric data prior to running it through the VotingClassifier below.
'count_v__max_df': 0.9,
'count_v__max_features': 9000,
'count_v__min_df': 2,
'count_v__ngram_range': (1, 1),

knn_pipe = Pipeline([
    ('ss', StandardScaler()),
    ('knn', KNeighborsClassifier())
])

%%time
vote = VotingClassifier([
    ('ada', AdaBoostClassifier(base_estimator=DecisionTreeClassifier())),
    ('gra...
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
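The truncated cell above mixes the best CountVectorizer parameters with the ensemble pipelines; a minimal sketch of the combined vectorizer-plus-VotingClassifier pipeline it describes, assuming the earlier train/test split (the estimator choices and parameter values mirror the fragments above but are a sketch, not the notebook's exact cell):

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import (VotingClassifier, AdaBoostClassifier,
                              GradientBoostingClassifier)

vote_pipe = Pipeline([
    # best CountVectorizer settings found by the earlier grid search
    ('count_v', CountVectorizer(max_df=0.9, max_features=9000,
                                min_df=2, ngram_range=(1, 1))),
    # majority-vote ensemble over the boosted models
    ('vote', VotingClassifier([
        ('ada', AdaBoostClassifier()),
        ('grad', GradientBoostingClassifier()),
    ])),
])
vote_pipe.fit(X_train, y_train)
print('Test accuracy:', vote_pipe.score(X_test, y_test))
```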
Uses a fine-tuned BERT network to classify biomechanics papers from PubMed.
# Check date
!rm /etc/localtime
!ln -s /usr/share/zoneinfo/America/Los_Angeles /etc/localtime
!date
# might need to restart runtime if timezone didn't change

## Install & load libraries
!pip install tensorflow==2.7.0
try:
    from official.nlp import optimization
except:
    !pip install -q -U tf-models-official==2.4.0
    ...
_____no_output_____
Apache-2.0
classify_papers.ipynb
jouterleys/BiomchBERT
Wilcoxon and Chi Squared
import numpy as np
import pandas as pd

df = pd.read_csv("prepared_neuror2_data.csv")

def stats_for_neuror2_range(lo, hi):
    admissions = df[df.NR2_Score.between(lo, hi)]
    total_patients = admissions.shape[0]
    readmits = admissions[admissions.UnplannedReadmission]
    total_readmits = readmits.shape[0]
    retu...
_____no_output_____
BSD-2-Clause
Wilcoxon and Chi Squared.ipynb
massie/readmission-study
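The excerpt shows the range-summary helper but not the tests named in the title; a minimal chi-squared sketch on the same data, assuming the `NR2_Score` and `UnplannedReadmission` columns from the cell above (the cutoff of 10 is a hypothetical illustration, not a value from the study):

```python
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("prepared_neuror2_data.csv")

# 2x2 contingency table: score above/below a cutoff vs. readmitted or not
table = pd.crosstab(df.NR2_Score > 10, df.UnplannedReadmission)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.3f}, p={p:.4f}, dof={dof}")
```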
Note: This notebook was executed on Google Colab Pro.
!pip3 install pytorch-lightning --quiet

from google.colab import drive
drive.mount('/content/drive')

import os
os.chdir('/content/drive/MyDrive/Colab Notebooks/atmacup11/experiments')
_____no_output_____
MIT
experiments/exp027.ipynb
Quvotha/atmacup11
Settings
EXP_NO = 27
SEED = 1
N_SPLITS = 5
TARGET = 'target'
GROUP = 'art_series_id'
REGRESSION = False
assert((TARGET, REGRESSION) in (('target', True), ('target', False), ('sorting_date', True)))
MODEL_NAME = 'resnet'
BATCH_SIZE = 512
NUM_EPOCHS = 500
_____no_output_____
MIT
experiments/exp027.ipynb
Quvotha/atmacup11
Library
from collections import defaultdict
from functools import partial
import gc
import glob
import json
from logging import getLogger, StreamHandler, FileHandler, DEBUG, Formatter
import pickle
import os
import sys
import time

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
fro...
_____no_output_____
MIT
experiments/exp027.ipynb
Quvotha/atmacup11
Prepare directory
output_dir = experiment_dir_of(EXP_NO)
output_dir
_____no_output_____
MIT
experiments/exp027.ipynb
Quvotha/atmacup11
Prepare logger
logger = getLogger(__name__)
'''Reference
https://docs.python.org/ja/3/howto/logging-cookbook.html
'''
logger.setLevel(DEBUG)
# create file handler which logs even debug messages
fh = FileHandler(os.path.join(output_dir, 'log.log'))
fh.setLevel(DEBUG)
# create console handler with a higher log level
ch = StreamHandler...
2021-07-21 19:41:13,190 - __main__ - INFO - Experiment no: 27
2021-07-21 19:41:13,192 - __main__ - INFO - CV: StratifiedGroupKFold
2021-07-21 19:41:13,194 - __main__ - INFO - SEED: 1
2021-07-21 19:41:13,197 - __main__ - INFO - REGRESSION: False
MIT
experiments/exp027.ipynb
Quvotha/atmacup11
Load csv files
SINCE = time.time()
logger.debug('Start loading csv files ({:.3f} seconds passed)'.format(time.time() - SINCE))
train, test, materials, techniques, sample_submission = load_csvfiles()
logger.debug('Complete loading csv files ({:.3f} seconds passed)'.format(time.time() - SINCE))
train
test
_____no_output_____
MIT
experiments/exp027.ipynb
Quvotha/atmacup11
Cross validation
seed_everything(SEED)
train.set_index('object_id', inplace=True)
fold_object_ids = load_cv_object_ids()
for i, (train_object_ids, valid_object_ids) in enumerate(zip(fold_object_ids[0], fold_object_ids[1])):
    assert(set(train_object_ids) & set(valid_object_ids) == set())
    num_fold = i + 1
    logger.debug('Start f...
2021-07-21 19:41:14,941 - __main__ - DEBUG - Start fold 1 (1.737 seconds passed)
2021-07-21 19:41:14,948 - __main__ - DEBUG - Start training model (1.744 seconds passed)
2021-07-21 19:41:21,378 - __main__ - DEBUG - Epoch 0/499
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:718: UserWarning: Named tensors...
MIT
experiments/exp027.ipynb
Quvotha/atmacup11
Evaluation
rmse = partial(mean_squared_error, squared=False)
# qwk = partial(cohen_kappa_score, labels=np.sort(train['target'].unique()), weights='quadratic')

@np.vectorize
def predict(proba_0: float, proba_1: float, proba_2: float, proba_3: float) -> int:
    return np.argmax((proba_0, proba_1, proba_2, proba_3))

metrics = defau...
_____no_output_____
MIT
experiments/exp027.ipynb
Quvotha/atmacup11
Training set
pred_train_dfs = []
for i in range(N_SPLITS):
    num_fold = i + 1
    logger.debug('Evaluate cv result (training set) Fold {}'.format(num_fold))
    # Read cv result
    filepath_fold_train = os.path.join(output_dir, f'cv_fold{num_fold}_training.csv')
    pred_train_df = pd.read_csv(filepath_fold_train)
    pred_train...
2021-07-22 02:18:12,072 - __main__ - DEBUG - Write cv result to ../scripts/../experiments/exp027/prediction_train.csv
MIT
experiments/exp027.ipynb
Quvotha/atmacup11
Validation set
pred_valid_dfs = []
for i in range(N_SPLITS):
    num_fold = i + 1
    logger.debug('Evaluate cv result (validation set) Fold {}'.format(num_fold))
    # Read cv result
    filepath_fold_valid = os.path.join(output_dir, f'cv_fold{num_fold}_validation.csv')
    pred_valid_df = pd.read_csv(filepath_fold_valid)
    pred_v...
2021-07-22 02:18:12,298 - __main__ - DEBUG - Write metrics to ../scripts/../experiments/exp027/metrics.json
MIT
experiments/exp027.ipynb
Quvotha/atmacup11
Prediction
pred_test_dfs = []
for i in range(N_SPLITS):
    num_fold = i + 1
    # Read cv result
    filepath_fold_test = os.path.join(output_dir, f'cv_fold{num_fold}_test.csv')
    pred_test_df = pd.read_csv(filepath_fold_test)
    pred_test_dfs.append(pred_test_df)
pred_test = pd.concat(pred_test_dfs).groupby('object_id').sum(...
2021-07-22 02:18:12,639 - __main__ - DEBUG - Complete (23819.435 seconds passed)
MIT
experiments/exp027.ipynb
Quvotha/atmacup11
Qonto - Get statement aggregated by date. **Tags:** qonto, bank, statement, naas_drivers. Input — Import library
from naas_drivers import qonto
_____no_output_____
BSD-3-Clause
Qonto/Qonto_Get_statement_aggregated_by_date.ipynb
Charles-de-Montigny/awesome-notebooks
Get your Qonto credentials. How to get your credentials?
QONTO_USER_ID = 'YOUR_USER_ID'
QONTO_SECRET_KEY = 'YOUR_SECRET_KEY'
_____no_output_____
BSD-3-Clause
Qonto/Qonto_Get_statement_aggregated_by_date.ipynb
Charles-de-Montigny/awesome-notebooks
Parameters
# Date to start extraction, format: "YYYY-MM-DD", example: "2021-01-01"
date_from = None
# Date to end extraction, format: "YYYY-MM-DD", example: "2021-01-01", default = now
date_to = None
_____no_output_____
BSD-3-Clause
Qonto/Qonto_Get_statement_aggregated_by_date.ipynb
Charles-de-Montigny/awesome-notebooks
Model — Get statement aggregated by date
df_statement = qonto.connect(QONTO_USER_ID, QONTO_SECRET_KEY).statement.aggregated(date_from, date_to)
_____no_output_____
BSD-3-Clause
Qonto/Qonto_Get_statement_aggregated_by_date.ipynb
Charles-de-Montigny/awesome-notebooks
Output — Display result
df_statement
_____no_output_____
BSD-3-Clause
Qonto/Qonto_Get_statement_aggregated_by_date.ipynb
Charles-de-Montigny/awesome-notebooks
Essential Objects. This tutorial covers several object types that are foundational to much of what pyGSTi does: [circuits](circuits), [processor specifications](pspecs), [models](models), and [data sets](datasets). Our objective is to explain what these objects are and how they relate to one another at a high level whi...
import pygsti
from pygsti.circuits import Circuit
from pygsti.models import Model
from pygsti.data import DataSet
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
Circuits. The `Circuit` object encapsulates a quantum circuit as a sequence of *layers*, each of which contains zero or more non-identity *gates*. A `Circuit` has some number of labeled *lines* and each gate label is assigned to one or more lines. Line labels can be integers or strings. Gate labels have two parts: a...
c = Circuit([('Gx',0),('Gcnot',0,1),(),('Gy',3)], line_labels=[0,1,2,3])
print(c)
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
If you want multiple gates in a single layer, just put those gate labels in their own nested list:
c = Circuit([('Gx',0),[('Gcnot',0,1),('Gy',3)],()], line_labels=[0,1,2,3])
print(c)
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
We distinguish three basic types of circuit layers. We call layers containing quantum gates *operation layers*. All the circuits we've seen so far just have operation layers. It's also possible to have a *preparation layer* at the beginning of a circuit and a *measurement layer* at the end of a circuit. There can a...
c = Circuit(['rho',('Gz',1),[('Gswap',0,1),('Gy',2)],'Mz'], line_labels=[0,1,2])
print(c)
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
Finally, when dealing with small systems (e.g. 1 or 2 qubits), we typically just use a `str`-type label (without any line-labels) to denote every possible layer. In this case, all the labels operate on the entire state space so we don't need the notion of 'lines' in a `Circuit`. When there are no line-labels, a `Circ...
c = Circuit(['Gx','Gy','Gi'])
print(c)
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
Pretty simple, right? The `Circuit` object allows you to easily manipulate its labels (similar to a NumPy array) and even perform some basic operations like depth reduction and simple compiling. For lots more details on how to create, modify, and use circuit objects see the [circuit tutorial](objects/Circuit.ipynb). ...
pspec = pygsti.processors.QubitProcessorSpec(num_qubits=2,
                                             gate_names=['Gxpi2', 'Gypi2', 'Gcnot'],
                                             geometry="line")
print("Qubit labels are", pspec.qubit_labels)
print("X(pi/2) gates on qubits: ", pspec.resolved_availability('Gxpi2'))
print("CNOT gates on qubits: ", pspec.re...
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
creates a processor specification for 2 qubits with $X(\pi/2)$, $Y(\pi/2)$, and CNOT gates. Setting the geometry to `"line"` causes 1-qubit gates to be available on each qubit and the CNOT between the two qubits (in either control/target direction). Processor specifications are used to build experiment designs and ...
mdl = pygsti.models.create_explicit_model(pspec)
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
This creates an `ExplicitOpModel` with a default preparation (prepares all qubits in the zero-state) labeled `'rho0'`, a default measurement labeled `'Mdefault'` in the Z-basis and with 5 layer-operations given by the labels in the 2nd argument (the first argument is akin to a circuit's line labels and the third argume...
print("Preparations: ", ', '.join(map(str,mdl.preps.keys()))) print("Measurements: ", ', '.join(map(str,mdl.povms.keys()))) print("Layer Ops: ", ', '.join(map(str,mdl.operations.keys())))
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
We can now use this model to do what models were made to do: compute the outcome probabilities of circuits.
c = Circuit([('Gxpi2',0),('Gcnot',0,1),('Gypi2',1)], line_labels=[0,1])
print(c)
mdl.probabilities(c)  # Compute the outcome probabilities of circuit `c`
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
An `ExplicitOpModel` only "knows" how to operate on circuit layers it explicitly contains in its dictionaries, so, for example, a circuit layer with two X gates in parallel (layer-label = `[('Gxpi2',0),('Gxpi2',1)]`) cannot be used with our model until we explicitly associate an operation with the layer-label `[('Gxpi2',...
import numpy as np

c = Circuit([[('Gxpi2',0),('Gxpi2',1)],('Gxpi2',1)], line_labels=[0,1])
print(c)

try:
    p = mdl.probabilities(c)
except KeyError as e:
    print("!!KeyError: ", str(e))

# Create an operation for two parallel X-gates & rerun (now it works!)
mdl.operations[ [('Gxpi2',0),('Gxpi2',1)] ]...
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
Implicit layer-operation models. In the above example, you saw how it is possible to manually add a layer-operation to an `ExplicitOpModel` based on its other, more primitive layer operations. This often works fine for a few qubits, but can quickly become tedious as the number of qubits increases (since the number of p...
dataset_txt = \
"""## Columns = 00 count, 01 count, 10 count, 11 count
{}                        100   0   0   0
Gxpi2:0                    55   5  40   0
Gxpi2:0Gypi2:1             20  27  23  30
Gxpi2:0^4                  85   3  10   2
Gxpi2:0Gcnot:0:1           45   1   4  50
[Gxpi2:0Gxpi2:1]Gypi2:0    25  32  17  26
"""
with open("tutor...
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
The second is by simulating a `Model` and thereby generating "fake data". This essentially calls `mdl.probabilities(c)` for each circuit in a given list, and samples from the output probability distribution to obtain outcome counts:
circuit_list = pygsti.circuits.to_circuits([
    (),
    (('Gxpi2',0),),
    (('Gxpi2',0),('Gypi2',1)),
    (('Gxpi2',0),)*4,
    (('Gxpi2',0),('Gcnot',0,1)),
    ...
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
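A minimal sketch of that simulation step, assuming pyGSTi's `pygsti.data.simulate_data` helper (the function name and arguments are an assumption based on recent pyGSTi releases, not confirmed by this excerpt) and the `mdl` and `circuit_list` built above:

```python
# Sample fake counts from the model's outcome probabilities
# (name/signature assumed; check your pyGSTi version's docs)
ds_fake = pygsti.data.simulate_data(mdl, circuit_list, num_samples=100, seed=1234)
print(ds_fake)
```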
Outcome counts are accessible by indexing a `DataSet` as if it were a dictionary with `Circuit` keys:
c = Circuit((('Gxpi2',0),('Gypi2',1)), line_labels=(0,1))
print(ds[c])                              # index using a Circuit
print(ds[ [('Gxpi2',0),('Gypi2',1)] ])    # or with something that can be converted to a Circuit
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
Because `DataSet` objects can also store *timestamped* data (see the [time-dependent data tutorial](objects/advanced/TimestampedDataSets.ipynb)), the values or "rows" of a `DataSet` aren't simple dictionary objects. When you'd like a `dict` of counts, use the `.counts` member of a data set row:
row = ds[c]
row['00']  # this is ok
for outlbl, cnt in row.counts.items():  # Note: `row` doesn't have .items(), need ".counts"
    print(outlbl, cnt)
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
Another thing to note is that `DataSet` objects are "sparse" in that 0-counts are not typically stored:
c = Circuit([('Gxpi2',0)], line_labels=(0,1))
print("No 01 or 11 outcomes here: ", ds_fake[c])
for outlbl, cnt in ds_fake[c].counts.items():
    print("Item: ", outlbl, cnt)  # Note: this loop never loops over 01 or 11!
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
Hacker Factory Cyber Hackathon solution by Team Jugaad (Abhiraj Singh Rajput, Deepanshu Gupta, Manuj Mehrotra). We are a team that is NOT moved by buzzwords like Machine Learning, Data Science, AI, etc. We are people who get an adrenaline rush from seeking the solution to a problem. And ...
import pandas as pd df = pd.read_csv("train.csv", sep=";") df.head() df.columns df.shape
_____no_output_____
MIT
Cyber Hackathon.ipynb
MANUJMEHROTRA/CyberHacathon
Let's get the top 10 permissions used by our malware samples. Malicious
series = pd.Series.sort_values(df[df.type==1].sum(axis=0), ascending=False)[1:11]
series
pd.Series.sort_values(df[df.type==0].sum(axis=0), ascending=False)[:10]

import matplotlib.pyplot as plt

fig, axs = plt.subplots(nrows=2, sharex=True)
pd.Series.sort_values(df[df.type==0].sum(axis=0), ascending=False)[:10].plot.ba...
_____no_output_____
MIT
Cyber Hackathon.ipynb
MANUJMEHROTRA/CyberHacathon
Now we will try to predict with the existing data set, i.e., model creation. Machine Learning Models
from sklearn.naive_bayes import GaussianNB, BernoulliNB
from sklearn.metrics import accuracy_score, classification_report, roc_auc_score
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import tra...
DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=None, max_features=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, presort=False, random_state=None,...
MIT
Cyber Hackathon.ipynb
MANUJMEHROTRA/CyberHacathon
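The imports above are cut off before the modeling code; a minimal sketch of one model-creation path consistent with the DecisionTreeClassifier output shown, assuming `df` holds the permissions matrix with a binary `type` target (the split size and estimator are illustrative):

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report

# features are the permission flags; target is malicious (1) vs. benign (0)
X = df.drop(columns=['type'])
y = df['type']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=42)

clf = DecisionTreeClassifier()
clf.fit(X_train, y_train)
preds = clf.predict(X_test)
print(accuracy_score(y_test, preds))
print(classification_report(y_test, preds))
```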
Dynamic Analysis. For this approach, we used a set of pcap files from the DroidCollector project comprising 4705 benign and 7846 malicious applications. All of the files were processed by our feature extractor script (a result from [4]); the idea of this analysis is to answer the next question, according to the stati...
import pandas as pd

data = pd.read_csv("android_traffic.csv", sep=";")
data.head()
data.columns
data.shape
data.type.value_counts()
data.isna().sum()
data = data.drop(['duracion','avg_local_pkt_rate','avg_remote_pkt_rate'], axis=1).copy()
data.describe()
sns.pairplot(data)
data.loc[data.tcp_urg_packet > 0].shape[0]
dat...
_____no_output_____
MIT
Cyber Hackathon.ipynb
MANUJMEHROTRA/CyberHacathon
Salary Data
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
import seaborn as sns

salary = pd.read_csv("Salary_Data.csv")
salary.head()
salary.info()
salary.describe()
X = salary['YearsExperience'].values
y...
_____no_output_____
MIT
salary-data.ipynb
JCode1986/data_analysis
Day and Night Image Classifier. The day/night image dataset consists of 200 RGB color images in two categories: day and night. There are equal numbers of each example: 100 day images and 100 night images. We'd like to build a classifier that can accurately label these images as day or night, and that relies on finding...
import cv2 # computer vision library import helpers import numpy as np import matplotlib.pyplot as plt import matplotlib.image as mpimg %matplotlib inline
_____no_output_____
MIT
1_1_Image_Representation/6_2. Standardizing the Data.ipynb
georgiagn/CVND_Exercises
Training and Testing Data. The 200 day/night images are separated into training and testing datasets.
* 60% of these images are training images, for you to use as you create a classifier.
* 40% are test images, which will be used to test the accuracy of your classifier.
First, we set some variables to keep track of some w...
# Image data directories
image_dir_training = "day_night_images/training/"
image_dir_test = "day_night_images/test/"
_____no_output_____
MIT
1_1_Image_Representation/6_2. Standardizing the Data.ipynb
georgiagn/CVND_Exercises
Load the datasets. These first few lines of code will load the training day/night images and store all of them in a variable, `IMAGE_LIST`. This list contains the images and their associated label ("day" or "night"). For example, the first image-label pair in `IMAGE_LIST` can be accessed by index: ```IMAGE_LIST[0][:]``...
# Using the load_dataset function in helpers.py
# Load training data
IMAGE_LIST = helpers.load_dataset(image_dir_training)
_____no_output_____
MIT
1_1_Image_Representation/6_2. Standardizing the Data.ipynb
georgiagn/CVND_Exercises
--- 1. Visualize the input images
# Print out 1. The shape of the image and 2. The image's label

# Select an image and its label by list index
image_index = 0
selected_image = IMAGE_LIST[image_index][0]
selected_label = IMAGE_LIST[image_index][1]

# Display image and data about it
plt.imshow(selected_image)
print("Shape: "+str(selected_image.shape))
p...
Shape: (458, 800, 3) Label: day
MIT
1_1_Image_Representation/6_2. Standardizing the Data.ipynb
georgiagn/CVND_Exercises
2. Pre-process the Data. After loading in each image, you have to standardize the input and output. Solution code: you are encouraged to try to complete this code on your own, but if you are struggling or want to make sure your code is correct, there is solution code in the `helpers.py` file in this directory. You can loo...
# This function should take in an RGB image and return a new, standardized version
def standardize_input(image):
    ## Resize the image so that all "standard" images are the same size, 600x1100 (h x w)
    standard_im = cv2.resize(image, (1100, 600))  # cv2.resize takes (width, height)
    return standard_im
_____no_output_____
MIT
1_1_Image_Representation/6_2. Standardizing the Data.ipynb
georgiagn/CVND_Exercises
TODO: Standardize the output. With each loaded image, you also need to specify the expected output. For this, use binary numerical values, 0/1 = night/day.
# Examples: # encode("day") should return: 1 # encode("night") should return: 0 def encode(label): numerical_val = 0 ## TODO: complete the code to produce a numerical label if label == "day": numerical_val = 1 return numerical_val
_____no_output_____
MIT
1_1_Image_Representation/6_2. Standardizing the Data.ipynb
georgiagn/CVND_Exercises
Construct a `STANDARDIZED_LIST` of input images and output labels. This function takes in a list of image-label pairs and outputs a **standardized** list of resized images and numerical labels. This uses the functions you defined above to standardize the input and output, so those functions must be complete for this sta...
def standardize(image_list):
    # Empty image data array
    standard_list = []

    # Iterate through all the image-label pairs
    for item in image_list:
        image = item[0]
        label = item[1]

        # Standardize the image
        standardized_im = standardize_input(image)

        # Create a numer...
_____no_output_____
MIT
1_1_Image_Representation/6_2. Standardizing the Data.ipynb
georgiagn/CVND_Exercises
Visualize the standardized data. Display a standardized image from STANDARDIZED_LIST.
# Display a standardized image and its label

# Select an image by index
image_num = 0
selected_image = STANDARDIZED_LIST[image_num][0]
selected_label = STANDARDIZED_LIST[image_num][1]

# Display image and data about it
## TODO: Make sure the images have numerical labels and are of the same size
plt.imshow(selected_ima...
Shape: (458, 800, 3) Label [1 = day, 0 = night]: 1
MIT
1_1_Image_Representation/6_2. Standardizing the Data.ipynb
georgiagn/CVND_Exercises
Generate random data
# Use the default parameters: centroids drawn from a uniform distribution
# TODO: the task says we can try interesting patterns; we could specify the
# centroids by hand and then generate points around them, see the gen.py docs
centroids, points, N = gen_data()
y_true = np.repeat(np.arange(len(N)), N)
len(y_true)
len(points)

# quick plot
plt.figure(figsize=(10,10))
plot_generated_data(centroids, points, N)
len(points)
_____no_output_____
Apache-2.0
optimize.ipynb
QSCTech-Sange/Optimization_Project
AGM Sample
lbd = 0.05
delta = 1e-3
n = len(points)
step = step_size(n, lbd, delta)
grad = lambda X, B, D: grad_hub_matrix(X, delta, points, lbd, B, D)
ans, AGM_loss = AGM(grad, points, step, 0.005)
groups = get_group(ans, tol=1.5)
groups
purity_score(y_true, groups)
plt.figure(figsize=(10,10))
plot_res_data(points, ans, groups)
plt.figure(figsiz...
_____no_output_____
Apache-2.0
optimize.ipynb
QSCTech-Sange/Optimization_Project
GM Sample
lbd = 0.05
delta = 1e-3
func = lambda X, B: loss_func(X, points, lbd, delta, B)
grad = lambda X, B, D: grad_hub_matrix(X, delta, points, lbd, B, D)
ans2, GM_loss = GM(points, func, grad, 1e-2)
len(GM_loss)
groups = get_group(ans2, tol=2)
plt.figure(figsize=(10,10))
plot_res_data(points, ans2, groups, way='points')
plt.rc_context({'axes.e...
_____no_output_____
Apache-2.0
optimize.ipynb
QSCTech-Sange/Optimization_Project
GM_BB Sample
lbd = 0.05
delta = 1e-3
func = lambda X, B: loss_func(X, points, lbd, delta, B)
grad = lambda X, B, D: grad_hub_matrix(X, delta, points, lbd, B, D)
ans_BB, GM_BB_loss = GM_BB(points, func, grad, 1e-5)
len(GM_BB_loss)
groups = get_group(ans_BB, tol=2)
plt.figure(figsize=(10,10))
plot_res_data(points, ans_BB, groups, way='points')
plt.rc_c...
_____no_output_____
Apache-2.0
optimize.ipynb
QSCTech-Sange/Optimization_Project
BFGS: tol=0.03 is close to the practical minimum; with a smaller tolerance, s·y becomes too small and 1/(s·y) evaluates to NaN.
lbd = 0.05
delta = 1e-3
func = lambda X, B: loss_func(X, points, lbd, delta, B)
grad = lambda X, B, D: grad_hub_matrix(X, delta, points, lbd, B, D)
ans_BFGS, BFGS_loss = BFGS(points, func, grad, 0.003)
groups = get_group(ans_BFGS, tol=2)
plt.figure(figsize=(10,10))
plot_res_data(points, ans_BFGS, groups)
plt.rc_context({'axes.edgecolor'...
_____no_output_____
Apache-2.0
optimize.ipynb
QSCTech-Sange/Optimization_Project
LBFGS: tol=0.03 is close to the practical minimum; with a smaller tolerance, s·y becomes too small and 1/(s·y) evaluates to NaN.
lbd = 0.05
delta = 1e-3
func = lambda X, B: loss_func(X, points, lbd, delta, B)
grad = lambda X, B, D: grad_hub_matrix(X, delta, points, lbd, B, D)
ans_LBFGS, LBFGS_loss = LBFGS(points, func, grad, 0.003, 1, 5)
groups = get_group(ans_LBFGS, tol=2)
plt.figure(figsize=(10,10))
plot_res_data(points, ans_LBFGS, groups)
plt.rc_context({'axes.e...
_____no_output_____
Apache-2.0
optimize.ipynb
QSCTech-Sange/Optimization_Project
Compute the Hessian
from itertools import combinations

def huber(x, delta):
    '''
    Args:
        x: input that has been norm2ed (n*(n-1)/2,)
        delta: threshold
    Output:
        (n*(n-1)/2,)
    '''
    return np.where(x > delta ** 2, np.sqrt(x) - delta / 2, x / (2 * delta))

def pair_col_diff_norm2(x, idx):
    '''
    comp...
_____no_output_____
Apache-2.0
optimize.ipynb
QSCTech-Sange/Optimization_Project
Test the Hessian
n, d = 4, 2
test = OBJ(d, n, 0.1)
X = np.arange(n*d).reshape(n, d)
t = np.arange(n*d).reshape((d, n)).astype(float)
test.hessiant(X.T, t, 0.1)
test.hessiant(X.T, t, 0.1)

n, d = 4, 2
delta = 0.1
X = np.arange(n*d).reshape(n, d)
B = gen_B(n, sparse=False)
p = np.arange(n*d).reshape((n*d, 1))
Hessian_hub(X, p, delta, B)
i = 1
B = g...
_____no_output_____
Apache-2.0
optimize.ipynb
QSCTech-Sange/Optimization_Project
Copyright 2018 The AdaNet Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under...
_____no_output_____
Apache-2.0
adanet/examples/tutorials/customizing_adanet.ipynb
xhlulu/adanet
Customizing AdaNet. Often, as a researcher or machine learning practitioner, you will have some prior knowledge about a dataset. Ideally you should be able to encode that knowledge into your machine learning algorithm. With `adanet`, you can do so by defining the *neural architecture search space* that the AdaNet alg...
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import functools
import adanet
from adanet.examples import simple_dnn
import tensorflow as tf

# The random seed to use.
RANDOM_SEED = 42
_____no_output_____
Apache-2.0
adanet/examples/tutorials/customizing_adanet.ipynb
xhlulu/adanet
Fashion MNIST dataset. In this example, we will use the Fashion MNIST dataset [[Xiao et al., 2017](https://arxiv.org/abs/1708.07747)] for classifying fashion apparel images into one of ten categories:
1. T-shirt/top
2. Trouser
3. Pullover
4. Dress
5. Coat
6. Sandal
7. Shirt
8. Sneaker
9. Bag
10. Ankle boot
![Fashion MNIST](...
(x_train, y_train), (x_test, y_test) = ( tf.keras.datasets.fashion_mnist.load_data())
_____no_output_____
Apache-2.0
adanet/examples/tutorials/customizing_adanet.ipynb
xhlulu/adanet
Supply the data in TensorFlow. Our first task is to supply the data in TensorFlow. Using the tf.estimator.Estimator convention, we will define a function that returns an `input_fn` which returns feature and label `Tensors`. We will also use the `tf.data.Dataset` API to feed the data into our models.
FEATURES_KEY = "images" def generator(images, labels): """Returns a generator that returns image-label pairs.""" def _gen(): for image, label in zip(images, labels): yield image, label return _gen def preprocess_image(image, label): """Preprocesses an image for an `Estimator`.""" # First let's...
_____no_output_____
Apache-2.0
adanet/examples/tutorials/customizing_adanet.ipynb
xhlulu/adanet
Establish baselines. The next task should be to get some baselines to see how our model performs on this dataset. Let's define some information to share with all our `tf.estimator.Estimators`:
# The number of classes.
NUM_CLASSES = 10

# We will average the losses in each mini-batch when computing gradients.
loss_reduction = tf.losses.Reduction.SUM_OVER_BATCH_SIZE

# A `Head` instance defines the loss function and metrics for `Estimators`.
head = tf.contrib.estimator.multi_class_head(
    NUM_CLASSES, loss_r...
_____no_output_____
Apache-2.0
adanet/examples/tutorials/customizing_adanet.ipynb
xhlulu/adanet
Let's start simple, and train a linear model:
#@test {"skip": true} #@title Parameters LEARNING_RATE = 0.001 #@param {type:"number"} TRAIN_STEPS = 5000 #@param {type:"integer"} BATCH_SIZE = 64 #@param {type:"integer"} estimator = tf.estimator.LinearClassifier( feature_columns=feature_columns, n_classes=NUM_CLASSES, optimizer=tf.train.RMSPropOptimiz...
Accuracy: 0.8413 Loss: 0.464809
Apache-2.0
adanet/examples/tutorials/customizing_adanet.ipynb
xhlulu/adanet
The linear model with default parameters achieves about **84.13% accuracy**. Let's see if we can do better with the `simple_dnn` AdaNet:
#@test {"skip": true} #@title Parameters LEARNING_RATE = 0.003 #@param {type:"number"} TRAIN_STEPS = 5000 #@param {type:"integer"} BATCH_SIZE = 64 #@param {type:"integer"} ADANET_ITERATIONS = 2 #@param {type:"integer"} estimator = adanet.Estimator( head=head, subnetwork_generator=simple_dnn.Generator( ...
Accuracy: 0.8566 Loss: 0.408646
Apache-2.0
adanet/examples/tutorials/customizing_adanet.ipynb
xhlulu/adanet
The `simple_dnn` AdaNet model with default parameters achieves about **85.66% accuracy**. This improvement can be attributed to `simple_dnn` searching over fully-connected neural networks, which have more expressive power than the linear model due to their non-linear activations. Fully-connected layers are permutation invari...
class SimpleCNNBuilder(adanet.subnetwork.Builder):
    """Builds a CNN subnetwork for AdaNet."""

    def __init__(self, learning_rate, max_iteration_steps, seed):
        """Initializes a `SimpleCNNBuilder`.

        Args:
            learning_rate: The float learning rate to use.
            max_iteration_steps: The number of steps per iter...
_____no_output_____
Apache-2.0
adanet/examples/tutorials/customizing_adanet.ipynb
xhlulu/adanet
Next, we extend an `adanet.subnetwork.Generator`, which defines the search space of candidate `SimpleCNNBuilders` to consider including in the final network. It can create one or more at each iteration with different parameters, and the AdaNet algorithm will select the candidate that best improves the overall neural network's ...
class SimpleCNNGenerator(adanet.subnetwork.Generator):
    """Generates a `SimpleCNN` at each iteration."""

    def __init__(self, learning_rate, max_iteration_steps, seed=None):
        """Initializes a `Generator` that builds `SimpleCNNs`.

        Args:
            learning_rate: The float learning rate to use.
            max_iteratio...
_____no_output_____
Apache-2.0
adanet/examples/tutorials/customizing_adanet.ipynb
xhlulu/adanet
With these defined, we pass them into a new `adanet.Estimator`:
#@title Parameters
LEARNING_RATE = 0.05  #@param {type:"number"}
TRAIN_STEPS = 5000  #@param {type:"integer"}
BATCH_SIZE = 64  #@param {type:"integer"}
ADANET_ITERATIONS = 2  #@param {type:"integer"}

max_iteration_steps = TRAIN_STEPS // ADANET_ITERATIONS
estimator = adanet.Estimator(
    head=head,
    subnetwork_gene...
Accuracy: 0.9041 Loss: 0.26544
Apache-2.0
adanet/examples/tutorials/customizing_adanet.ipynb
xhlulu/adanet
CW Attack Example. TJ Kim, 1.28.21. Summary: implement the CW attack on the toy network example given in the README of the GitHub repo: https://github.com/tj-kim/pytorch-cw2?organization=tj-kim&organization=tj-kim. A dummy network is made using the CIFAR example: https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html. Build Dumm...
import torch
import torchvision
import torchvision.transforms as transforms
_____no_output_____
MIT
Run Example.ipynb
tj-kim/pytorch-cw2
Download a few classes from the dataset.
batch_size = 10

transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLo...
Files already downloaded and verified Files already downloaded and verified
MIT
Run Example.ipynb
tj-kim/pytorch-cw2
Show a few images from the dataset.
import matplotlib.pyplot as plt
import numpy as np

# functions to show an image
def imshow(img):
    img = img / 2 + 0.5  # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.n...
_____no_output_____
MIT
Run Example.ipynb
tj-kim/pytorch-cw2
Define a NN.
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2...
_____no_output_____
MIT
Run Example.ipynb
tj-kim/pytorch-cw2
Define loss and optimizer
import torch.optim as optim

net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
_____no_output_____
MIT
Run Example.ipynb
tj-kim/pytorch-cw2
Train the Network
train_flag = False
PATH = './cifar_net.pth'

if train_flag:
    for epoch in range(2):  # loop over the dataset multiple times
        running_loss = 0.0
        for i, data in enumerate(trainloader, 0):
            # get the inputs; data is a list of [inputs, labels]
            inputs, labels = data
            # ...
_____no_output_____
MIT
Run Example.ipynb
tj-kim/pytorch-cw2
Save Existing Network.
if train_flag:
    torch.save(net.state_dict(), PATH)
_____no_output_____
MIT
Run Example.ipynb
tj-kim/pytorch-cw2
Test Acc.
correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test im...
Accuracy of the network on the 10000 test images: 52 %
MIT
Run Example.ipynb
tj-kim/pytorch-cw2
C&W AttackPerform attack on toy network.Before running the example code, we have to set the following parameters:- dataloader- mean- std The mean and std are one value per each channel of input
dataloader = trainloader
mean = (0.5, 0.5, 0.5)
std = (0.5, 0.5, 0.5)

import torch
import cw

inputs_box = (min((0 - m) / s for m, s in zip(mean, std)),
              max((1 - m) / s for m, s in zip(mean, std)))

"""
# an untargeted adversary
adversary = cw.L2Adversary(targeted=False,
                            confidence=...
attacked: tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3]) original: tensor([6, 9, 6, 2, 6, 9, 2, 3, 8, 5])
MIT
Run Example.ipynb
tj-kim/pytorch-cw2
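The docstring in the cell above is truncated mid-construction. A sketch of how the attack is typically invoked with this library, assuming the `cw.L2Adversary` call begun in the truncated docstring (the keyword arguments here are an assumption taken from the repository's README, not verified against the installed version):

```python
# Untargeted L2 C&W attack (argument names assumed from the truncated docstring)
adversary = cw.L2Adversary(targeted=False, confidence=0.0,
                           search_steps=10, box=inputs_box, optimizer_lr=5e-4)

inputs, targets = next(iter(dataloader))
adversarial_examples = adversary(net, inputs, targets, to_numpy=False)

# compare predicted labels on attacked images with the original labels
print('attacked:', net(adversarial_examples).argmax(dim=1))
print('original:', targets)
```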
Singular Value Decomposition
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn import decomposition
from scipy import linalg

categories = ['alt.atheism', 'talk.religion.misc', 'comp.graphics', 'sci.space']
remove = ('headers', 'footers', 'quotes')
newsgroups_train = fetch_20newsgroups(subset='train', categories=catego...
_____no_output_____
MIT
Topic_modelling_with_svd_and_nmf.ipynb
AdityaVarmaUddaraju/Topic_Modelling
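`show_topics` is called in the next cells but defined outside this excerpt; a minimal sketch of what such a helper typically looks like, assuming `vectorizer` is the fitted vectorizer behind `vectors` (the name, word count, and `get_feature_names_out` call are assumptions; older sklearn uses `get_feature_names`):

```python
import numpy as np

num_top_words = 8

def show_topics(components):
    """Return the top words for each topic row of a factor matrix."""
    vocab = np.array(vectorizer.get_feature_names_out())
    top_words = lambda t: [vocab[i] for i in np.argsort(t)[:-num_top_words - 1:-1]]
    return [' '.join(top_words(t)) for t in components]
```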
Non-negative Matrix Factorization
clf = decomposition.NMF(n_components=5, random_state=1)
W1 = clf.fit_transform(vectors)
H1 = clf.components_
show_topics(H1)
W1[0]
_____no_output_____
MIT
Topic_modelling_with_svd_and_nmf.ipynb
AdityaVarmaUddaraju/Topic_Modelling
Truncated SVD
!pip install fbpca
import fbpca

%time u, s, v = np.linalg.svd(vectors, full_matrices=False)
%time u, s, v = decomposition.randomized_svd(vectors, 10)
%time u, s, v = fbpca.pca(vectors, 10)
show_topics(v)
_____no_output_____
MIT
Topic_modelling_with_svd_and_nmf.ipynb
AdityaVarmaUddaraju/Topic_Modelling
Simple RNN. In this notebook, we're going to train a simple RNN to do **time-series prediction**. Given some set of input data, it should be able to generate a prediction for the next time step!
> * First, we'll create our data
> * Then, define an RNN in PyTorch
> * Finally, we'll train our network and see how it performs
Imp...
import torch
from torch import nn
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

plt.figure(figsize=(8,5))

# how many time steps/data pts are in one batch of data
seq_length = 20

# generate evenly spaced data pts
time_steps = np.linspace(0, np.pi, seq_length + 1)
data = np.sin(time_steps)
data....
_____no_output_____
MIT
recurrent-neural-networks/time-series/Simple_RNN.ipynb
johnsonjoseph37/deep-learning-v2-pytorch
--- Define the RNN. Next, we define an RNN in PyTorch. We'll use `nn.RNN` to create an RNN layer, then we'll add a last, fully-connected layer to get the output size that we want. An RNN takes in a number of parameters:
* **input_size** - the size of the input
* **hidden_dim** - the number of features in the RNN output and...
class RNN(nn.Module):
    def __init__(self, input_size, output_size, hidden_dim, n_layers):
        super(RNN, self).__init__()

        self.hidden_dim = hidden_dim

        # define an RNN with specified parameters
        # batch_first means that the first dim of the input and output will be the batch_size
        ...
_____no_output_____
MIT
recurrent-neural-networks/time-series/Simple_RNN.ipynb
johnsonjoseph37/deep-learning-v2-pytorch