markdown | code | output | license | path | repo_name |
|---|---|---|---|---|---|
Searching for bouts for a day of ephys recording. The microphone wav file is first exported in sglx_pipe-dev-sort-bouts-s_b1253_21-20210614; bouts are extracted in searchbout_s_b1253_21-ephys. | import os
import glob
import socket
import logging
import pickle
import numpy as np
import pandas as pd
from scipy.io import wavfile
from scipy import signal
### Using plotly instead of matplotlib
from plotly.subplots import make_subplots
import plotly.graph_objects as go
from importlib import reload
logger = log... | _____no_output_____ | MIT | notebooks/curate_bouts-s_b1253_21-plotly-ephys.ipynb | zekearneodo/ceciestunepipe |
Get the file locations for a session (day) of recordings | reload(et)
sess_par = {'bird': 's_b1253_21',
'sess': '2021-07-18',
'sort': 2}
exp_struct = et.get_exp_struct(sess_par['bird'], sess_par['sess'], ephys_software='sglx')
raw_folder = exp_struct['folders']['sglx']
derived_folder = exp_struct['folders']['derived']
bouts_folder = os.path.join(deri... | _____no_output_____ | MIT | notebooks/curate_bouts-s_b1253_21-plotly-ephys.ipynb | zekearneodo/ceciestunepipe |
Load and concatenate the files of the session | def read_session_auto_bouts(exp_struct):
    # list all files of the session
    # read into list of pandas dataframes and concatenate
    # read the search parameters of the first session
    # return the big pd and the search params
    derived_folder = exp_struct['folders']['derived']
    bout_pd_files = et.get_sgl_... | _____no_output_____ | MIT | notebooks/curate_bouts-s_b1253_21-plotly-ephys.ipynb | zekearneodo/ceciestunepipe |
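The cell above is truncated after `et.get_sgl_`; a minimal sketch of the loader it describes, where the `bout_auto.pickle` file pattern and the search-params pickle name are assumptions standing in for the project's own helpers:

```python
import os
import glob
import pickle
import pandas as pd

def read_session_auto_bouts_sketch(exp_struct):
    derived_folder = exp_struct['folders']['derived']
    # list all bout files of the session (naming scheme is an assumption)
    bout_pd_files = sorted(glob.glob(os.path.join(derived_folder, '*', 'bout_auto.pickle')))
    # read into a list of pandas dataframes and concatenate
    big_pd = pd.concat([pd.read_pickle(f) for f in bout_pd_files], ignore_index=True)
    # read the search parameters saved alongside the first file (assumption)
    with open(os.path.join(os.path.dirname(bout_pd_files[0]), 'bout_search_params.pickle'), 'rb') as fp:
        hparams = pickle.load(fp)
    return big_pd, hparams
```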
If it wasn't saved (which is a bad mistake), read the sampling rate from the first file in the session | def sample_rate_from_wav(wav_path):
    x, sample_rate = wavfile.read(wav_path)
    return sample_rate

if hparams['sample_rate'] is None:
    one_wav_path = bpd.loc[0, 'file']
    logger.info('Sample rate not saved in parameters dict, searching it in ' + one_wav_path)
    hparams['sample_rate'] = sample_rate_from_wav(... | _____no_output_____ | MIT | notebooks/curate_bouts-s_b1253_21-plotly-ephys.ipynb | zekearneodo/ceciestunepipe |
compute the spectrograms | bout_pd['spectrogram'] = bout_pd['waveform'].apply(lambda x: bs.gimmepower(x, hparams)[2])
logger.info('saving bout pandas with spectrogram to ' + sess_bouts_file)
bout_pd.to_pickle(sess_bouts_file)
bout_pd.head(2)
bout_pd['file'][0] | _____no_output_____ | MIT | notebooks/curate_bouts-s_b1253_21-plotly-ephys.ipynb | zekearneodo/ceciestunepipe |
Inspect the bouts and curate them. Visualize one bout. | bout_pd.iloc[0]
import plotly.express as px
import plotly.graph_objects as go
from ipywidgets import widgets
def viz_one_bout(df: pd.Series, sub_sample=1):
    # get the power and the spectrogram
    sxx = df['spectrogram'][:, ::sub_sample]
    x = df['waveform'][::sub_sample]
    # the trace
    tr_waveform = go... | _____no_output_____ | MIT | notebooks/curate_bouts-s_b1253_21-plotly-ephys.ipynb | zekearneodo/ceciestunepipe |
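The trace construction is cut off at `tr_waveform = go...`; a hedged completion (the function name and layout details are assumptions) that stacks the spectrogram heatmap over the waveform trace:

```python
from plotly.subplots import make_subplots
import plotly.graph_objects as go

def viz_one_bout_sketch(df, sub_sample=1):
    sxx = df['spectrogram'][:, ::sub_sample]
    x = df['waveform'][::sub_sample]
    # one trace per panel: waveform as a line, spectrogram as a heatmap
    tr_waveform = go.Scatter(y=x, line={'width': 1})
    tr_spectrogram = go.Heatmap(z=sxx, colorscale='Viridis', showscale=False)
    fig = make_subplots(rows=2, cols=1)
    fig.add_trace(tr_spectrogram, row=1, col=1)
    fig.add_trace(tr_waveform, row=2, col=1)
    fig.update_layout(height=400, showlegend=False)
    return fig
```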
Use it in a widget. Add a 'confusing' label, for not/sure/mixed. We want to avoid having things we are not sure of in the training dataset. | bout_pd.reset_index(drop=True, inplace=True)
## Set confusing by default, will only be False once asserted bout/or not
bout_pd['confusing'] = True
bout_pd['bout_check'] = False
### Create a counter object (count goes 1:1 to DataFrame index)
from traitlets import CInt, link
class Counter(widgets.DOMWidget):
    value =... | _____no_output_____ | MIT | notebooks/curate_bouts-s_b1253_21-plotly-ephys.ipynb | zekearneodo/ceciestunepipe |
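The `Counter` definition is truncated at `value =`; a minimal sketch of the idea, an integer trait that a button mutates to step through the bout DataFrame index (the button wiring is an assumption):

```python
from traitlets import CInt
import ipywidgets as widgets

class Counter(widgets.DOMWidget):
    # integer trait that tracks the current bout index
    value = CInt(0).tag(sync=True)

counter = Counter()
button_next = widgets.Button(description='next')
# advance the counter by one on every click
button_next.on_click(lambda b: setattr(counter, 'value', counter.value + 1))
```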
save it | hparams
### get the curated file path
##save to the curated file path
viz_bout.bouts_pd.to_pickle(sess_bouts_curated_file)
logger.info('saved curated bout pandas to pickle {}'.format(sess_bouts_curated_file))
viz_bout.bouts_pd['file'][0]
viz_bout.bouts_pd.head(5) | _____no_output_____ | MIT | notebooks/curate_bouts-s_b1253_21-plotly-ephys.ipynb | zekearneodo/ceciestunepipe |
Simple CNN on dataset | import numpy as np
import pandas as pd
import keras
import matplotlib.pyplot as plt
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, Model
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, ... | _____no_output_____ | MIT | SimpleCNN-ShreyaRESNET50.ipynb | Daniel-Wu/HydraML |
Leaflet cluster map of talk locations. Run this from the _talks/ directory, which contains .md files of all your talks. This scrapes the location YAML field from each .md file, geolocates it with geopy/Nominatim, and uses the getorg library to output data, HTML, and Javascript for a standalone cluster map. | !pip install getorg --upgrade
import glob
import getorg
from geopy import Nominatim
g = glob.glob("*.md")
geocoder = Nominatim()
location_dict = {}
location = ""
permalink = ""
title = ""
for file in g:
    with open(file, 'r') as f:
        lines = f.read()
    if lines.find('location: "') > 1:
        loc_st... | _____no_output_____ | MIT | .ipynb_checkpoints/talkmap-checkpoint.ipynb | jialeishen/academicpages.github.io |
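The loop is truncated at `loc_st...`; a hedged sketch of the remaining steps, parsing the location string, geocoding it, and handing the dict to getorg's orgmap helpers (the output folder name is an assumption):

```python
for file in g:
    with open(file, 'r') as f:
        lines = f.read()
    if lines.find('location: "') > 1:
        loc_start = lines.find('location: "') + len('location: "')
        loc_end = lines.find('"', loc_start)
        location = lines[loc_start:loc_end]
        # geopy's Nominatim geocoder resolves the free-text location string
        location_dict[location] = geocoder.geocode(location)

m = getorg.orgmap.create_map_obj()
getorg.orgmap.output_html_cluster_map(location_dict, folder_name="talkmap", hashed_usernames=False)
```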
Regression Analysis: Seasonal Effects with Sklearn Linear Regression. In this notebook, you will build a SKLearn linear regression model to predict Yen futures ("settle") returns with *lagged* Yen futures returns. | # Futures contract on the Yen-dollar exchange rate:
# This is the continuous chain of the futures contracts that are 1 month to expiration
yen_futures = pd.read_csv(
Path("./data/yen.csv"), index_col="Date", infer_datetime_format=True, parse_dates=True
)
yen_futures.head()
# Trim the dataset to begin on January 1st... | _____no_output_____ | MIT | regression_analysis.ipynb | jonowens/a_yen_for_the_future |
Data Preparation: Returns | # Create a series using "Settle" price percentage returns, drop any NaNs, and check the results:
# (Make sure to multiply the pct_change() results by 100)
# In this case, you may have to replace inf, -inf values with np.nan
yen_futures['Return'] = yen_futures['Settle'].pct_change() * 100
yen_futures = yen_futures.r... | _____no_output_____ | MIT | regression_analysis.ipynb | jonowens/a_yen_for_the_future |
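The truncated line most likely replaces infinities before dropping NaNs, per the comments above; a minimal version of that cleanup (the column subset is an assumption):

```python
import numpy as np

# pct_change produces inf on zero denominators; convert to NaN, then drop
yen_futures = yen_futures.replace([np.inf, -np.inf], np.nan).dropna(subset=['Return'])
yen_futures['Return'].head()
```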
Lagged Returns | # Create a lagged return using the shift function
yen_futures['Lagged_Return'] = yen_futures['Return'].shift()
yen_futures = yen_futures.dropna()
yen_futures.tail() | _____no_output_____ | MIT | regression_analysis.ipynb | jonowens/a_yen_for_the_future |
Train Test Split | # Create a train/test split for the data using 2018-2019 for testing and the rest for training
train = yen_futures[:'2017']
test = yen_futures['2018':]
# Create four dataframes:
# X_train (training set using just the independent variables), X_test (test set of just the independent variables)
# Y_train (training set usi... | _____no_output_____ | MIT | regression_analysis.ipynb | jonowens/a_yen_for_the_future |
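The cell is cut off while describing the four frames; a minimal sketch with `Lagged_Return` as the single feature:

```python
X_train = train['Lagged_Return'].to_frame()
X_test = test['Lagged_Return'].to_frame()
y_train = train['Return']
y_test = test['Return']
```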
Linear Regression Model | # Create a Linear Regression model and fit it to the training data
from sklearn.linear_model import LinearRegression
# Fit a SKLearn linear regression using just the training set (X_train, Y_train):
model = LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False)
model.fit(X_train, y_train)
| _____no_output_____ | MIT | regression_analysis.ipynb | jonowens/a_yen_for_the_future |
Make predictions using the Testing Data. Note: We want to evaluate the model using data that it has never seen before, in this case: X_test. | # Make a prediction of "y" values using just the test dataset
predictions = model.predict(X_test)
# Assemble actual y data (Y_test) with predicted y data (from just above) into two columns in a dataframe:
results = y_test.to_frame()
results['Predicted Return'] = predictions
results.head()
# Plot the first 20 predicti... | _____no_output_____ | MIT | regression_analysis.ipynb | jonowens/a_yen_for_the_future |
Out-of-Sample Performance. Evaluate the model using "out-of-sample" data (X_test and y_test) | from sklearn.metrics import mean_squared_error, r2_score
# Calculate the mean_squared_error (MSE) on actual versus predicted test "y"
mse = mean_squared_error(results['Return'], results['Predicted Return'])
# Using that mean-squared-error, calculate the root-mean-squared error (RMSE):
rmse = np.sqrt(mse)
print(f'Out-o... | Out-of-Sample Root Mean Squared Error (RMSE): 0.4154832784856737
| MIT | regression_analysis.ipynb | jonowens/a_yen_for_the_future |
In-Sample Performance. Evaluate the model using in-sample data (X_train and y_train) | # Construct a dataframe using just the "y" training data:
df_in_sample_results = y_train.to_frame()
# Add a column of "in-sample" predictions to that dataframe:
df_in_sample_results['In-Sample'] = model.predict(X_train)
# Calculate in-sample mean_squared_error (for comparison to out-of-sample)
mse = mean_squared_er... | In-Sample Root Mean Squared Error (RMSE): 0.5963660785073426
| MIT | regression_analysis.ipynb | jonowens/a_yen_for_the_future |
DAT210x - Programming with Python for DS, Module 5, Lab 6 | import random, math
import pandas as pd
import numpy as np
import scipy.io
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn import manifold
from sklearn.neighbors import KNeighborsClassifier
from mpl_toolkits.mplot3d import Axes3D
import matplotlib
import matplotl... | _____no_output_____ | MIT | Module5/Module5 - Lab6.ipynb | jacburge/pythonfordatascience |
A Convenience Function. This method is for your visualization convenience only. You aren't expected to know how to put this together yourself, although you should be able to follow the code by now: | def Plot2DBoundary(DTrain, LTrain, DTest, LTest):
    # The dots are training samples (img not drawn), and the pics are testing samples (images drawn)
    # Play around with the K values. This is a very controlled dataset so it should be able to get perfect classification on testing entries
    # Play with the K for isom... | _____no_output_____ | MIT | Module5/Module5 - Lab6.ipynb | jacburge/pythonfordatascience |
The Assignment. Use the same code from Module4/assignment4.ipynb to load up the `face_data.mat` file into a dataframe called `df`. Be sure to calculate the `num_pixels` value, and to rotate the images to being right-side-up instead of sideways. This was demonstrated in the [Lab Assignment 4](https://github.com/authman/... | # .. your code here ..
mat = scipy.io.loadmat('Datasets/face_data.mat')
df = pd.DataFrame(mat['images']).T
num_images, num_pixels = df.shape
num_pixels = int(math.sqrt(num_pixels))
# Rotate the pictures, so we don't have to crane our necks:
for i in range(num_images):
    df.loc[i,:] = df.loc[i,:].reshape(num_pixels, ... | _____no_output_____ | MIT | Module5/Module5 - Lab6.ipynb | jacburge/pythonfordatascience |
Load up your face_labels dataset. It only has a single column, and you're only interested in that single column. You will have to slice the column out so that you have access to it as a "Series" rather than as a "Dataframe". This was discussed in the "Slicin'" lecture of the "Manipulating Data" reading on the cou... | # .. your code here ..
face_labels = pd.read_csv('Datasets/face_labels.csv',header=None)
label = face_labels.iloc[:, 0]
len(label)
df.shape
df.head()
label.head() | _____no_output_____ | MIT | Module5/Module5 - Lab6.ipynb | jacburge/pythonfordatascience |
Do `train_test_split`. Use the same code as on the EdX platform in the reading material, but set the random_state=7 for reproducibility, and the test_size to 0.15 (15%). Your labels are actually passed in as a series (instead of as an NDArray) so that you can access their underlying indices later on. This is necessary... | # .. your code here ..
x_train, x_test, y_train, y_test = train_test_split(df, label, test_size=0.15, random_state=7) | _____no_output_____ | MIT | Module5/Module5 - Lab6.ipynb | jacburge/pythonfordatascience |
Dimensionality Reduction | if Test_PCA:
    # INFO: PCA is used *before* KNeighbors to simplify your high dimensionality
    # image samples down to just 2 principal components! A lot of information
    # (variance) is lost during the process, as I'm sure you can imagine. But
    # you have to drop the dimension down to two, otherwise you wouldn... | _____no_output_____ | MIT | Module5/Module5 - Lab6.ipynb | jacburge/pythonfordatascience |
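The cell is truncated; a hedged sketch of its two branches, where the Isomap hyperparameters are assumptions to tune:

```python
from sklearn.decomposition import PCA
from sklearn import manifold

Test_PCA = False  # flip to True to compare the linear projection

if Test_PCA:
    # linear projection down to 2 principal components
    reducer = PCA(n_components=2)
else:
    # non-linear manifold embedding; n_neighbors is a tuning knob
    reducer = manifold.Isomap(n_neighbors=5, n_components=2)

reducer.fit(x_train)
x_train_2d = reducer.transform(x_train)
x_test_2d = reducer.transform(x_test)
```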
Implement `KNeighborsClassifier` here. You can use any K value from 1 through 20, so play around with it and attempt to get good accuracy. Fit the classifier against your training data and labels. | # .. your code here ..
knn = KNeighborsClassifier(n_neighbors=3)
knn = knn.fit(x_train, y_train)
knn.score(x_test, y_test) | _____no_output_____ | MIT | Module5/Module5 - Lab6.ipynb | jacburge/pythonfordatascience |
Calculate and display the accuracy of the testing set (data_test and label_test): | # .. your code here ..
knn.score(x_test, y_test)
scores = pd.DataFrame(columns=['n_neighbors', 'model_score'])
type(scores); scores.dtypes; scores.shape; scores.head(3)
for i in range(1, 21): # try K value from 1 through 20 in an attempt to find good accuracy
    score = KNeighborsClassifier(n_neighbors=i).fit(x_train... | _____no_output_____ | MIT | Module5/Module5 - Lab6.ipynb | jacburge/pythonfordatascience |
Let's chart the combined decision boundary, the training data as 2D plots, and the testing data as small images so we can visually validate performance: | Plot2DBoundary(x_train, y_train, x_test, y_test) | _____no_output_____ | MIT | Module5/Module5 - Lab6.ipynb | jacburge/pythonfordatascience |
After submitting your answers, experiment with using PCA instead of ISOMap. Are the results what you expected? Also try tinkering around with the test/train split percentage from 10-20%. Notice anything? | # .. your code changes above .. | _____no_output_____ | MIT | Module5/Module5 - Lab6.ipynb | jacburge/pythonfordatascience |
PIC data | from astropy.constants import m_e, e, k_B
k = k_B.value
me = m_e.value
q = e.value
import numpy as np
import matplotlib.pyplot as plt
import json
%matplotlib notebook
from scipy.interpolate import interp1d
from math import ceil
plt.style.use("presentation")
with open("NewPic1D.dat", "r") as f:
dataPIC = json.load... | /home/tavant/these/code/venv/stand/lib64/python3.7/site-packages/matplotlib/figure.py:2144: UserWarning: This figure was using constrained_layout==True, but that is incompatible with subplots_adjust and or tight_layout: setting constrained_layout==False.
warnings.warn("This figure was using constrained_layout==True,... | Unlicense | src/Chapitre3/figure/Figure_HeatFlux.ipynb | antoinetavant/PhD_thesis_manuscript |
Heatflux from EVDF | k = "0"
Nprob = -1
x, y, phic, phi0, phi1 = returnxy(probnames[Nprob], k=k)
plt.figure(figsize=(4.5,4.5))
plt.subplots_adjust(left=0.17, bottom=0.17, right=0.99, top=0.925, wspace=0.05, hspace=0.25)
plt.plot(x,y)
Nprob = 1
x, y, phic, phi0, phi1 = returnxy(probnames[Nprob], k=k)
plt.plot(x,y)
plt.yscale("log")
plt.v... | _____no_output_____ | Unlicense | src/Chapitre3/figure/Figure_HeatFlux.ipynb | antoinetavant/PhD_thesis_manuscript |
Capture Spectrum. Below we will generate plots for both the real and simulated capture spectra. If you are re-generating the plots, please be patient, as both load large amounts of data. Measured Spectra. *(If you are not interested in the code itself, you can collapse it by selecting the cell and then clicking on the bar... | #Import libraries and settings
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
exec(open("../python/nb_setup.py").read())
from constants import *
import R68_load as r68
import R68_efficiencies as eff
meas=r68.load_measured()
import R68_spec_tools as s... | _____no_output_____ | MIT | 0-Analysis/Capture_Spectrum.ipynb | villano-lab/nrSiCap |
Overlaid histograms comparing the yielded energy PDFs for Sorensen and Lindhard models, including the resolution of the current detector (see `Calibration.ipynb`). The histograms are comprised of approximately simulated cascades. The orange (front) filled histogram represents the Lindhard model while the ... | #Import Libraries
import uproot
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as mpatch
plt.style.use('../mplstyles/stylelib/standard.mplstyle')
from matplotlib.lines import Line2D
from tabulate import tabulate
import sys
sys.path.append('../python')
import nc_kinematics as nck
import lin... | dict_keys(['xx', 'yy', 'ex', 'ey'])
| MIT | 0-Analysis/Capture_Spectrum.ipynb | villano-lab/nrSiCap |
Statistical exploration for Bayesian analysis of PhIP-seq | import pandas as pd
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
cpm = pd.read_csv('/Users/laserson/tmp/phip_analysis/phip-9/cpm.tsv', sep='\t', header=0, index_col=0)
upper_bound = sp.stats.scoreatpercentile(cpm.values.ravel(), 99.9)
upper_bound
fig, ax... | _____no_output_____ | Apache-2.0 | notebooks/phip_modeling/bayesian-modeling-stats.ipynb | lasersonlab/phip-stat |
Plot only the lowest 99.9% of the data | fig, ax = plt.subplots()
_ = ax.hist(cpm.values.ravel()[cpm.values.ravel() <= upper_bound], bins=range(100), log=False)
_ = ax.set(xlim=(0, 60))
_ = ax.set(title='trimmed cpm')
trimmed_cpm = cpm.values.ravel()[cpm.values.ravel() <= upper_bound]
trimmed_cpm.mean(), trimmed_cpm.std()
means = cpm.apply(lambda x: x[x <= up... | _____no_output_____ | Apache-2.0 | notebooks/phip_modeling/bayesian-modeling-stats.ipynb | lasersonlab/phip-stat |
Do the slices look Poisson? | a = np.random.poisson(8, 10000)
fig, ax = plt.subplots()
plot_hist(ax, a)
ax.set(xlim=(0, 50)) | _____no_output_____ | Apache-2.0 | notebooks/phip_modeling/bayesian-modeling-stats.ipynb | lasersonlab/phip-stat |
For the most part. Maybe try NegBin just in case. What does the distribution of the trimmed means look like? | fig, ax = plt.subplots()
plot_hist(ax, means)
ax.set(xlim=(0, 50))
a = np.random.gamma(1, 10, 10000)
fig, ax = plt.subplots()
plot_hist(ax, a)
ax.set(xlim=(0, 50))
means.mean() | _____no_output_____ | Apache-2.0 | notebooks/phip_modeling/bayesian-modeling-stats.ipynb | lasersonlab/phip-stat |
Following Anders and Huber, _Genome Biology_ 2010, compute some of their stats. Compute size factors | s = np.exp(np.median(np.log(cpm.values + 0.5) - np.log(cpm.values + 0.5).mean(axis=1).reshape((cpm.shape[0], 1)), axis=0))
_ = sns.distplot(s)
q = (cpm.values / s).mean(axis=1)
fig, ax = plt.subplots()
_ = ax.hist(q, bins=100, log=False)
fig, ax = plt.subplots()
_ = ax.hist(q, bins=100, log=True)
w = (cpm.values / s).s... | _____no_output_____ | Apache-2.0 | notebooks/phip_modeling/bayesian-modeling-stats.ipynb | lasersonlab/phip-stat |
Proceeding with the following strategy/model. Trim data to remove top 0.1% of count values. Compute mean of each row and use the means to fit a gamma distribution. Using these values, define a posterior on a rate for each clone, assuming Poisson stats for each cell. This means the posterior is also gamma distributed. ... | import pystan
cpm = pd.read_csv('/Users/laserson/tmp/phip_analysis/phip-9/cpm.tsv', sep='\t', header=0, index_col=0)
upper_bound = sp.stats.scoreatpercentile(cpm.values, 99.9)
trimmed_means = cpm.apply(lambda x: x[x <= upper_bound].mean(), axis=1, raw=True).values
brm = pystan.StanModel(model_name='background_rates', f... | _____no_output_____ | Apache-2.0 | notebooks/phip_modeling/bayesian-modeling-stats.ipynb | lasersonlab/phip-stat |
Can we use scipy's MLE for the gamma parameters instead? | sp.stats.gamma.fit(trimmed_means)
fig, ax = plt.subplots()
_ = ax.hist(sp.stats.gamma.rvs(a=0.3387, loc=0, scale=3.102, size=10000), bins=100)
_ = ax.set(xlim=(0, 50)) | _____no_output_____ | Apache-2.0 | notebooks/phip_modeling/bayesian-modeling-stats.ipynb | lasersonlab/phip-stat |
Hmmm...doesn't appear to get the correct solution. Alternatively, let's try optimizing the log likelihood ourselves | pos = trimmed_means > 0
n = len(trimmed_means)
s = trimmed_means[pos].sum()
sl = np.log(trimmed_means[pos]).sum()
def ll(x):
    return -1 * (n * x[0] * np.log(x[1]) - n * sp.special.gammaln(x[0]) + (x[0] - 1) * sl - x[1] * s)

param = sp.optimize.minimize(ll, np.asarray([2, 1]), bounds=[(np.nextafter(0, 1), None), (np.... | _____no_output_____ | Apache-2.0 | notebooks/phip_modeling/bayesian-modeling-stats.ipynb | lasersonlab/phip-stat |
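For reference, the objective coded above is the negative of the gamma log-likelihood with shape alpha (`x[0]`) and rate beta (`x[1]`), using `n`, `sl`, and `s` as defined in the cell:

```latex
\ell(\alpha, \beta) = n\,\alpha \log \beta - n \log \Gamma(\alpha) + (\alpha - 1) \sum_i \log x_i - \beta \sum_i x_i
```

so `sp.optimize.minimize` maximizes the log-likelihood by minimizing its negation, subject to positivity bounds on both parameters.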
SUCCESS! Do the p-values have a correlation with the peptide abundance? | mlxp = pd.read_csv('/Users/laserson/tmp/phip_analysis/sjogrens/mlxp.tsv', sep='\t', index_col=0, header=0)
inputs = pd.read_csv('/Users/laserson/repos/phage_libraries_private/human90/inputs/human90-larman1-input.tsv', sep='\t', index_col=0, header=0)
m = pd.merge(mlxp, inputs, left_index=True, right_index=True)
sample ... | _____no_output_____ | Apache-2.0 | notebooks/phip_modeling/bayesian-modeling-stats.ipynb | lasersonlab/phip-stat |
Intent Recognition with BERT using Keras and TensorFlow 2 | !nvidia-smi
!pip install tensorflow-gpu >> /dev/null
!pip install --upgrade grpcio >> /dev/null
!pip install tqdm >> /dev/null
!pip install bert-for-tf2 >> /dev/null
!pip install sentencepiece >> /dev/null
import os
import math
import datetime
from tqdm import tqdm
import pandas as pd
import numpy as np
import tens... | _____no_output_____ | MIT | 19_intent_classification.ipynb | evergreenllc2020/Deep-Learning-For-Hackers |
Data. The data contains various user queries categorized into seven intents. It is hosted on [GitHub](https://github.com/snipsco/nlu-benchmark/tree/master/2017-06-custom-intent-engines) and is first presented in [this paper](https://arxiv.org/abs/1805.10190). | !gdown --id 1OlcvGWReJMuyYQuOZm149vHWwPtlboR6 --output train.csv
!gdown --id 1Oi5cRlTybuIF2Fl5Bfsr-KkqrXrdt77w --output valid.csv
!gdown --id 1ep9H6-HvhB4utJRLVcLzieWNUSG3P_uF --output test.csv
train = pd.read_csv("train.csv")
valid = pd.read_csv("valid.csv")
test = pd.read_csv("test.csv")
train = train.append(valid).r... | _____no_output_____ | MIT | 19_intent_classification.ipynb | evergreenllc2020/Deep-Learning-For-Hackers |
Intent Recognition with BERT | !wget https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip
!unzip uncased_L-12_H-768_A-12.zip
os.makedirs("model", exist_ok=True)
!mv uncased_L-12_H-768_A-12/ model
bert_model_name="uncased_L-12_H-768_A-12"
bert_ckpt_dir = os.path.join("model/", bert_model_name)
bert_ckpt_file = os.path.jo... | _____no_output_____ | MIT | 19_intent_classification.ipynb | evergreenllc2020/Deep-Learning-For-Hackers |
Preprocessing | class IntentDetectionData:
    DATA_COLUMN = "text"
    LABEL_COLUMN = "intent"

    def __init__(self, train, test, tokenizer: FullTokenizer, classes, max_seq_len=192):
        self.tokenizer = tokenizer
        self.max_seq_len = 0
        self.classes = classes

        ((self.train_x, self.train_y), (self.test_x, self.test_y)) = map... | _____no_output_____ | MIT | 19_intent_classification.ipynb | evergreenllc2020/Deep-Learning-For-Hackers |
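The class body is truncated; a hedged sketch of the per-example preprocessing it performs with bert-for-tf2's `FullTokenizer` (wrapping with [CLS]/[SEP] and zero-padding to the fixed length):

```python
def prepare_example_sketch(text, tokenizer, max_seq_len):
    # wordpiece-tokenize and add BERT's special boundary tokens
    tokens = ['[CLS]'] + tokenizer.tokenize(text) + ['[SEP]']
    token_ids = tokenizer.convert_tokens_to_ids(tokens)
    # zero-pad (or cut) to the fixed sequence length
    token_ids = token_ids + [0] * (max_seq_len - len(token_ids))
    return token_ids[:max_seq_len]
```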
Training | classes = train.intent.unique().tolist()
data = IntentDetectionData(train, test, tokenizer, classes, max_seq_len=128)
data.train_x.shape
data.train_x[0]
data.train_y[0]
data.max_seq_len
model = create_model(data.max_seq_len, bert_ckpt_file)
model.summary()
model.compile(
optimizer=keras.optimizers.Adam(1e-5),
loss... | Train on 12405 samples, validate on 1379 samples
Epoch 1/5
5392/12405 [============>.................] - ETA: 2:51 - loss: 1.4535 - acc: 0.7361 | MIT | 19_intent_classification.ipynb | evergreenllc2020/Deep-Learning-For-Hackers |
Evaluation | %load_ext tensorboard
%tensorboard --logdir log
ax = plt.figure().gca()
ax.xaxis.set_major_locator(MaxNLocator(integer=True))
ax.plot(history.history['loss'])
ax.plot(history.history['val_loss'])
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['train', 'test'])
plt.title('Loss over training epochs')
plt.show();
ax ... | _____no_output_____ | MIT | 19_intent_classification.ipynb | evergreenllc2020/Deep-Learning-For-Hackers |
Which file types are there? 1. Connecting to the database. A connection to the Neo4j database is established. | import py2neo
graph = py2neo.Graph(bolt=True, host='localhost', user='neo4j', password='neo4j') | _____no_output_____ | Apache-2.0 | 6. Dateitypen.ipynb | softvis-research/BeLL |
2. Cypher query. A query is sent to the database. The result is stored in a pandas DataFrame. | import pandas as pd
query ="MATCH (f:Git:File) RETURN f.relativePath as relativePath"
df = pd.DataFrame(graph.run(query).data())
| _____no_output_____ | Apache-2.0 | 6. Dateitypen.ipynb | softvis-research/BeLL |
3. Data preparation. As a check, the first five rows of the query result are displayed as a table. | df.head() | _____no_output_____ | Apache-2.0 | 6. Dateitypen.ipynb | softvis-research/BeLL |
The following code section extracts the different file types from the file extensions and counts how often each occurs. The file types are stored in the variable datatype and their frequencies in the variable frequency. | # Extract the file types from the dataframe column.
datatypes = df['relativePath'].str.rsplit('.', 1).str[1]
# Count the file types and collect the counts in a Series object.
series = datatypes.value_counts()
# Create two lists from the Series object.
datatype = list(series.index)
frequency = list(series)... | [1383, 80, 41, 36, 21, 126]
['java', 'html', 'class', 'gif', 'txt', 'andere']
| Apache-2.0 | 6. Dateitypen.ipynb | softvis-research/BeLL |
4. Visualization. The data is visualized as a pie chart. | from IPython.display import display, HTML
base_html = """
<!DOCTYPE html>
<html>
<head>
<script type="text/javascript" src="http://kozea.github.com/pygal.js/javascripts/svg.jquery.js"></script>
<script type="text/javascript" src="https://kozea.github.io/pygal.js/2.0.x/pygal-tooltips.min.js""></script>
</head>
... | _____no_output_____ | Apache-2.0 | 6. Dateitypen.ipynb | softvis-research/BeLL |
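The plotting cell is truncated after the HTML template; a hedged sketch of the pygal pie chart it presumably builds from `datatype` and `frequency` (the output filename is an assumption):

```python
import pygal

pie_chart = pygal.Pie()
pie_chart.title = 'File types'
for dtype, freq in zip(datatype, frequency):
    pie_chart.add(dtype, freq)

# write a standalone SVG; embedding it inline depends on the truncated base_html template
pie_chart.render_to_file('dateitypen.svg')
```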
Introduction to the Interstellar Medium, Jonathan Williams. Figure 6.3: portion of the Galactic plane in 21cm continuum showing bremsstrahlung and synchrotron sources | import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
from astropy.io import fits
from astropy.wcs import WCS
from astropy.visualization import (ImageNormalize, SqrtStretch, LogStretch, AsinhStretch)
%matplotlib inline
fig = plt.figure(figsize=(14,7.5))
hdu = fits.open('g330to340.i.fits')
wcs1 = ... | -0.15332128 7.7325406
| CC0-1.0 | ionized/galactic_plane_continuum_21cm.ipynb | CambridgeUniversityPress/IntroductionInterstellarMedium |
TV Script Generation. In this project, you'll generate your own [Simpsons](https://en.wikipedia.org/wiki/The_Simpsons) TV scripts using RNNs. You'll be using part of the [Simpsons dataset](https://www.kaggle.com/wcukierski/the-simpsons-by-the-data) of scripts from 27 seasons. The Neural Network you'll build will gener... | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:] | _____no_output_____ | MIT | tv-script-generation/dlnd_tv_script_generation.ipynb | duozhanggithub/Deep-Learning |
Explore the Data. Play around with `view_sentence_range` to view different parts of the data. | view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [sc... | Dataset Stats
Roughly the number of unique words: 11492
Number of scenes: 262
Average number of sentences in each scene: 15.251908396946565
Number of lines: 4258
Average number of words in each line: 11.50164396430249
The sentences 0 to 10:
Moe_Szyslak: (INTO PHONE) Moe's Tavern. Where the elite meet to drink.
Bart_S... | MIT | tv-script-generation/dlnd_tv_script_generation.ipynb | duozhanggithub/Deep-Learning |
Implement Preprocessing Functions. The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below: Lookup Table, Tokenize Punctuation. Lookup Table: To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries: Dictiona... | import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    # TODO: Implement Function
    text = list(set(text))
    ... | Tests Passed
| MIT | tv-script-generation/dlnd_tv_script_generation.ipynb | duozhanggithub/Deep-Learning |
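The function body above is cut off after deduplicating the words; a minimal sketch of the remaining mapping step:

```python
def create_lookup_tables_sketch(text):
    vocab = set(text)
    # map each unique word to an id, and back
    vocab_to_int = {word: idx for idx, word in enumerate(vocab)}
    int_to_vocab = {idx: word for word, idx in vocab_to_int.items()}
    return vocab_to_int, int_to_vocab
```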
Tokenize Punctuation. We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function `token_lookup` to return a dict that will be used to toke... | def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
keys = ['.', ',', '"', ';', '!', '?', '(', ')', '--','\n']
values = ['||Period||','||Comma||','||Q... | Tests Passed
| MIT | tv-script-generation/dlnd_tv_script_generation.ipynb | duozhanggithub/Deep-Learning |
Preprocess all the data and save it. Running the code cell below will preprocess all the data and save it to file. | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables) | _____no_output_____ | MIT | tv-script-generation/dlnd_tv_script_generation.ipynb | duozhanggithub/Deep-Learning |
Check Point. This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() | _____no_output_____ | MIT | tv-script-generation/dlnd_tv_script_generation.ipynb | duozhanggithub/Deep-Learning |
Build the Neural Network. You'll build the components necessary to build a RNN by implementing the following functions below: get_inputs, get_init_cell, get_embed, build_rnn, build_nn, get_batches. Check the Version of TensorFlow and Access to GPU | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Che... | TensorFlow Version: 1.0.0
| MIT | tv-script-generation/dlnd_tv_script_generation.ipynb | duozhanggithub/Deep-Learning |
Input. Implement the `get_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders: an input text placeholder named "input" using the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder) `name` parameter, a targets placeholder, and a learning rate pl... | def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
Input = tf.placeholder(dtype=tf.int32, shape=[None, None], name='input')
Targets = tf.placeholder(dtype=tf.int32, shape=[None, No... | Tests Passed
| MIT | tv-script-generation/dlnd_tv_script_generation.ipynb | duozhanggithub/Deep-Learning |
Build RNN Cell and Initialize. Stack one or more [`BasicLSTMCells`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/BasicLSTMCell) in a [`MultiRNNCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/MultiRNNCell). The RNN size should be set using `rnn_size`. Initialize the Cell State using the MultiRNN... | def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
#rnn_layers = 2
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size... | Tests Passed
| MIT | tv-script-generation/dlnd_tv_script_generation.ipynb | duozhanggithub/Deep-Learning |
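The cell is truncated after the `BasicLSTMCell`; a hedged completion for TF 1.0 that wraps the cell in a `MultiRNNCell`, builds the zero state, and names it "initial_state" so it can be fetched later:

```python
import tensorflow as tf

def get_init_cell_sketch(batch_size, rnn_size):
    lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
    cell = tf.contrib.rnn.MultiRNNCell([lstm])
    initial_state = cell.zero_state(batch_size, tf.float32)
    # give the state a stable name so it can be retrieved after loading the graph
    initial_state = tf.identity(initial_state, name='initial_state')
    return cell, initial_state
```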
Word Embedding. Apply embedding to `input_data` using TensorFlow. Return the embedded sequence. | def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Fun... | Tests Passed
| MIT | tv-script-generation/dlnd_tv_script_generation.ipynb | duozhanggithub/Deep-Learning |
Build RNN. You created a RNN Cell in the `get_init_cell()` function. Time to use the cell to create a RNN. Build the RNN using [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn), and apply the name "final_state" to the final state using [`tf.identity()`](https://www.tensorflow.org/a... | def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
Outputs, Final_State = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
FinalState = tf.identity(F... | Tests Passed
| MIT | tv-script-generation/dlnd_tv_script_generation.ipynb | duozhanggithub/Deep-Learning |
Build the Neural Network. Apply the functions you implemented above to: apply embedding to `input_data` using your `get_embed(input_data, vocab_size, embed_dim)` function; build the RNN using `cell` and your `build_rnn(cell, inputs)` function; apply a fully connected layer with a linear activation and `vocab_size` as the... | def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logi... | Tests Passed
| MIT | tv-script-generation/dlnd_tv_script_generation.ipynb | duozhanggithub/Deep-Learning |
Batches. Implement `get_batches` to create batches of input and targets using `int_text`. The batches should be a Numpy array with the shape `(number of batches, 2, batch size, sequence length)`. Each batch contains two elements: The first element is a single batch of **input** with the shape `[batch size, sequence le... | def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
# TODO: Implement Func... | Tests Passed
| MIT | tv-script-generation/dlnd_tv_script_generation.ipynb | duozhanggithub/Deep-Learning |
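The body is truncated; a hedged sketch that reshapes the id sequence into `(number of batches, 2, batch size, sequence length)`, wrapping the final target back to the first word:

```python
import numpy as np

def get_batches_sketch(int_text, batch_size, seq_length):
    words_per_batch = batch_size * seq_length
    n_batches = len(int_text) // words_per_batch
    # drop the tail that does not fill a whole batch
    int_text = int_text[:n_batches * words_per_batch]
    xdata = np.array(int_text)
    # targets are the inputs shifted by one, wrapping to the first word
    ydata = np.array(int_text[1:] + [int_text[0]])
    x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, axis=1)
    y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, axis=1)
    return np.array(list(zip(x_batches, y_batches)))
```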
Neural Network Training Hyperparameters. Tune the following parameters: set `num_epochs` to the number of epochs, `batch_size` to the batch size, `rnn_size` to the size of the RNNs, `embed_dim` to the size of the embedding, `seq_length` to the length of sequence, and `learning_rate` to the learning... | # Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 156
# RNN Size
rnn_size = 600
# Embedding Dimension Size
embed_dim = 500
# Sequence Length
seq_length = 14
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 100
"""
DON'T MODIFY ANYTHING IN THIS CELL THA... | _____no_output_____ | MIT | tv-script-generation/dlnd_tv_script_generation.ipynb | duozhanggithub/Deep-Learning |
Build the Graph. Build the graph using the neural network you implemented. | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
    vocab_size = len(int_to_vocab)
    input_text, targets, lr = get_inputs()
    input_data_shape = tf.shape(input_text)
    cell, initial_state = get_init_cell(input_data_shape[0]... | _____no_output_____ | MIT | tv-script-generation/dlnd_tv_script_generation.ipynb | duozhanggithub/Deep-Learning |
Train. Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the [forums](https://discussions.udacity.com/) to see if anyone is having the same problem. | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())

    for epoch_i in range(num_epochs):
        state = sess.run(initial_state, {input_text: batches[0][0]})

        for batch_i... | Epoch 0 Batch 0/31 train_loss = 8.825
Epoch 3 Batch 7/31 train_loss = 5.159
Epoch 6 Batch 14/31 train_loss = 4.528
Epoch 9 Batch 21/31 train_loss = 4.046
Epoch 12 Batch 28/31 train_loss = 3.626
Epoch 16 Batch 4/31 train_loss = 3.317
Epoch 19 Batch 11/31 train_loss = 3.031
Epoch... | MIT | tv-script-generation/dlnd_tv_script_generation.ipynb | duozhanggithub/Deep-Learning |
Save Parameters. Save `seq_length` and `save_dir` for generating a new TV script. | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir)) | _____no_output_____ | MIT | tv-script-generation/dlnd_tv_script_generation.ipynb | duozhanggithub/Deep-Learning |
Checkpoint | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params() | _____no_output_____ | MIT | tv-script-generation/dlnd_tv_script_generation.ipynb | duozhanggithub/Deep-Learning |
Implement Generate Functions. Get Tensors. Get tensors from `loaded_graph` using the function [`get_tensor_by_name()`](https://www.tensorflow.org/api_docs/python/tf/Graph#get_tensor_by_name). Get the tensors using the following names: "input:0", "initial_state:0", "final_state:0", "probs:0". Return the tensors in the foll... | def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
with lo... | Tests Passed
| MIT | tv-script-generation/dlnd_tv_script_generation.ipynb | duozhanggithub/Deep-Learning |
Choose Word. Implement the `pick_word()` function to select the next word using `probabilities`. | def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
... | Tests Passed
| MIT | tv-script-generation/dlnd_tv_script_generation.ipynb | duozhanggithub/Deep-Learning |
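The body is truncated; a minimal sketch that samples from the predicted distribution instead of always taking the argmax, which keeps the generated text varied:

```python
import numpy as np

def pick_word_sketch(probabilities, int_to_vocab):
    # int_to_vocab keys are the ids 0..n-1, so a sampled index maps straight back
    idx = np.random.choice(len(int_to_vocab), p=probabilities)
    return int_to_vocab[idx]
```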
Generate TV Script. This will generate the TV script for you. Set `gen_length` to the length of TV script you want to generate. | gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_dir + '.meta')
    loa... | moe_szyslak: ah-ha, big mistake pal! hey moe, can you be the best book on you could never!
homer_simpson:(getting idea) but you're dea-d-d-dead.(three stooges scared sound)
grampa_simpson:(upbeat) i guess despite all sweet music, but then we pour it a beer at half something.
lenny_leonard: hey, homer. r.
homer_simpso... | MIT | tv-script-generation/dlnd_tv_script_generation.ipynb | duozhanggithub/Deep-Learning |
Model metrics before removing outliers | reg = LazyRegressor()
X = data.drop(columns = ["SalePrice"])
y = data["SalePrice"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state=42)
models, _ = reg.fit(X_train, X_test, y_train, y_test)
models | 100%|██████████| 43/43 [00:36<00:00, 1.19it/s]
| MIT | house_prices/analysis12.ipynb | randat9/House_Prices |
Removing outliers | nan_columns = {column: data[column].isna().sum() for column in data.columns if data[column].isna().sum() > 0}
nan_columns
data["PoolQC"].sample(10)
ordinal_common = ['ExterQual', 'ExterCond', 'BsmtQual', 'BsmtCond', 'HeatingQC',
'KitchenQual', 'FireplaceQu', 'GarageQual', 'PoolQC']
outlier_removed_dat... | _____no_output_____ | MIT | house_prices/analysis12.ipynb | randat9/House_Prices |
import pandas as pd
from google.colab import drive
drive.mount('/content/drive')
%pwd
%ls '/content/drive/My Drive/Machine Learning Final'
pos_muts = pd.read_csv('/content/drive/My Drive/Machine Learning Final/H77_metadata.csv')
freqs = pd.read_csv('/content/drive/My Drive/Machine Learning Final/HCV1a_TsMutFreq_195.csv... | _____no_output_____ | MIT | Big_Dreams.ipynb | Lore8614/Lore8614.github.io | |
Nested cross-validation. In this notebook, we show a pattern called **nested cross-validation** which should be used when you want to both evaluate a model and tune the model's hyperparameters. Cross-validation is a powerful tool to evaluate the statistical performance of a model. It is also used to select the best model fr... | from sklearn.datasets import load_breast_cancer
data, target = load_breast_cancer(return_X_y=True) | _____no_output_____ | CC-BY-4.0 | notebooks/cross_validation_nested.ipynb | nish2612/scikit-learn-mooc |
Now, we'll make a minimal example using the utility `GridSearchCV` to find the best parameters via cross-validation. | from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
param_grid = {"C": [0.1, 1, 10], "gamma": [.01, .1]}
model_to_tune = SVC()
search = GridSearchCV(estimator=model_to_tune, param_grid=param_grid,
n_jobs=2)
search.fit(data, target) | _____no_output_____ | CC-BY-4.0 | notebooks/cross_validation_nested.ipynb | nish2612/scikit-learn-mooc |
We recall that `GridSearchCV` will train a model with some specific parameters on a training set and evaluate it on a testing set. However, this evaluation is done via cross-validation using the `cv` parameter. This procedure is repeated for all possible combinations of parameters given in `param_grid`. The attribute `best_params... | print(f"The best parameter found are: {search.best_params_}")
We can now show the mean score obtained using the attribute `best_score_`. | print(f"The mean score in CV is: {search.best_score_:.3f}")
At this stage, one should be extremely careful using this score. The misinterpretation would be the following: since the score was computed on a test set, it could be considered our model's testing score. However, we should not forget that we used this score to pick up the best model. It means that we used knowledge from t... | from sklearn.model_selection import cross_val_score, KFold
# Declare the inner and outer cross-validation
inner_cv = KFold(n_splits=4, shuffle=True, random_state=0)
outer_cv = KFold(n_splits=4, shuffle=True, random_state=0)
# Inner cross-validation for parameter search
model = GridSearchCV(
    estimator=model_to_tun... | _____no_output_____ | CC-BY-4.0 | notebooks/cross_validation_nested.ipynb | nish2612/scikit-learn-mooc |
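The cell is cut off inside the `GridSearchCV` call; the missing outer loop is a single `cross_val_score` over the outer splits, e.g.:

```python
# score the tuned model on the outer cross-validation splits
test_score = cross_val_score(model, data, target, cv=outer_cv, n_jobs=2)
print(f"The mean score using nested cross-validation is: "
      f"{test_score.mean():.3f} +/- {test_score.std():.3f}")
```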
In the example above, the reported score is more trustworthy and should be close to production's expected statistical performance. We will illustrate the difference between the nested and non-nested cross-validation scores to show that the latter one will be too optimistic in practice. In this regard, we will repeat several t... | test_score_not_nested = []
test_score_nested = []
N_TRIALS = 20
for i in range(N_TRIALS):
    inner_cv = KFold(n_splits=4, shuffle=True, random_state=i)
    outer_cv = KFold(n_splits=4, shuffle=True, random_state=i)

    # Non-nested parameter search and scoring
    model = GridSearchCV(estimator=model_to_tune, param_... | _____no_output_____ | CC-BY-4.0 | notebooks/cross_validation_nested.ipynb | nish2612/scikit-learn-mooc |
We can merge the data together and make a box plot of the two strategies. | import pandas as pd
all_scores = {
"Not nested CV": test_score_not_nested,
"Nested CV": test_score_nested,
}
all_scores = pd.DataFrame(all_scores)
import matplotlib.pyplot as plt
color = {"whiskers": "black", "medians": "black", "caps": "black"}
all_scores.plot.box(color=color, vert=False)
plt.xlabel("Accurac... | _____no_output_____ | CC-BY-4.0 | notebooks/cross_validation_nested.ipynb | nish2612/scikit-learn-mooc |
 | (X_train,y_train),(X_test,y_test) = datasets.mnist.load_data()
X_train.shape
X_train = X_train.reshape(60000,28,28,1)
X_train.shape
X_train.shape
plt.imshow(X_train[0])
X_train = X_train/255
X_test = X_test/255
mnist_cnn = models.Sequential([
layers.Conv2D(filters=10, kernel_size=(5,5), activation='relu',input_shap... | _____no_output_____ | MIT | code/8_CNN_cifar10_mnist.ipynb | Akshatha-Jagadish/DL_topics |
EDA for Import and Export Trade Volumes: the binational trade relationship between Mexico and the United States | #import key libraries
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline | _____no_output_____ | MIT | jupyter-notebook/eda_import_exports.ipynb | NanceCA/binational-trade-volumes |
Dataset 1: General Imports from Mexico to the United States | imports = pd.read_csv("./data/usitc/total-imports-mx2us.csv")
## data to be read includes the customs value of the import and the year
imports.shape
imports.head()
#note that the customs_value and the dollar_amount are the same just different data types
list(imports.columns)
imports['imports'].describe()
imports['doll... | _____no_output_____ | MIT | jupyter-notebook/eda_import_exports.ipynb | NanceCA/binational-trade-volumes |
Dataset 2: Exports from US to Mexico | exports = pd.read_csv("./data/usitc/total-exports-us2mx.csv")
exports.shape
exports.head()
list(exports.columns)
exports['exports'].describe()
plt.scatter(exports["year"],exports['exports'],color="green")
plt.title('Exports from US to Mexico, Annual')
plt.xlabel('year')
plt.ylabel('FAS Value e11')
plt.show()
##generall... | _____no_output_____ | MIT | jupyter-notebook/eda_import_exports.ipynb | NanceCA/binational-trade-volumes |
Data preprocessing | # imports
year_var = list(imports['year'])
print(year_var)
dollar = list(imports["dollar_amount"])
print(dollar)
def pre_process(year, dollar):
print("[",year,",",dollar,"]",",")
pre_process(1996, 2) | _____no_output_____ | MIT | jupyter-notebook/eda_import_exports.ipynb | NanceCA/binational-trade-volumes |
Running descriptive statistics | # Pulling in descriptive statistics on IMPORTS
from scipy import stats
stats.describe(ytrain_pred)
imports['imports'].describe()
exports["exports"].describe() | _____no_output_____ | MIT | jupyter-notebook/eda_import_exports.ipynb | NanceCA/binational-trade-volumes |
1: Introduction To The Dataset | data = open('US_births_1994-2003_CDC_NCHS.csv','r').read().split('\n')
data[:10] | _____no_output_____ | MIT | Explore U.S. Births/Basics.ipynb | vipmunot/Data-Science-Projects |
2: Converting Data Into A List Of Lists | def read_csv(filename, header=False):
    final_list = []
    if header:
        # skip the header row
        read_data = open(filename, 'r').read().split('\n')[1:]
    else:
        read_data = open(filename, 'r').read().split('\n')
    for item in read_data:
        int_fields = []... | _____no_output_____ | MIT | Explore U.S. Births/Basics.ipynb | vipmunot/Data-Science-Projects |
3: Calculating Number Of Births Each Month | def month_births(data):
    births_per_month = {}
    for item in data:
        if item[1] in births_per_month.keys():
            births_per_month[item[1]] += item[4]
        else:
            births_per_month[item[1]] = item[4]
    return births_per_month

cdc_month_births = month_births(cdc_list)
cdc_month_births... | _____no_output_____ | MIT | Explore U.S. Births/Basics.ipynb | vipmunot/Data-Science-Projects |
5: Creating A More General Function | def calc_counts(data, column):
    birth = {}
    for item in data:
        if item[column] in birth.keys():
            birth[item[column]] += item[4]
        else:
            birth[item[column]] = item[4]
    return birth
cdc_year_births = calc_counts(cdc_list, 0)
cdc_month_births = calc_counts(cdc_list, 1)
cdc_dom_... | _____no_output_____ | MIT | Explore U.S. Births/Basics.ipynb | vipmunot/Data-Science-Projects |
Classes and Objects in Python. Welcome! Objects in programming are like objects in real life. Like life, there are different classes of objects. In this notebook, we will create two classes called Circle and Rectangle. By the end of this notebook, you will have a better idea about: what a class ... | # Import the library
import matplotlib.pyplot as plt
%matplotlib inline | _____no_output_____ | MIT | Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb | amitkrishna/IBM-DataScience |
The first step in creating your own class is to use the class keyword, then the name of the class, as shown in Figure 4. In this course the class parent will always be object. (Figure 4: Three instances of the class Circle, or three objects of type Circle.) The next step is a special method called a constructor __... | # Create a class Circle
class Circle(object):
    # Constructor
    def __init__(self, radius=3, color='blue'):
        self.radius = radius
        self.color = color

    # Method
    def add_radius(self, r):
        self.radius = self.radius + r
        return self.radius

    # Method
    def drawCi... | _____no_output_____ | MIT | Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb | amitkrishna/IBM-DataScience |
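The class is truncated at `drawCi...`; a hedged sketch of what a `drawCircle` method might look like using a matplotlib patch (the axis handling is an assumption):

```python
import matplotlib.pyplot as plt

class CircleSketch(object):
    def __init__(self, radius=3, color='blue'):
        self.radius = radius
        self.color = color

    def drawCircle(self):
        # draw the circle as a filled patch centered at the origin
        plt.gca().add_patch(plt.Circle((0, 0), radius=self.radius, fc=self.color))
        plt.axis('scaled')
        plt.show()
```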
Creating an instance of a class Circle. Let's create the object RedCircle of type Circle to do the following: | # Create an object RedCircle
RedCircle = Circle(10, 'red') | _____no_output_____ | MIT | Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb | amitkrishna/IBM-DataScience |
We can use the dir command to get a list of the object's methods. Many of them are default Python methods. | # Find out the methods can be used on the object RedCircle
dir(RedCircle) | _____no_output_____ | MIT | Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb | amitkrishna/IBM-DataScience |
We can look at the data attributes of the object: | # Print the object attribute radius
RedCircle.radius
# Print the object attribute color
RedCircle.color | _____no_output_____ | MIT | Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb | amitkrishna/IBM-DataScience |
We can change the object's data attributes: | # Set the object attribute radius
RedCircle.radius = 1
RedCircle.radius | _____no_output_____ | MIT | Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb | amitkrishna/IBM-DataScience |