Columns (string lengths): markdown (0–1.02M), code (0–832k), output (0–1.02M), license (3–36), path (6–265), repo_name (6–127)
Writing the final submission file to Kaggle output disk
submission_data.to_csv('submission.csv', index=False)
_____no_output_____
MIT
hm-recbole-ifs.ipynb
ManashJKonwar/Kaggle-HM-Recommender
Lambda School Data Science - Quantile Regression
Regressing towards the median - or any quantile - as a way to mitigate outliers and control risk.
Lecture
Let's look at data that has a bit of a skew to it: http://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
import statsmodels.formula.api as smf

df = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/'
                 '00381/PRSA_data_2010.1.1-2014.12.31.csv')
df.head()
df.describe()
df['pm2.5'].plot.hist();
np.l...
pm25 ~ No + year + month + day + hour + DEWP + TEMP + PRES + cbwd + Iws + Is + Ir
MIT
module3-quantile-regression/Copy_of_LS_DS1_233_Quantile_Regression.ipynb
quinn-dougherty/DS-Unit-2-Sprint-3-Advanced-Regression
That fit was to the median (q=0.5), also called "Least Absolute Deviation." The pseudo-R² isn't directly comparable to the R² from linear regression, but it clearly isn't dramatically improved. Can we make it better?
help(quant_mod.fit)
quantiles = (.05, .96, .1)
for quantile in quantiles:
    print(quant_mod.fit(q=quantile).summary())
QuantReg Regression Results ============================================================================== Dep. Variable: pm25 Pseudo R-squared: 0.04130 Model: QuantReg Bandwidth: 8.908 Meth...
MIT
module3-quantile-regression/Copy_of_LS_DS1_233_Quantile_Regression.ipynb
quinn-dougherty/DS-Unit-2-Sprint-3-Advanced-Regression
"Strong multicollinearity", eh? In other words - maybe we shouldn't throw every variable in our formula. Let's hand-craft a smaller one, picking the features with the largest magnitude t-statistics for their coefficients. Let's also search for more quantile cutoffs to see what's most effective.
quant_formula = 'pm25 ~ DEWP + TEMP + Ir + hour + Iws'
quant_mod = smf.quantreg(quant_formula, data=df)
for quantile in range(50, 100):
    quantile /= 100
    quant_reg = quant_mod.fit(q=quantile)
    print((quantile, quant_reg.prsquared))

# Okay, this data seems *extremely* skewed
# Let's try logging
import numpy as np ...
_____no_output_____
MIT
module3-quantile-regression/Copy_of_LS_DS1_233_Quantile_Regression.ipynb
quinn-dougherty/DS-Unit-2-Sprint-3-Advanced-Regression
Overall - in this case, quantile regression is not *necessarily* superior to linear regression. But it does give us extra flexibility and another thing to tune - what center of the dependent variable we're actually fitting. The basic case of `q=0.5` (the median) minimizes the absolute value of residuals, whi...
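The claim about `q=0.5` can be checked numerically; a minimal sketch on synthetic skewed data (not the PM2.5 set), showing that the absolute-deviation loss is minimized at the sample median while the skew pulls the mean above it:

```python
import numpy as np

# Synthetic right-skewed sample
rng = np.random.default_rng(0)
y = rng.exponential(scale=10.0, size=1000)

def abs_loss(c, y):
    """Sum of absolute residuals |y - c| -- the q=0.5 quantile loss (up to a factor)."""
    return np.abs(y - c).sum()

# Grid-search the constant that minimizes the absolute loss
candidates = np.linspace(y.min(), y.max(), 2001)
losses = np.array([abs_loss(c, y) for c in candidates])
best = candidates[losses.argmin()]
# `best` lands at (approximately) the sample median, while squared loss
# would be minimized at the mean, which the skew pulls upward.
```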
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import LogisticRegression
import statsmodels.formula.api as smf

url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/00272/SkillCraft1_Dataset.csv'
df = pd.read_csv(url)
df.head()
df = df.re...
_____no_output_____
MIT
module3-quantile-regression/Copy_of_LS_DS1_233_Quantile_Regression.ipynb
quinn-dougherty/DS-Unit-2-Sprint-3-Advanced-Regression
Assignment - birth weight data
Birth weight is a situation where, while the data itself is actually fairly normal and symmetric, our main goal is actually *not* to model mean weight (via OLS), but rather to identify mothers at risk of having children below a certain "at-risk" threshold weight. Quantile regression gives ...
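Before fitting anything, the notion of an "at-risk" quantile can be made concrete; a minimal numpy sketch with made-up weights (not the Reed data) and a hypothetical 2500 g cutoff:

```python
import numpy as np

# Hypothetical illustration: the share of a sample below an "at-risk"
# threshold is just an empirical percentile of the weight distribution.
rng = np.random.default_rng(1)
weights = rng.normal(loc=3300, scale=500, size=10_000)  # grams, roughly symmetric
threshold = 2500  # a commonly cited low-birth-weight cutoff (assumption here)

frac_at_risk = np.mean(weights < threshold)
# Quantile regression at q = frac_at_risk targets exactly this boundary of the
# conditional distribution, rather than the conditional mean that OLS fits.
```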
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import LogisticRegression
import statsmodels.formula.api as smf
from scipy.stats import percentileofscore
from numpy.testing import assert_almost_equal

bwt_df = pd.read_csv('http://people.reed.edu/~jones...
_____no_output_____
MIT
module3-quantile-regression/Copy_of_LS_DS1_233_Quantile_Regression.ipynb
quinn-dougherty/DS-Unit-2-Sprint-3-Advanced-Regression
Gestation is our most reliable feature - its errors are consistently reasonable, and its p-value is consistently very small; adding gestation ** 2 improved the model. BMI is better - the model is better with BMI than without. Adding BMI made age_per_weight less effective - age_per_weight's error and p-value both wen...
'''
What characteristics of a mother indicate the highest likelihood of an at-risk (low weight) baby?
What can expectant mothers be told to help mitigate this risk?
'''
_____no_output_____
MIT
module3-quantile-regression/Copy_of_LS_DS1_233_Quantile_Regression.ipynb
quinn-dougherty/DS-Unit-2-Sprint-3-Advanced-Regression
Updating SFRDs: UV data
Thanks to improvements in observational facilities over the past few years, we have been able to compute the luminosity function more accurately. We now use these updated measurements of the luminosity function to update the values of the SFRDs. In the present notebook, we focus on UV luminosity functions, whic...
import numpy as np
import matplotlib.pyplot as plt
import astropy.constants as con
import astropy.units as u
from scipy.optimize import minimize as mz
from scipy.optimize import curve_fit as cft
import utils as utl
import os
_____no_output_____
MIT
Results/res1.ipynb
Jayshil/csfrd
We have already computed the SFRDs by using [this](https://github.com/Jayshil/csfrd/blob/main/sfrd_all.py) code -- here we only plot the results.
ppr_uv = np.array(['Khusanova_et_al._2020', 'Ono_et_al._2017', 'Viironen_et_al._2018',
                   'Finkelstein_et_al._2015', 'Bouwens_et_al._2021', 'Alavi_et_al._2016',
                   'Livermore_et_al._2017', 'Atek_et_al._2015', 'Parsa_et_al._2016',
                   'Hagen_et_al._2015', 'Moutard_et_al._2019', 'Pello_et_al._2018',
                   'Bhatawdekar_et_al._2018'])
co...
_____no_output_____
MIT
Results/res1.ipynb
Jayshil/csfrd
Note that, for most of the values, the SFRD is tightly constrained. We note again that in this calculation we have assumed that the Schechter function parameters are correlated (except at lower redshifts), with the correlation matrix taken from Bouwens et al. (2021). For the lowest redshifts ($z=0$ and $z=1$), w...
# Defining the best-fit SFRD
def psi_md(z):
    ab = (1+z)**2.7
    cd = ((1+z)/2.9)**5.6
    ef = 0.015*ab/(1+cd)
    return ef

# Calculating psi(z)
znew = np.linspace(0, 9, 1000)
psi1 = psi_md(znew)
psi2 = np.log10(psi1)

plt.figure(figsize=(16, 9))
# Plotting them
for i in range(len(ppr_uv)):
    zc_uv, zp, zn, lg_sf,...
_____no_output_____
MIT
Results/res1.ipynb
Jayshil/csfrd
It can readily be observed from the figure above that the best-fit function from Madau & Dickinson (2014) does not exactly match our computed SFRDs, which shows the need to correct for dust in these calculations. However, in the present work we are not going to apply dust corrections. We shall comput...
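The chi-square negative log-likelihood used for this kind of fit can be exercised on synthetic data; a minimal numpy sketch (a grid search standing in for `scipy.optimize.minimize`, and a toy one-parameter linear model rather than the SFRD data):

```python
import numpy as np

# Synthetic "measurements" drawn from a known one-parameter model y = a * x
rng = np.random.default_rng(2)
x = np.linspace(1, 10, 50)
y_err = 0.3 * np.ones_like(x)          # assumed per-point uncertainties
y = 2.5 * x + rng.normal(0, y_err)      # true slope a = 2.5

def neg_log_likelihood(a):
    """Gaussian -log L up to a constant: 0.5 * sum(((y - model)/sigma)**2)."""
    chi = (y - a * x) / y_err
    return 0.5 * np.sum(chi ** 2)

# Grid search for the minimizer (a stand-in for a proper optimizer)
a_grid = np.linspace(1.0, 4.0, 3001)
a_best = a_grid[np.argmin([neg_log_likelihood(a) for a in a_grid])]
```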
# New model
def psi_new(z, aa, bb, cc, dd):
    ab = (1+z)**bb
    cd = ((1+z)/cc)**dd
    ef = aa*ab/(1+cd)
    return ef

# Negative likelihood function
def min_log_likelihood(x):
    model = psi_new(zcen_uv, x[0], x[1], x[2], x[3])
    chi2 = (sfrd_uv - model)/sfrd_uv_err
    chi22 = np.sum(chi2**2)
    yy = 0.5*chi...
_____no_output_____
MIT
Results/res1.ipynb
Jayshil/csfrd
So, the fit converged; that's good! Let's see how this new fit looks...
best_fit_fun = psi_new(znew, *soln.x)
log_best_fit = np.log10(best_fit_fun)

plt.figure(figsize=(16, 9))
plt.errorbar(zcen_uv, log_sfrd_uv, xerr=[zup, zdo], yerr=log_sfrd_uv_err, fmt='o', c='cornflowerblue')
plt.plot(znew, log_best_fit, label='Best fitted function', lw=2, c='orangered')
plt.xlabel('Redshift')
plt.ylabel...
_____no_output_____
MIT
Results/res1.ipynb
Jayshil/csfrd
That sounds about right. Here, the fitted function would be
$$ \psi(z) = 0.006\,\frac{(1+z)^{1.37}}{1 + [(1+z)/4.95]^{5.22}}\ M_\odot\,\mathrm{yr}^{-1}\,\mathrm{Mpc}^{-3} $$
One note, though: some points in the plot have large error bars. Those are from Hagen et al. (2015). From the quick...
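The quoted best fit can be sanity-checked directly; a minimal sketch using the same functional form with the parameters copied from the expression above:

```python
import numpy as np

def psi_fit(z):
    """Fitted SFRD in M_sun / yr / Mpc^3, parameters as quoted in the text."""
    return 0.006 * (1 + z) ** 1.37 / (1 + ((1 + z) / 4.95) ** 5.22)

# Sanity checks: the SFRD rises from z=0, peaks at intermediate redshift,
# and declines toward high z, as in Madau & Dickinson-style fits.
z = np.linspace(0, 9, 1000)
z_peak = z[np.argmax(psi_fit(z))]
```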
# Loading new data
sfrd1, sfrd_err1 = np.array([]), np.array([])
log_sfrd1, log_sfrd_err1 = np.array([]), np.array([])
zcen1, zdo1, zup1 = np.array([]), np.array([]), np.array([])

for i in range(len(ppr_uv1)):
    if ppr_uv1[i] != 'Hagen_et_al._2015':
        sfrd1 = np.hstack((sfrd1, sfrd_uv[i]))
        sfrd_err1 = ...
_____no_output_____
MIT
Results/res1.ipynb
Jayshil/csfrd
Why do we care?
Supply chains have grown to span the globe.
Multiple functions are now combined (warehousing, inventory, transportation, demand planning and procurement).
Runs within and across a firm.
"Bridge" & "Shock absorber" - from customers to suppliers.
Adapt, adjust, and be flexible.
Predicting the future is hard. Using...
!wget https://www.eia.gov/petroleum/gasdiesel/xls/pswrgvwall.xls -O ./data/pswrgvwall.xls

import pandas as pd

diesel = pd.read_excel("./data/pswrgvwall.xls", sheet_name=None)
diesel.keys()
diesel['Data 1']

%matplotlib inline
no_head = diesel['Data 1']
no_head.columns = no_head.iloc[1]
no_head = no_head.iloc[2:]
no_head...
_____no_output_____
MIT
SC0x/Unit 1 - Supply Chain Management Overview.ipynb
fhk/MITx_CTL_SCx
Question: What is the impact of such variability on a supply chain?
If price were fixed you could design a supply chain to last 10 years. But it's not, so you need a "shock absorber": these uncertainties and types of factors need to be considered. By using a data-driven, metrics-based approach organizations can purs...
from IPython.display import Image
from IPython.core.display import HTML

Image(url="https://www2.deloitte.com/content/dam/insights/us/articles/5024_Economics-Spotlight-July2019/figures/5024_Fig2.jpg")
_____no_output_____
MIT
SC0x/Unit 1 - Supply Chain Management Overview.ipynb
fhk/MITx_CTL_SCx
Earth Engine Python API Colab Setup
This notebook demonstrates how to set up the Earth Engine Python API in Colab and provides several examples of how to print and visualize Earth Engine processed data.
Import API and get credentials
The Earth Engine API is installed by defa...
import ee
_____no_output_____
CC0-1.0
ee_api_colab_setup.ipynb
pahdsn/SENSE_2020_GEE
Authenticate and initialize
Run the `ee.Authenticate` function to authenticate your access to Earth Engine servers and `ee.Initialize` to initialize it. Upon running the following cell you'll be asked to grant Earth Engine access to your Google account. Follow the instructions printed to the cell.
# Trigger the authentication flow.
ee.Authenticate()

# Initialize the library.
ee.Initialize()
_____no_output_____
CC0-1.0
ee_api_colab_setup.ipynb
pahdsn/SENSE_2020_GEE
Test the API
Test the API by printing the elevation of Mount Everest.
# Print the elevation of Mount Everest.
dem = ee.Image('USGS/SRTMGL1_003')
xy = ee.Geometry.Point([86.9250, 27.9881])
elev = dem.sample(xy, 30).first().get('elevation').getInfo()
print('Mount Everest elevation (m):', elev)
_____no_output_____
CC0-1.0
ee_api_colab_setup.ipynb
pahdsn/SENSE_2020_GEE
Map visualization
`ee.Image` objects can be displayed in notebook output cells. The following two examples demonstrate displaying a static image and an interactive map.
Static image
The `IPython.display` module contains the `Image` function, which can display the results of a URL representing an image generated from a ca...
# Import the Image function from the IPython.display module.
from IPython.display import Image

# Display a thumbnail of global elevation.
Image(url=dem.updateMask(dem.gt(0))
      .getThumbURL({'min': 0, 'max': 4000, 'dimensions': 512,
                    'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']}))
_____no_output_____
CC0-1.0
ee_api_colab_setup.ipynb
pahdsn/SENSE_2020_GEE
Interactive map
The [`folium`](https://python-visualization.github.io/folium/) library can be used to display `ee.Image` objects on an interactive [Leaflet](https://leafletjs.com/) map. Folium has no default method for handling tiles from Earth Engine, so one must be defined and added to the `folium.Map` module before use....
# Import the Folium library.
import folium

# Define a method for displaying Earth Engine image tiles on a folium map.
def add_ee_layer(self, ee_image_object, vis_params, name):
    map_id_dict = ee.Image(ee_image_object).getMapId(vis_params)
    folium.raster_layers.TileLayer(
        tiles=map_id_dict['tile_fetcher'].url_for...
_____no_output_____
CC0-1.0
ee_api_colab_setup.ipynb
pahdsn/SENSE_2020_GEE
Chart visualization
Some Earth Engine functions produce tabular data that can be plotted by data visualization packages such as `matplotlib`. The following example demonstrates the display of tabular data from Earth Engine as a scatter plot. See [Charting in Colaboratory](https://colab.sandbox.google.com/notebooks/charts....
# Import the matplotlib.pyplot module.
import matplotlib.pyplot as plt

# Fetch a Landsat image.
img = ee.Image('LANDSAT/LT05/C01/T1_SR/LT05_034033_20000913')

# Select Red and NIR bands, scale them, and sample 500 points.
samp_fc = img.select(['B3','B4']).divide(10000).sample(scale=30, numPixels=500)

# Arrange the sa...
_____no_output_____
CC0-1.0
ee_api_colab_setup.ipynb
pahdsn/SENSE_2020_GEE
Creating your own dataset from Google Images
*by: Francisco Ingham and Jeremy Howard. Inspired by [Adrian Rosebrock](https://www.pyimagesearch.com/2017/12/04/how-to-create-a-deep-learning-dataset-using-google-images/)*
In this tutorial we will see how to easily create an image dataset through Google Images. **Note**: Y...
from fastai.vision import *
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Get a list of URLs
Search and scroll
Go to [Google Images](http://images.google.com) and search for the images you are interested in. The more specific you are in your Google search, the better the results and the less manual pruning you will have to do. Scroll down until you've seen all the images you want to downloa...
folder = 'black'
file = 'urls_black.csv'

folder = 'teddys'
file = 'urls_teddys.csv'

folder = 'grizzly'
file = 'urls_grizzly.csv'
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
You will need to run this cell once per category.
path = Path('data/bears')
dest = path/folder
dest.mkdir(parents=True, exist_ok=True)
path.ls()
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Finally, upload your urls file. You just need to press 'Upload' in your working directory and select your file, then click 'Upload' for each of the displayed files.
![uploaded file](images/download_images/upload.png)
Download images
Now you will need to download your images from their respective urls. fast.ai has a func...
classes = ['teddys', 'grizzly', 'black']

download_images(path/file, dest, max_pics=200)

# If you have problems downloading, try `max_workers=0` to see exceptions:
download_images(path/file, dest, max_pics=20, max_workers=0)
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Then we can remove any images that can't be opened:
for c in classes:
    print(c)
    verify_images(path/c, delete=True, max_size=500)
teddys
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
View data
np.random.seed(42)
data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.2,
                                  ds_tfms=get_transforms(), size=224,
                                  num_workers=4).normalize(imagenet_stats)

# If you already cleaned your data, run this cell instead of the one before
# np.random.seed(42)
# data = ImageDataBunch.from_csv(path, folder=".", va...
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Good! Let's take a look at some of our pictures then.
data.classes
data.show_batch(rows=3, figsize=(7,8))
data.classes, data.c, len(data.train_ds), len(data.valid_ds)
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Train model
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(4)
learn.save('stage-1')
learn.unfreeze()
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(2, max_lr=slice(3e-5,3e-4))
learn.save('stage-2')
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Interpretation
learn.load('stage-2');
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Cleaning Up
Some of our top losses aren't due to bad performance by our model. There are images in our data set that shouldn't be there. Using the `ImageCleaner` widget from `fastai.widgets` we can prune our top losses, removing photos that don't belong.
from fastai.widgets import *
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
First we need to get the file paths from our top losses. We can do this with `.from_toplosses`. We then feed the top-loss indexes and the corresponding dataset to `ImageCleaner`. Notice that the widget will not delete images directly from disk; it will create a new csv file `cleaned.csv` from which you can create a new...
db = (ImageList.from_folder(path)
          .no_split()
          .label_from_folder()
          .transform(get_transforms(), size=224)
          .databunch())

# If you already cleaned your data using indexes from `from_toplosses`,
# run this cell instead of the one before to p...
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Then we create a new learner to use our new databunch with all the images.
learn_cln = cnn_learner(db, models.resnet34, metrics=error_rate)
learn_cln.load('stage-2');
ds, idxs = DatasetFormatter().from_toplosses(learn_cln)
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Make sure you're running this notebook in Jupyter Notebook, not Jupyter Lab. That is accessible via [/tree](/tree), not [/lab](/lab). Running the `ImageCleaner` widget in Jupyter Lab is [not currently supported](https://github.com/fastai/fastai/issues/1539).
ImageCleaner(ds, idxs, path)
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Flag photos for deletion by clicking 'Delete'. Then click 'Next Batch' to delete the flagged photos and keep the rest in that row. `ImageCleaner` will show you a new row of images until there are no more to show. In this case, the widget will show you images until there are none left from `top_losses`. ImageCleaner(ds, idxs)...
ds, idxs = DatasetFormatter().from_similars(learn_cln)
ImageCleaner(ds, idxs, path, duplicates=True)
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Remember to recreate your `ImageDataBunch` from your `cleaned.csv` to include the changes you made to your data!
Putting your model in production
First things first, let's export the content of our `Learner` object for production:
learn.export()
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
This will create a file named 'export.pkl' in the directory where we were working. It contains everything we need to deploy our model (the model, the weights, and also some metadata like the classes and the transforms/normalization used). You probably want to use CPU for inference, except at massive scale (and you almos...
defaults.device = torch.device('cpu')
img = open_image(path/'black'/'00000021.jpg')
img
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
We create our `Learner` in the production environment like this; just make sure that `path` contains the file 'export.pkl' from before.
learn = load_learner(path)
pred_class, pred_idx, outputs = learn.predict(img)
pred_class
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
So you might create a route something like this ([thanks](https://github.com/simonw/cougar-or-not) to Simon Willison for the structure of this code):
```python
@app.route("/classify-url", methods=["GET"])
async def classify_url(request):
    bytes = await get_bytes(request.query_params["url"])
    img = open_image(BytesIO(b...
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(1, max_lr=0.5)
Total time: 00:13
epoch  train_loss  valid_loss          error_rate
1      12.220007   1144188288.000000   0.765957   (00:13)
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Learning rate (LR) too low
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Previously we had this result:
```
Total time: 00:57
epoch  train_loss  valid_loss  error_rate
1      1.030236    0.179226    0.028369   (00:14)
2      0.561508    0.055464    0.014184   (00:13)
3      0.396103    0.053801    0.014184   (00:13)
4      0.316883    0.050197    0.021277   (00:15)
```
learn.fit_one_cycle(5, max_lr=1e-5)
learn.recorder.plot_losses()
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
As well as taking a really long time, it's getting too many looks at each image, so it may overfit.
Too few epochs
learn = cnn_learner(data, models.resnet34, metrics=error_rate, pretrained=False)
learn.fit_one_cycle(1)
Total time: 00:14
epoch  train_loss  valid_loss  error_rate
1      0.602823    0.119616    0.049645   (00:14)
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Too many epochs
np.random.seed(42)
data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.9, bs=32,
                                  ds_tfms=get_transforms(do_flip=False, max_rotate=0, max_zoom=1,
                                                        max_lighting=0, max_warp=0),
                                  size=224, num_workers=4).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet50, me...
Total time: 06:39
epoch  train_loss  valid_loss  error_rate
1      1.513021    1.041628    0.507326   (00:13)
2      1.290093    0.994758    0.443223   (00:09)
3      1.185764    0.936145    0.410256   (00:09)
4      1.117229    0.838402    0.322344   (00:09)
5      1.022635    0.734872    0.252747   (00:09)
6      ...
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Observations and Insights
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as stats
import numpy as np
from scipy.stats import linregress

# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"

# Read the mouse data and the study results
mo...
_____no_output_____
ADSL
Pymaceuticals/.ipynb_checkpoints/keely_pymaceuticals_starter-checkpoint.ipynb
keelywright1/matplotlib-challenge
Summary Statistics
clean

# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume
# for each regimen
grp = clean.groupby('Drug Regimen')['Tumor Volume (mm3)']
pd.DataFrame({'mean': grp.mean(), 'median': grp.median(), 'var': grp.var(), 'std': grp.std(), 'sem': grp.sem()})

# Generate a summa...
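The SEM column in the summary is the only non-obvious aggregate; a minimal numpy check on made-up tumor volumes (not the study data), using the same ddof=1 convention as pandas' `.sem()`:

```python
import numpy as np

# Standard error of the mean: sample std (ddof=1) over sqrt(n).
vols = np.array([45.0, 46.5, 44.2, 47.1, 45.8])  # hypothetical values
sem = vols.std(ddof=1) / np.sqrt(len(vols))
```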
_____no_output_____
ADSL
Pymaceuticals/.ipynb_checkpoints/keely_pymaceuticals_starter-checkpoint.ipynb
keelywright1/matplotlib-challenge
Bar and Pie Charts
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
# plot the mouse counts for each drug using pandas
plt.figure(figsize=[15,6])
measurements = clean.groupby('Drug Regimen').Sex.count()
measurements.plot(kind='bar', rot=45, title='Total Measurements per Drug')
plt.yla...
_____no_output_____
ADSL
Pymaceuticals/.ipynb_checkpoints/keely_pymaceuticals_starter-checkpoint.ipynb
keelywright1/matplotlib-challenge
Quartiles, Outliers and Boxplots
# Reset index so drug regimen column persists after inner merge
# Start by getting the last (greatest) timepoint for each mouse
timemax = clean.groupby('Mouse ID').max().Timepoint.reset_index()

# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
tumormax = timemax.merge(clea...
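The outlier rule behind the boxplots can be sketched with numpy's percentile: values beyond 1.5·IQR from the quartiles are flagged. Made-up values, not the study data:

```python
import numpy as np

# Standard 1.5*IQR outlier fences on hypothetical final tumor volumes.
vals = np.array([30.0, 32.1, 33.5, 35.0, 36.2, 38.4, 40.1, 62.0])
q1, q3 = np.percentile(vals, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = vals[(vals < lower) | (vals > upper)]  # flags the 62.0 point
```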
_____no_output_____
ADSL
Pymaceuticals/.ipynb_checkpoints/keely_pymaceuticals_starter-checkpoint.ipynb
keelywright1/matplotlib-challenge
Line and Scatter Plots
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
# change index to mouse ID
# remove other mouse IDs so only s185 shows
# set the x-axis equal to the Timepoint and y-axis to Tumor Volume
plt.figure(figsize=[15,6])
clean[(clean['Drug Regimen']=='Capomulin')&(clean['Mouse ID']=='s18...
_____no_output_____
ADSL
Pymaceuticals/.ipynb_checkpoints/keely_pymaceuticals_starter-checkpoint.ipynb
keelywright1/matplotlib-challenge
Correlation and Regression
tumor_weight.head()

# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
# establish x and y values and find the Pearson correlation coefficient for mouse weight and average tumor volume
linear_corr = stats.pearsonr(tumor_weight.index, tumo...
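Pearson's r used here can be computed with numpy alone; a minimal sketch on made-up weight/volume pairs (not the study data), where a noisy linear relation should give a strongly positive r and a slope near the true value:

```python
import numpy as np

# Hypothetical noisy linear relation: volume ~ 2 * weight + noise
rng = np.random.default_rng(3)
weight = np.linspace(15, 25, 25)                 # grams
volume = 2.0 * weight + rng.normal(0, 1.0, 25)   # mm3

r = np.corrcoef(weight, volume)[0, 1]            # Pearson correlation
slope = np.polyfit(weight, volume, 1)[0]          # least-squares slope
```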
_____no_output_____
ADSL
Pymaceuticals/.ipynb_checkpoints/keely_pymaceuticals_starter-checkpoint.ipynb
keelywright1/matplotlib-challenge
Question Answering
Download and Prepare Data
!wget https://dl.fbaipublicfiles.com/MLQA/MLQA_V1.zip
!unzip MLQA_V1.zip
_____no_output_____
MIT
tasks/question_answering/Question_Answering.ipynb
ARBML/tkseem
Prepare Data
import json

def read_data(file_path, max_context_size=100):
    # Read dataset
    with open(file_path) as f:
        data = json.load(f)

    contexts = []
    questions = []
    answers = []
    labels = []

    for i in range(len(data['data'])):
        paragraph_object = data['data'][i]["paragraphs"]
        for j in range(len...
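The nested MLQA/SQuAD layout that `read_data` walks can be seen on a toy record; a minimal stdlib-only sketch (made-up context and question):

```python
import json

# Toy record in the SQuAD/MLQA layout: data -> paragraphs -> qas -> answers.
raw = json.loads("""{"data": [{"paragraphs": [{
    "context": "Paris is the capital of France.",
    "qas": [{"question": "What is the capital of France?",
             "answers": [{"text": "Paris", "answer_start": 0}]}]
}]}]}""")

# Collect (question, context, answer) triples the same way the loop above does.
pairs = []
for article in raw["data"]:
    for para in article["paragraphs"]:
        for qa in para["qas"]:
            pairs.append((qa["question"], para["context"], qa["answers"][0]["text"]))
```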
ما الذي جعل شريط الاختبار للطائرة؟ بحيرة جرووم كانت تستخدم للقصف المدفعي والتدريب علي المدفعية خلال الحرب العالمية الثانية، ولكن تم التخلي عنها بعد ذلك حتى نيسان / أبريل 1955، عندما تم اختياره من قبل فريق لوكهيد اسكنك كموقع مثالي لاختبار لوكهيد يو-2 - 2 طائرة التجسس. قاع البحيرة قدم الشريط المثالية التي يمكن عمل اختبار...
MIT
tasks/question_answering/Question_Answering.ipynb
ARBML/tkseem
Imports
import re
import nltk
import time
import numpy as np
import tkseem as tk
import tensorflow as tf
import matplotlib.ticker as ticker
import matplotlib.pyplot as plt
_____no_output_____
MIT
tasks/question_answering/Question_Answering.ipynb
ARBML/tkseem
Tokenization
qa_tokenizer = tk.WordTokenizer()
qa_tokenizer.train('train_questions.txt')
print('Vocab size ', qa_tokenizer.vocab_size)

cx_tokenizer = tk.WordTokenizer()
cx_tokenizer.train('train_contexts.txt')
print('Vocab size ', cx_tokenizer.vocab_size)

train_inp_data = qa_tokenizer.encode_sentences(train_data['qas'])
train_tar...
Training WordTokenizer ...
Vocab size 8883
Training WordTokenizer ...
Vocab size 10000
MIT
tasks/question_answering/Question_Answering.ipynb
ARBML/tkseem
Create Dataset
BATCH_SIZE = 64
BUFFER_SIZE = len(train_inp_data)

dataset = tf.data.Dataset.from_tensor_slices((train_inp_data, train_tar_data, train_tar_lbls)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
_____no_output_____
MIT
tasks/question_answering/Question_Answering.ipynb
ARBML/tkseem
Create Encoder and Decoder
class Encoder(tf.keras.Model):
    def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):
        super(Encoder, self).__init__()
        self.batch_sz = batch_sz
        self.enc_units = enc_units
        self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
        self.gru = tf.keras.lay...
_____no_output_____
MIT
tasks/question_answering/Question_Answering.ipynb
ARBML/tkseem
Training
units = 1024
embedding_dim = 256
max_length_inp = train_inp_data.shape[1]
max_length_tar = train_tar_data.shape[1]
vocab_tar_size = cx_tokenizer.vocab_size
vocab_inp_size = qa_tokenizer.vocab_size
steps_per_epoch = len(train_inp_data) // BATCH_SIZE

decoder = Decoder(vocab_tar_size, embedding_dim, units, max_length_tar)...
Epoch 0 loss: 4.386 Epoch 1 loss: 4.264 Epoch 2 loss: 4.238 Epoch 3 loss: 4.105 Epoch 4 loss: 3.932 Epoch 5 loss: 3.758 Epoch 6 loss: 3.643 Epoch 7 loss: 3.548 Epoch 8 loss: 3.456 Epoch 9 loss: 3.382 Epoch 10 loss: 3.285 Epoch 11 loss: 3.215 Epoch 12 loss: 3.141 Epoch 13 loss: 3.047 Epoch 14 loss: 2.916 Epoch 15 loss: ...
MIT
tasks/question_answering/Question_Answering.ipynb
ARBML/tkseem
Evaluation
def answer(question_txt, context_txt, answer_txt_tru):
    question = qa_tokenizer.encode_sentences([question_txt], out_length=max_length_inp)
    context = cx_tokenizer.encode_sentences([context_txt], out_length=max_length_tar)
    question = tf.convert_to_tensor(question)
    context = tf.convert_to_tensor(contex...
Question : في أي عام توفي وليام ؟ Context : توفي وليام في عام 1990 Pred Answer : 1990 True Answer : 1990 ====================== Question : ماهي عاصمة البحرين ؟ Context : عاصمة البحرين هي المنامة Pred Answer : المنامة True Answer : المنامة ====================== Question : في أي دولة ولد جون ؟ Context : ولد...
MIT
tasks/question_answering/Question_Answering.ipynb
ARBML/tkseem
Single gene name
geneinfo('USP4')
_____no_output_____
MIT
example.ipynb
kaspermunch/geneinfo
List of names
geneinfo(['LARS2', 'XCR1'])
_____no_output_____
MIT
example.ipynb
kaspermunch/geneinfo
Get all protein coding genes in a (hg38) region
for gene in mg.query('q=chr2:49500000-50000000 AND type_of_gene:protein-coding',
                     species='human', fetch_all=True):
    geneinfo(gene['symbol'])
Fetching 4 gene(s) . . .
MIT
example.ipynb
kaspermunch/geneinfo
Plot data over gene annotation
chrom, start, end = 'chr3', 49500000, 50600000
ax = geneplot(chrom, start, end, figsize=(10, 5))
ax.plot(np.linspace(start, end, 1000), np.random.random(1000), 'o');
mpld3.display()
geneinfo(['HYAL3', 'IFRD2'])
_____no_output_____
MIT
example.ipynb
kaspermunch/geneinfo
Convolutional Neural Networks
In this notebook we will implement a convolutional neural network. Rather than doing everything from scratch, we will make use of [TensorFlow 2](https://www.tensorflow.org/) and the [Keras](https://keras.io) high-level interface.
Installing TensorFlow and Keras
TensorFlow and Keras are not ...
import tensorflow as tf
_____no_output_____
MIT
Neural Networks/Convolutional Neural Networks.ipynb
PeterJamesNee/Examples
Creating a simple network with TensorFlow
We will start by creating a very simple fully connected feedforward network using TensorFlow/Keras. The network will mimic the one we implemented previously, but TensorFlow/Keras will take care of most of the details for us.
MNIST Dataset
First, let us load the MNIST digits data...
(x_train, y_train),(x_test, y_test) = tf.keras.datasets.mnist.load_data()
_____no_output_____
MIT
Neural Networks/Convolutional Neural Networks.ipynb
PeterJamesNee/Examples
The data comes as a set of integers in the range [0,255] representing the shade of gray of a given pixel. Let's first rescale them to be in the range [0,1]:
x_train, x_test = x_train / 255.0, x_test / 255.0
_____no_output_____
MIT
Neural Networks/Convolutional Neural Networks.ipynb
PeterJamesNee/Examples
Now we can build a neural network model using Keras. This uses a very simple high-level modular structure where we only have to specify the layers in our model and the properties of each layer. The layers we will have are as follows:
1. Input layer: This will be a 28x28 matrix of numbers.
2. `Flatten` layer: Convert our...
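The layer list determines the parameter counts, which is worth checking by hand; a minimal count with no TensorFlow required (assuming the `Dense(30)` hidden layer and `Dense(10)` output used below):

```python
# Flatten turns the 28x28 input into a 784-vector; a Dense layer with m inputs
# and n units carries m*n weights plus n biases.
flat = 28 * 28                      # 784 inputs after Flatten
hidden_params = flat * 30 + 30      # Dense(30): weights + biases
output_params = 30 * 10 + 10        # Dense(10): weights + biases
total = hidden_params + output_params
```

This matches what `model.summary()` would report for such an architecture.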
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(30, activation='sigmoid'),
    tf.keras.layers.Dense(10, activation='softsign')
])
model.compile(optimizer='adam',
              loss='mean_squared_logarithmic_error',
              metrics=['accuracy'])
model.fi...
313/313 [==============================] - 0s 719us/step - loss: 0.1253 - accuracy: 0.9629
MIT
Neural Networks/Convolutional Neural Networks.ipynb
PeterJamesNee/Examples
Exercises
Experiment with this network:
1. Change the number of neurons in the hidden layer.
2. Add more hidden layers.
3. Change the activation function in the hidden layer to `relu`.
4. Change the activation in the output layer to `softmax`.
How does the performance of your network change with these modifications?
Task
Imp...
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(6, 5, activation='sigmoid', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
    tf.keras.layers.Conv2D(16, 5, activation='sigmoid'),
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(84...
Epoch 1/5
MIT
Neural Networks/Convolutional Neural Networks.ipynb
PeterJamesNee/Examples
Introduction
This notebook was intended to show nuclear isotope abundances as a function of baryon density. I later abandoned it, though. If you want to pick it up, go ahead. No guarantees, all rights reserved, and you are responsible for your own interpretation of all this, etc.
Imports
import numpy as np import matplotlib.pyplot as plt %matplotlib inline plt.rc('font', size=18) plt.rcParams['figure.figsize'] = (10.0, 7.0)
_____no_output_____
Unlicense
Abundances.ipynb
ErikHogenbirk/DMPlots
Read data
def read_datathief(fn): data = np.loadtxt(fn, converters={0: lambda x: x[:-1]}) return data[:, 0], data[:, 1] ab = {} elems = ['he4', 'd2', 'he3', 'li3'] ab['x_he4'], ab['he4'] = read_datathief('data/abundances/He4.txt') ab['x_d2'], ab['d2'] = read_datathief('data/abundances/D2.txt') ab['x_he3'], ab['he3'] = ...
_____no_output_____
Unlicense
Abundances.ipynb
ErikHogenbirk/DMPlots
Plots All on one
for el in elems: plt.plot(ab['x_' + el], ab[el]) plt.xscale('log') plt.yscale('log') plt.xlabel('Baryon fraction') plt.ylabel('Abundance') # plt.ylim(1e-10, 1)
_____no_output_____
Unlicense
Abundances.ipynb
ErikHogenbirk/DMPlots
Fancy plot
f, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex=True) axs = [ax1, ax2, ax3] c = {el: 'C%d' % i for i, el in enumerate(elems) } planck_ab = 0.02230 for el in elems: for ax in axs: ax.plot(ab['x_' + el], ab[el], color=c[el], lw=3) ax.axhline(ab[el+'_c'], color=c[el]) ax.axvline(planck_ab) ax1.s...
_____no_output_____
Unlicense
Abundances.ipynb
ErikHogenbirk/DMPlots
Supervised Learning This worksheet covers concepts from the second part of day 2 - Feature Engineering. It should take no more than 40-60 minutes to complete. Please raise your hand if you get stuck. Import the Libraries For this exercise, we will be using: * Pandas (http://pandas.pydata.org/pandas-docs/stable...
# Load Libraries - Make sure to run this cell! import pandas as pd import numpy as np import re from collections import Counter from sklearn import feature_extraction, tree, model_selection, metrics from yellowbrick.classifier import ClassificationReport from yellowbrick.classifier import ConfusionMatrix import matplot...
_____no_output_____
BSD-3-Clause
Notebooks/Day 2 - Feature Engineering and Supervised Learning/DGA Detection using Supervised Learning.ipynb
ahouseholder/machine-learning-for-security-professionals
Worksheet - DGA Detection using Machine Learning This worksheet is a step-by-step guide on how to detect domains that were generated using a "Domain Generation Algorithm" (DGA). We will walk you through the process of transforming raw domain strings to Machine Learning features and creating a decision tree classifier whic...
df_final = pd.read_csv('../../Data/dga_features_final_df.csv') print(df_final.isDGA.value_counts()) df_final.head() # Load dictionary of common english words from part 1 from six.moves import cPickle as pickle with open('../../Data/d_common_en_words' + '.pickle', 'rb') as f: d = pickle.load(f)
_____no_output_____
BSD-3-Clause
Notebooks/Day 2 - Feature Engineering and Supervised Learning/DGA Detection using Supervised Learning.ipynb
ahouseholder/machine-learning-for-security-professionals
Part 2 - Machine Learning To learn simple classification procedures using [sklearn](http://scikit-learn.org/stable/) we have split the workflow into 5 steps. Step 1: Prepare the feature matrix and ```target``` vector containing the URL labels - In statistics, the feature matrix is often referred to as ```X``` - target is a...
#Your code here ...
_____no_output_____
BSD-3-Clause
Notebooks/Day 2 - Feature Engineering and Supervised Learning/DGA Detection using Supervised Learning.ipynb
ahouseholder/machine-learning-for-security-professionals
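In pure Python terms, Step 1 just separates the label column from the feature columns (a minimal sketch with hypothetical rows; in the worksheet itself you would work on the `df_final` pandas DataFrame):

```python
# Hypothetical rows mimicking df_final: one label column plus numeric features.
rows = [
    {'isDGA': 1, 'length': 22, 'digits': 4, 'entropy': 3.9},
    {'isDGA': 0, 'length': 10, 'digits': 0, 'entropy': 2.7},
]

# target vector: just the labels
target = [r['isDGA'] for r in rows]

# feature matrix: every column except the label, in a fixed order
feature_names = ['length', 'digits', 'entropy']
feature_matrix = [[r[name] for name in feature_names] for r in rows]

print(target)          # [1, 0]
print(feature_matrix)  # [[22, 4, 3.9], [10, 0, 2.7]]
```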
Step 2: Simple Cross-Validation Tasks: - Split your feature matrix X and target vector into train and test subsets using sklearn [model_selection.train_test_split](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html)
# Simple Cross-Validation: Split the data set into training and test data #Your code here ...
_____no_output_____
BSD-3-Clause
Notebooks/Day 2 - Feature Engineering and Supervised Learning/DGA Detection using Supervised Learning.ipynb
ahouseholder/machine-learning-for-security-professionals
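A hand-rolled version of what `model_selection.train_test_split` does (a sketch on toy data; the real function also handles arrays, DataFrames, and stratification):

```python
import random

X = list(range(10))      # toy feature "matrix"
y = [i % 2 for i in X]   # toy labels

# Shuffle indices with a fixed seed, then cut off 25% for testing.
idx = list(range(len(X)))
random.Random(42).shuffle(idx)
n_test = len(X) // 4

test_idx, train_idx = idx[:n_test], idx[n_test:]
X_train = [X[i] for i in train_idx]
X_test = [X[i] for i in test_idx]
y_train = [y[i] for i in train_idx]
y_test = [y[i] for i in test_idx]

print(len(X_train), len(X_test))  # 8 2
```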
Step 3: Train the model and make a prediction Finally, we have prepared and segmented the data. Let's start classifying!! Tasks: - Use the sklearn [tree.DecisionTreeClassifier()](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html), create a decision tree with standard parameters,...
# Train the decision tree based on the entropy criterion #Your code here ... # For simplicity let's just copy the needed function in here again def H_entropy (x): # Calculate Shannon Entropy prob = [ float(x.count(c)) / len(x) for c in dict.fromkeys(list(x)) ] H = - sum([ p * np.log2(p) for p in prob ]) ...
_____no_output_____
BSD-3-Clause
Notebooks/Day 2 - Feature Engineering and Supervised Learning/DGA Detection using Supervised Learning.ipynb
ahouseholder/machine-learning-for-security-professionals
Step 4: Assess model accuracy with simple cross-validation Tasks: - Make predictions for all your data. Call the ```.predict()``` method on the clf with your training data ```X_train``` and store the results in a variable called ```target_pred```. - Use sklearn [metrics.accuracy_score](http://scikit-learn.org/stable/modu...
# fair approach: make prediction on test data portion #Your code here ... # Classification Report...neat summary #Your code here ...
_____no_output_____
BSD-3-Clause
Notebooks/Day 2 - Feature Engineering and Supervised Learning/DGA Detection using Supervised Learning.ipynb
ahouseholder/machine-learning-for-security-professionals
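What `metrics.accuracy_score` computes reduces to the fraction of matching labels (a sketch with made-up predictions):

```python
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0]

# accuracy = number of correct predictions / total predictions
correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)
print(accuracy)  # 0.8
```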
Step 5: Assess model accuracy with k-fold cross-validation Tasks: - Partition the dataset into *k* different subsets - Create *k* different models by training on *k-1* subsets and testing on the remaining subset - Measure the performance on each of the models and take the average measure. *Short-Cut* All of these steps can...
#Your code here ...
_____no_output_____
BSD-3-Clause
Notebooks/Day 2 - Feature Engineering and Supervised Learning/DGA Detection using Supervised Learning.ipynb
ahouseholder/machine-learning-for-security-professionals
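The partitioning step above can be sketched without sklearn's shortcut (indices only, so it works for any dataset):

```python
def kfold_indices(n, k):
    """Yield (train, test) index lists for k-fold cross-validation."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    idx = list(range(n))
    start = 0
    for size in fold_sizes:
        test = idx[start:start + size]           # held-out fold
        train = idx[:start] + idx[start + size:] # the other k-1 folds
        yield train, test
        start += size

folds = list(kfold_indices(10, 5))
print(len(folds))   # 5
print(folds[0][1])  # [0, 1]
```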
(Optional) Visualizing your Tree As an optional step, you can actually visualize your tree. The following code will generate a graph of your decision tree. You will need graphviz (http://www.graphviz.org) and pydotplus (or pydot) installed for this to work. The Griffon VM has this installed already, but if you try thi...
# These libraries are used to visualize the decision tree and require that you have GraphViz # and pydot or pydotplus installed on your computer. from sklearn.externals.six import StringIO from IPython.core.display import Image import pydotplus as pydot dot_data = StringIO() tree.export_graphviz(clf, out_file=dot...
_____no_output_____
BSD-3-Clause
Notebooks/Day 2 - Feature Engineering and Supervised Learning/DGA Detection using Supervised Learning.ipynb
ahouseholder/machine-learning-for-security-professionals
Other Models Now that you've built a Decision Tree, let's try out three other classifiers and see how they perform on this data. For this next exercise, create classifiers using: * Support Vector Machine * Random Forest * K-Nearest Neighbors (http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClas...
from sklearn import svm from sklearn.ensemble import RandomForestClassifier from sklearn.neighbors import KNeighborsClassifier #Create the Random Forest Classifier #Next, create the SVM classifier #Finally the knn
_____no_output_____
BSD-3-Clause
Notebooks/Day 2 - Feature Engineering and Supervised Learning/DGA Detection using Supervised Learning.ipynb
ahouseholder/machine-learning-for-security-professionals
Explain a Prediction In the example below, you can use LIME to explain how a classifier arrived at its prediction. Try running LIME with the various classifiers you've created and various rows to see how it functions.
import lime.lime_tabular explainer = lime.lime_tabular.LimeTabularExplainer(feature_matrix_train, feature_names=['length', 'digits', 'entropy', 'vowel-cons', 'ngrams'], class_names=['legit', 'isDGA'], ...
_____no_output_____
BSD-3-Clause
Notebooks/Day 2 - Feature Engineering and Supervised Learning/DGA Detection using Supervised Learning.ipynb
ahouseholder/machine-learning-for-security-professionals
k-Nearest Neighbor (kNN) exercise *Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.* The k...
# Run some setup code for this notebook. import random import numpy as np from cs231n.data_utils import load_CIFAR10 import matplotlib.pyplot as plt # This is a bit of magic to make matplotlib figures appear inline in the notebook # rather than in a new window. %matplotlib inline plt.rcParams['figure.figsize'] = (10....
_____no_output_____
WTFPL
assignment1/knn.ipynb
kamikat/cs231n
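The two-step process described below (compute distances, then vote among the k nearest neighbors) can be sketched on toy 1-D data (a minimal illustration, not the assignment's implementation):

```python
def knn_predict(x, train_x, train_y, k):
    """Classify x by majority vote among its k nearest training points."""
    order = sorted(range(len(train_x)), key=lambda i: abs(train_x[i] - x))
    votes = [train_y[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

train_x = [0.0, 0.1, 0.9, 1.0]
train_y = ['a', 'a', 'b', 'b']
print(knn_predict(0.2, train_x, train_y, k=3))  # a
```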
We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps: 1. First we must compute the distances between all test examples and all train examples. 2. Given these distances, for each test example we find the k nearest examples and have them vote for t...
# Open cs231n/classifiers/k_nearest_neighbor.py and implement # compute_distances_two_loops. # Test your implementation: dists = classifier.compute_distances_two_loops(X_test) print dists.shape # We can visualize the distance matrix: each row is a single test example and # its distances to training examples plt.imshow...
_____no_output_____
WTFPL
assignment1/knn.ipynb
kamikat/cs231n
**Inline Question 1:** Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)- What in the data is the cause behind the distinctly bright rows?- What causes the ...
# Now implement the function predict_labels and run the code below: # We use k = 1 (which is Nearest Neighbor). y_test_pred = classifier.predict_labels(dists, k=1) # Compute and print the fraction of correctly predicted examples num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print...
Got 137 / 500 correct => accuracy: 0.274000
WTFPL
assignment1/knn.ipynb
kamikat/cs231n
You should expect to see approximately `27%` accuracy. Now let's try out a larger `k`, say `k = 5`:
y_test_pred = classifier.predict_labels(dists, k=5) num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Got 139 / 500 correct => accuracy: 0.278000
WTFPL
assignment1/knn.ipynb
kamikat/cs231n
You should expect to see a slightly better performance than with `k = 1`.
# Now lets speed up distance matrix computation by using partial vectorization # with one loop. Implement the function compute_distances_one_loop and run the # code below: dists_one = classifier.compute_distances_one_loop(X_test) # To ensure that our vectorized implementation is correct, we make sure that it # agrees ...
Two loop version took 27.158314 seconds One loop version took 40.179075 seconds No loop version took 0.529196 seconds
WTFPL
assignment1/knn.ipynb
kamikat/cs231n
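The fully vectorized version typically relies on expanding the squared norm, ||x − y||² = ||x||² + ||y||² − 2·x·y, and broadcasting it over all test/train pairs at once (a sketch on random data; the assignment's own implementation may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
X_test = rng.standard_normal((3, 4))   # 3 hypothetical test points
X_train = rng.standard_normal((5, 4))  # 5 hypothetical training points

# ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y, broadcast over all pairs at once
sq_test = np.sum(X_test ** 2, axis=1)[:, None]    # shape (3, 1)
sq_train = np.sum(X_train ** 2, axis=1)[None, :]  # shape (1, 5)
cross = X_test @ X_train.T                        # shape (3, 5)
dists = np.sqrt(np.maximum(sq_test + sq_train - 2 * cross, 0))

# Sanity check against the naive pairwise computation
naive = np.array([[np.linalg.norm(t - s) for s in X_train] for t in X_test])
print(np.allclose(dists, naive))  # True
```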
Cross-validationWe have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
num_folds = 5 k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100] X_train_folds = [] y_train_folds = [] ################################################################################ # TODO: # # Split up the training data into folds. After splittin...
Got 162 / 500 correct => accuracy: 0.324000
WTFPL
assignment1/knn.ipynb
kamikat/cs231n
Monte Carlo - Forecasting Stock Prices - Part I *Suggested Answers follow (usually there are multiple ways to solve a problem in Python).* Load the data for Microsoft (‘MSFT’) for the period ‘2000-1-1’ until today.
import numpy as np import pandas as pd from pandas_datareader import data as wb import matplotlib.pyplot as plt from scipy.stats import norm %matplotlib inline data = pd.read_csv('D:/Python/MSFT_2000.csv', index_col = 'Date')
_____no_output_____
MIT
Python for Finance - Code Files/103 Monte Carlo - Predicting Stock Prices - Part I/CSV/Python 2 CSV/MC Predicting Stock Prices - Part I - Solution_CSV.ipynb
siddharthjain1611/Python_for_Finance_Investment_Fundamentals-and-Data-Analytics
Use the .pct_change() method to obtain the simple returns of Microsoft for the designated period, then convert them to log returns.
log_returns = np.log(1 + data.pct_change()) log_returns.tail() data.plot(figsize=(10, 6)); log_returns.plot(figsize = (10, 6))
_____no_output_____
MIT
Python for Finance - Code Files/103 Monte Carlo - Predicting Stock Prices - Part I/CSV/Python 2 CSV/MC Predicting Stock Prices - Part I - Solution_CSV.ipynb
siddharthjain1611/Python_for_Finance_Investment_Fundamentals-and-Data-Analytics
Assign the mean value of the log returns to a variable, called “U”, and their variance to a variable, called “var”.
u = log_returns.mean() u var = log_returns.var() var
_____no_output_____
MIT
Python for Finance - Code Files/103 Monte Carlo - Predicting Stock Prices - Part I/CSV/Python 2 CSV/MC Predicting Stock Prices - Part I - Solution_CSV.ipynb
siddharthjain1611/Python_for_Finance_Investment_Fundamentals-and-Data-Analytics
Calculate the drift, using the following formula: $$drift = u - \frac{1}{2} \cdot var$$
drift = u - (0.5 * var) drift
_____no_output_____
MIT
Python for Finance - Code Files/103 Monte Carlo - Predicting Stock Prices - Part I/CSV/Python 2 CSV/MC Predicting Stock Prices - Part I - Solution_CSV.ipynb
siddharthjain1611/Python_for_Finance_Investment_Fundamentals-and-Data-Analytics
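The last few cells can be put together on a tiny synthetic price series (a sketch; these numbers are made up, not MSFT data):

```python
import numpy as np

# Hypothetical price series
prices = np.array([100.0, 101.0, 99.5, 102.0, 103.5])

# log return: log(P_t / P_{t-1}) == log(1 + simple return)
log_returns = np.log(prices[1:] / prices[:-1])

u = log_returns.mean()
var = log_returns.var()
drift = u - 0.5 * var  # drift = u - var / 2
print(drift < u)       # True, since the variance is positive
```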
Store the standard deviation of the log returns in a variable, called “stdev”.
stdev = log_returns.std() stdev
_____no_output_____
MIT
Python for Finance - Code Files/103 Monte Carlo - Predicting Stock Prices - Part I/CSV/Python 2 CSV/MC Predicting Stock Prices - Part I - Solution_CSV.ipynb
siddharthjain1611/Python_for_Finance_Investment_Fundamentals-and-Data-Analytics
Siamese Neural Network Recommendation for Friends (for Website) This notebook presents the final code that will be used for the Movinder [website](https://movinder.herokuapp.com/) when `Get recommendation with SiameseNN!` is selected by the user.
import pandas as pd import json import datetime, time from sklearn.model_selection import train_test_split import itertools import os import zipfile import random import numpy as np import requests import matplotlib.pyplot as plt import scipy.sparse as sp from sklearn.metrics import roc_auc_score
_____no_output_____
MIT
movie_recommendation_with_LightFM_friends_WEBAPP.ipynb
LukasSteffensen/movielens-imdb-exploration
--- (1) Read data
movies = json.load(open('movies.json')) friends = json.load(open('friends.json')) ratings = json.load(open('ratings.json')) soup_movie_features = sp.load_npz('soup_movie_features_11.npz') soup_movie_features = soup_movie_features.toarray()
_____no_output_____
MIT
movie_recommendation_with_LightFM_friends_WEBAPP.ipynb
LukasSteffensen/movielens-imdb-exploration
(1.2) Simulate new friends' input The new group of friends will need to provide information that will later be used for training the model and predicting the ratings they will give to other movies. The friends will have a new id `new_friend_id`. They will provide a rating specified in the dictionary with the following...
new_friend_id = len(friends) new_ratings = [{'movie_id_ml': 302.0, 'rating': 4.0, 'friend_id': new_friend_id}, {'movie_id_ml': 304.0, 'rating': 4.0, 'friend_id': new_friend_id}, {'movie_id_ml': 307.0, 'rating': 4.0, 'friend_id': new_friend_id}] new_ratings new_friend = {'friend_id': new_frie...
_____no_output_____
MIT
movie_recommendation_with_LightFM_friends_WEBAPP.ipynb
LukasSteffensen/movielens-imdb-exploration
--- (2) Train the LightFM Model We will be using the [LightFM](http://lyst.github.io/lightfm/docs/index.html) implementation of SiameseNN to train our model using the user and item (i.e. movie) features. First, we create `scipy.sparse` matrices from the raw data so they can be used to fit the LightFM model.
from lightfm.data import Dataset from lightfm import LightFM from lightfm.evaluation import precision_at_k from lightfm.evaluation import auc_score
_____no_output_____
MIT
movie_recommendation_with_LightFM_friends_WEBAPP.ipynb
LukasSteffensen/movielens-imdb-exploration
(2.1) Build ID mappings We create a mapping between the user and item ids from our input data to indices that will be internally used by this model. This needs to be done since LightFM works with user and item ids that are consecutive non-negative integers. Using `dataset.fit` we assign an internal numerical id to e...
dataset = Dataset() item_str_for_eval = "x['title'],x['release'], x['unknown'], x['action'], x['adventure'],x['animation'], x['childrens'], x['comedy'], x['crime'], x['documentary'], x['drama'], x['fantasy'], x['noir'], x['horror'], x['musical'],x['mystery'], x['romance'], x['scifi'], x['thriller'], x['war'], x['west...
Mappings - Num friends: 192, num_items 1251.
MIT
movie_recommendation_with_LightFM_friends_WEBAPP.ipynb
LukasSteffensen/movielens-imdb-exploration
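The mapping itself is just a dictionary from raw ids to consecutive integers (a sketch of the idea behind `Dataset.fit`, with made-up ids; the real object keeps more bookkeeping):

```python
raw_friend_ids = [7, 42, 42, 3]
raw_movie_ids = [302, 304, 302]

# Assign each distinct raw id the next free integer, in first-seen order.
friend_map = {}
for fid in raw_friend_ids:
    friend_map.setdefault(fid, len(friend_map))

movie_map = {}
for mid in raw_movie_ids:
    movie_map.setdefault(mid, len(movie_map))

print(friend_map)  # {7: 0, 42: 1, 3: 2}
print(movie_map)   # {302: 0, 304: 1}
```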
(2.2) Build the interactions and feature matrices The `interactions` matrix contains interactions between `friend_id` and `movie_id_ml`. It contains a 1 if friend group `friend_id` rated movie `movie_id_ml`, and 0 otherwise.
(interactions, weights) = dataset.build_interactions(((int(x['friend_id']), int(x['movie_id_ml'])) for x in ratings)) print(repr(interactions))
<192x1251 sparse matrix of type '<class 'numpy.int32'>' with 59123 stored elements in COOrdinate format>
MIT
movie_recommendation_with_LightFM_friends_WEBAPP.ipynb
LukasSteffensen/movielens-imdb-exploration
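A dense analogue of that interaction matrix on toy ids (the real `build_interactions` returns a sparse COO matrix, as the output above shows):

```python
import numpy as np

n_friends, n_movies = 3, 4
ratings = [(0, 1), (0, 3), (2, 2)]  # hypothetical (friend_idx, movie_idx) pairs

interactions = np.zeros((n_friends, n_movies), dtype=np.int32)
for f, m in ratings:
    interactions[f, m] = 1          # 1 = this friend group rated this movie

print(interactions.sum())  # 3
```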
The `item_features` matrix is also sparse; it contains movie ids with their corresponding features. In the item features, we include the following: the movie title, its release date, all genres it belongs to, and vectorized representations of movie keywords, cast members, and the countries it was released in.
item_features = dataset.build_item_features(((x['movie_id_ml'], [eval("("+item_str_for_eval+")")]) for x in movies) ) print(repr(item_features))
<1251x2487 sparse matrix of type '<class 'numpy.float32'>' with 2502 stored elements in Compressed Sparse Row format>
MIT
movie_recommendation_with_LightFM_friends_WEBAPP.ipynb
LukasSteffensen/movielens-imdb-exploration
The `user_features` matrix is also sparse; it contains friend ids with their corresponding features. The user features include their age and gender.
user_features = dataset.build_user_features(((x['friend_id'], [eval(friend_str_for_eval)]) for x in friends) ) print(repr(user_features))
<192x342 sparse matrix of type '<class 'numpy.float32'>' with 384 stored elements in Compressed Sparse Row format>
MIT
movie_recommendation_with_LightFM_friends_WEBAPP.ipynb
LukasSteffensen/movielens-imdb-exploration
(2.3) Building a model After some hyperparameter tuning, we end up with the best model performance using the following values: - Epochs = 150 - Learning rate = 0.015 - Max sampled = 11 - Loss type = WARP References: - The WARP (Weighted Approximate-Rank Pairwise) loss for implicit feedback learning-to-rank. Originally impl...
epochs = 150 lr = 0.015 max_sampled = 11 loss_type = "warp" # "bpr" model = LightFM(learning_rate=lr, loss=loss_type, max_sampled=max_sampled) model.fit_partial(interactions, epochs=epochs, user_features=user_features, item_features=item_features) train_precision = precision_at_k(model, interactions, k=10, user_fe...
(1251,) Friends 191 Known positives: 301 | in & out 302 | l.a. confidential 307 | the devil's advocate Recommended: 48 | hoop dreams 292 | rosewood 255 | my best friend's wedding 286 | the english patient 284 | tin cup 299 | hoodlum ...
MIT
movie_recommendation_with_LightFM_friends_WEBAPP.ipynb
LukasSteffensen/movielens-imdb-exploration