We're working with the movielens data, which contains one rating per row, like this:
ratings = pd.read_csv(path + 'ratings.csv')
ratings.head()
Apache-2.0
courses/dl1/lesson5-movielens_max_playground.ipynb
maxwellmckinnon/fastai_maxfork
Just for display purposes, let's read in the movie names too.
movies = pd.read_csv(path + 'movies.csv')
movies.head()
Create subset for Excel

We create a crosstab of the most popular movies and the most movie-addicted users, which we'll copy into Excel to build a simple example. This isn't necessary for any of the modeling below, however.
g = ratings.groupby('userId')['rating'].count()
topUsers = g.sort_values(ascending=False)[:15]
g = ratings.groupby('movieId')['rating'].count()
topMovies = g.sort_values(ascending=False)[:15]
top_r = ratings.join(topUsers, rsuffix='_r', how='inner', on='userId')
top_r = top_r.join(topMovies, rsuffix='_r', how='inner', on='mo...
Collaborative filtering
val_idxs = get_cv_idxs(len(ratings))
wd = 2e-4
n_factors = 50
cf = CollabFilterDataset.from_csv(path, 'ratings.csv', 'userId', 'movieId', 'rating')
learn = cf.get_learner(n_factors, val_idxs, 64, opt_fn=optim.Adam)
learn.fit(1e-2, 2, wds=wd, cycle_len=1, cycle_mult=2)
Let's compare to some benchmarks. Here are [some benchmarks](https://www.librec.net/release/v1.3/example.html) on the same dataset for the popular Librec collaborative-filtering system. They report a best [RMSE](http://www.statisticshowto.com/rmse/) of 0.91. Since our loss is mean squared error, we'll need to take the square root of our loss...
math.sqrt(0.776)
Looking good - we've found a solution better than any of those benchmarks! Let's take a look at how the predictions compare to actuals for this model.
preds = learn.predict()
y = learn.data.val_y
sns.jointplot(preds, y, kind='hex', stat_func=None);
Analyze results

Movie bias
movie_names = movies.set_index('movieId')['title'].to_dict()
g = ratings.groupby('movieId')['rating'].count()
topMovies = g.sort_values(ascending=False).index.values[:3000]
topMovieIdx = np.array([cf.item2idx[o] for o in topMovies])
m = learn.model; m.cuda()
First, we'll look at the movie bias term. Here, our input is the movie id (a single id), and the output is the movie bias (a single float).
movie_bias = to_np(m.ib(V(topMovieIdx)))
movie_bias
movie_ratings = [(b[0], movie_names[i]) for i, b in zip(topMovies, movie_bias)]
Now we can look at the top and bottom rated movies. These ratings are corrected for different levels of reviewer sentiment, as well as different types of movies that different reviewers watch.
sorted(movie_ratings, key=lambda o: o[0])[:15]
sorted(movie_ratings, key=itemgetter(0))[:15]
sorted(movie_ratings, key=lambda o: o[0], reverse=True)[:15]
Embedding interpretation

We can now do the same thing for the embeddings.
movie_emb = to_np(m.i(V(topMovieIdx)))
movie_emb.shape
Because it's hard to interpret 50 embeddings, we use [PCA](https://plot.ly/ipython-notebooks/principal-component-analysis/) to simplify them down to just 3 vectors.
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
movie_pca = pca.fit(movie_emb.T).components_
movie_pca.shape
fac0 = movie_pca[0]
movie_comp = [(f, movie_names[i]) for f, i in zip(fac0, topMovies)]
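To see what shape the PCA step produces, here is a minimal, self-contained sketch on synthetic data (the 30 x 5 matrix is a made-up stand-in for `movie_emb`): fitting on the transpose treats each movie as a variable, so each principal component is one score per movie.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
fake_emb = rng.randn(30, 5)  # hypothetical: 30 movies x 5 embedding dims

# Fit on the transpose (dims x movies): components_ is (n_components, n_movies),
# i.e. one scalar score per movie along each discovered direction.
pca = PCA(n_components=3)
fake_pca = pca.fit(fake_emb.T).components_
print(fake_pca.shape)  # (3, 30)
```

With the real 3000 x 50 embedding matrix, the same call yields a (3, 3000) array, which is why `fac0 = movie_pca[0]` can be zipped directly against `topMovies`.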
Here's the 1st component. It seems to be 'easy watching' vs 'serious'.
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
fac1 = movie_pca[1]
movie_comp = [(f, movie_names[i]) for f, i in zip(fac1, topMovies)]
Here's the 2nd component. It seems to be 'CGI' vs 'dialog driven'.
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
We can draw a picture to see how various movies appear on the map of these components. This picture shows the first two components.
idxs = np.random.choice(len(topMovies), 50, replace=False)
X = fac0[idxs]
Y = fac1[idxs]
plt.figure(figsize=(15, 15))
plt.scatter(X, Y)
for i, x, y in zip(topMovies[idxs], X, Y):
    plt.text(x, y, movie_names[i], color=np.random.rand(3)*0.7, fontsize=11)
plt.show()
Collab filtering from scratch

Dot product example
a = T([[1., 2], [3, 4]])
b = T([[2., 2], [10, 10]])
a, b
a * b
(a * b).sum(1)

class DotProduct(nn.Module):
    def forward(self, u, m): return (u * m).sum(1)

model = DotProduct()
model(a, b)
Dot product model
u_uniq = ratings.userId.unique()
user2idx = {o: i for i, o in enumerate(u_uniq)}
ratings.userId = ratings.userId.apply(lambda x: user2idx[x])
m_uniq = ratings.movieId.unique()
movie2idx = {o: i for i, o in enumerate(m_uniq)}
ratings.movieId = ratings.movieId.apply(lambda x: movie2idx[x])
n_users = int(ratings.userId.nuniqu...
Bias
min_rating, max_rating = ratings.rating.min(), ratings.rating.max()
min_rating, max_rating

def get_emb(ni, nf):
    e = nn.Embedding(ni, nf)
    e.weight.data.uniform_(-0.01, 0.01)
    return e

class EmbeddingDotBias(nn.Module):
    def __init__(self, n_users, n_movies):
        super().__init__()
        (self.u, self.m, ...
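The prediction rule this bias model implements can be sketched in plain numpy: dot the user and movie vectors, add both bias terms, then squash through a sigmoid rescaled to the rating range. This is a sketch of the idea, not the fastai implementation; all names and numbers below are illustrative.

```python
import numpy as np

def predict_rating(u_vec, m_vec, u_bias, m_bias, min_r=0.5, max_r=5.0):
    """Dot-product-plus-bias prediction, squashed into [min_r, max_r]."""
    raw = np.dot(u_vec, m_vec) + u_bias + m_bias
    sig = 1.0 / (1.0 + np.exp(-raw))       # sigmoid -> (0, 1)
    return sig * (max_r - min_r) + min_r   # rescale to the rating range

# Tiny made-up 2-d factors and biases:
r = predict_rating(np.array([0.1, 0.2]), np.array([0.3, -0.1]), 0.05, 0.4)
```

The sigmoid rescaling guarantees every prediction lands strictly between `min_rating` and `max_rating`, which makes the optimization problem easier than predicting an unconstrained score.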
Mini net
class EmbeddingNet(nn.Module):
    def __init__(self, n_users, n_movies, nh=10, p1=0.05, p2=0.5):
        super().__init__()
        (self.u, self.m) = [get_emb(*o) for o in [
            (n_users, n_factors), (n_movies, n_factors)]]
        self.lin1 = nn.Linear(n_factors*2, nh)
        self.lin2 = nn.Linear(nh, 1)
        ...
Qubit encoding
def whiten(arr):
    arr_mean = np.mean(arr)
    arr_std = np.std(arr)
    whitened = (arr - arr_mean) / arr_std
    return whitened

plt.imshow(whiten(X[0, :].reshape(28, 28)), cmap='gray')
plt.colorbar()
plt.imshow(X_r[0, :].reshape(8, 1), cmap='gray')
plt.colorbar()
plt.tight_layout()
plt.axis('off')

def rescale_to_an...
MIT
mnist-pca.ipynb
jiwoncpark/cs269q-quantum-computer-programming
Interact Exercise 3

Imports
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
:0: FutureWarning: IPython widgets are experimental and may change in the future.
MIT
assignments/assignment05/InteractEx03.ipynb
aschaffn/phys202-2015-work
Using interact for animation with data

A [*soliton*](http://en.wikipedia.org/wiki/Soliton) is a constant-velocity wave that maintains its shape as it propagates. Solitons arise from non-linear wave equations, such as the [Korteweg–de Vries](http://en.wikipedia.org/wiki/Korteweg%E2%80%93de_Vries_equation) equation, which ...
def soliton(x, t, c, a):
    """Return phi(x, t) for a soliton wave with constants c and a."""
    phiarg = (np.sqrt(c)/2.)*(x - c*t - a)
    phi = (c/2.)/np.cosh(phiarg)**2  # sech^2 profile (the original used cosh^2, which grows unboundedly)
    return phi

assert np.allclose(soliton(np.array([0]), 0.0, 1.0, 0.0), np.array([0.5]))
To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays:
tmin = 0.0
tmax = 10.0
tpoints = 100
t = np.linspace(tmin, tmax, tpoints)

xmin = 0.0
xmax = 10.0
xpoints = 200
x = np.linspace(xmin, xmax, xpoints)

c = 1.0
a = 0.0
Compute a 2d NumPy array called `phi`:

* It should have a dtype of `float`.
* It should have a shape of `(xpoints, tpoints)`.
* `phi[i,j]` should contain the value $\phi(x[i],t[j])$.
phi = np.zeros([200, 100], dtype='float')
for i in range(0, 200):
    for j in range(0, 100):
        phi[i, j] = soliton(x[i], t[j], c, a)
# is there a list comprehension that would make this better?

assert phi.shape == (xpoints, tpoints)
assert phi.ndim == 2
assert phi.dtype == np.dtype(float)
assert phi[0,0] == solit...
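The double loop (and the comment wondering about a list comprehension) can be replaced with numpy broadcasting: give `x` a trailing axis and `t` a leading axis, and one vectorized call fills the whole grid. This sketch is self-contained, using the standard KdV soliton form $\phi = \tfrac{c}{2}\,\mathrm{sech}^2\!\big(\tfrac{\sqrt{c}}{2}(x-ct-a)\big)$.

```python
import numpy as np

def soliton(x, t, c, a):
    # KdV soliton: (c/2) * sech^2( (sqrt(c)/2) * (x - c*t - a) )
    phiarg = (np.sqrt(c) / 2.) * (x - c * t - a)
    return (c / 2.) / np.cosh(phiarg) ** 2

x = np.linspace(0.0, 10.0, 200)
t = np.linspace(0.0, 10.0, 100)

# x[:, None] has shape (200, 1) and t[None, :] has shape (1, 100);
# broadcasting gives phi shape (200, 100) with phi[i, j] = soliton(x[i], t[j], ...)
phi = soliton(x[:, None], t[None, :], 1.0, 0.0)
```

Besides being shorter, the broadcast version avoids 20,000 Python-level function calls.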
Write a `plot_soliton_data(i)` function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful.
def plot_soliton_data(i=0):
    """Plot the soliton data at t[i] versus x."""
    plt.plot(x, phi[:, i])
    plt.xlim((0, 10))
    plt.ylim((0, 3000))
    plt.title("t = " + str(t[i]))

plot_soliton_data(0)

assert True  # leave this for grading the plot_soliton_data function
Use `interact` to animate the `plot_soliton_data` function versus time.
interact(plot_soliton_data, i=(0, 99))

assert True  # leave this for grading the interact with plot_soliton_data cell
`GiRaFFE_NRPy`: Main Driver

Author: Patrick Nelson

**Notebook Status:** Validation in progress

**Validation Notes:** This code assembles the various parts needed for GRFFE...
# Step 0: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import os, sys
nrpy_dir_path = os.path.join("..")
if nrpy_dir_path not in sys.path:
    sys.path.append(nrpy_dir_path)
from outputC import outCfunction, lhrh, add_to_Cfunction_dict  # NRPy+: Core C...
BSD-2-Clause
in_progress/Tutorial-GiRaFFE_NRPy_Main_Driver_new_way.ipynb
philchang/nrpytutorial
Step 1: Calculate the right-hand sides \[Back to [top](toc)\]

$$\label{rhs}$$

In the method of lines using Runge-Kutta methods, each timestep involves several "RK substeps" during which we run the same set of function calls. These can be divided into two groups: one in which the RHSs themselves are calculated, and ...
import GRHD.equations as GRHD    # NRPy+: Generate general relativistic hydrodynamics equations
import GRFFE.equations as GRFFE  # NRPy+: Generate general relativistic force-free electrodynamics equations

gammaDD = ixp.register_gridfunctions_for_single_rank2("AUXEVOL", "gammaDD", "sym01", DIM=3)
betaU = ixp.register_gri...
Step 1.b: Calculate the source terms of the $\partial_t A_i$, $\partial_t \tilde{S}_i$, and $\partial_t [\sqrt{\gamma} \Phi]$ right-hand sides \[Back to [top](toc)\]

$$\label{source}$$

With the operands of the gradient-of-divergence operators stored in memory from the previous step, we can now calculate the terms on the RHS...
def add_to_Cfunction_dict__AD_gauge_term_psi6Phi_fin_diff(includes=None):
    xi_damping = par.Cparameters("REAL", thismodule, "xi_damping", 0.1)
    GRFFE.compute_psi6Phi_rhs_damping_term(alpha, psi6Phi, xi_damping)
    AevolParen_dD = ixp.declarerank1("AevolParen_dD", DIM=3)
    PhievolParenU_dD = ixp.declarerank2("Phievo...
We also need to compute the source term of the $\tilde{S}_i$ evolution equation. This term involves derivatives of the four-metric, so we can save some effort by reusing the interpolations of the metric gridfunctions onto the cell faces done earlier, which will allow us to take a finite-difference derivative wit...
subdir = "boundary_conditions"
cmd.mkdir(os.path.join(out_dir, subdir))
import GiRaFFE_NRPy.GiRaFFE_NRPy_BCs as BC
BC.GiRaFFE_NRPy_BCs(os.path.join(out_dir, subdir))
Step 2.b: Compute $B^i$ from $A_i$ \[Back to [top](toc)\]

$$\label{a2b}$$

Now, we will calculate the magnetic field as the curl of the vector potential at all points in our domain; this requires care to be taken in the ghost zones, which is detailed in [Tutorial-GiRaFFE_NRPy-A2B](Tutorial-GiRaFFE_NRPy-A2B.ipynb). This c...
import GiRaFFE_NRPy.GiRaFFE_NRPy_C2P_P2C as C2P_P2C

def add_to_Cfunction_dict__cons_to_prims(StildeD, BU, gammaDD, betaU, alpha, includes=None):
    C2P_P2C.GiRaFFE_NRPy_C2P(StildeD, BU, gammaDD, betaU, alpha)
    values_to_print = [
        lhrh(lhs=gri.gfaccess("in_gfs","StildeD0"), rhs=C2P_P2C.outStildeD[0]),...
Step 2.d: Apply outflow boundary conditions to $\bar{v}^i$ \[Back to [top](toc)\]

$$\label{velocity_bc}$$

Now, we can apply outflow boundary conditions to the Valencia three-velocity. This specific type of boundary condition helps avoid numerical error "flowing" into our grid. This function has already been generated [a...
%%writefile $out_dir/GiRaFFE_NRPy_Main_Driver.h
// Structure to track ghostzones for PPM:
typedef struct __gf_and_gz_struct__ {
  REAL *gf;
  int gz_lo[4], gz_hi[4];
} gf_and_gz_struct;
// Some additional constants needed for PPM:
const int VX=0, VY=1, VZ=2, BX=3, BY=4, BZ=5;
const int NUM_RECONSTRUCT_GFS = 6;
// Include AL...
Step 4: Self-Validation against `GiRaFFE_NRPy_Main_Driver.py` \[Back to [top](toc)\]

$$\label{code_validation}$$

To validate the code in this tutorial, we check for agreement between the files

1. that were generated in this tutorial and
1. those that are generated in the module [`GiRaFFE_NRPy_Main_Driver.py`](../../edit/in_...
gri.glb_gridfcs_list = []
# Define the directory that we wish to validate against:
valdir = os.path.join("GiRaFFE_validation_Ccodes")
cmd.mkdir(valdir)
import GiRaFFE_NRPy.GiRaFFE_NRPy_Main_Driver as md
md.GiRaFFE_NRPy_Main_Driver_generate_all(valdir)
With both sets of codes generated, we can now compare them against each other.
import difflib
import sys
print("Printing difference between original C code and this code...")
# Open the files to compare
files = ["GiRaFFE_NRPy_Main_Driver.h",
         "RHSs/calculate_AD_gauge_term_psi6Phi_flux_term_for_RHSs.h",
         "RHSs/calculate_AD_gauge_psi6Phi_RHSs.h",
         "PPM/reconstruct_set_of_pr...
Step 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]

$$\label{latex_pdf_output}$$

The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial direct...
import cmdline_helper as cmd  # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-GiRaFFE_NRPy_Main_Driver",
                                           location_of_template_file=os.path.join(".."))
Text is a highly unstructured form of data: various types of noise are present in it, and it is not readily analyzable without pre-processing. The entire process of cleaning and standardizing text, making it noise-free and ready for analysis, is known as text preprocessing. We will divide it into 2 parts: *...
train.head()
MIT
twitter-sentiment-analysis-word2vec-doc2vec (2).ipynb
msharma043510/Twitter-Sentiment-Analysis
Data Inspection

Let’s check out a few **non** racist/sexist tweets.
train[train['label'] == 0].head(10)
Now check out a few racist/sexist tweets.
train[train['label'] == 1].head(10)
There are quite a few words and characters which are not really required. So we will try to keep only those words which are important and add value.

Let’s check the dimensions of the train and test datasets.
train.shape, test.shape
The train set has 31,962 tweets and the test set has 17,197 tweets.

Let’s have a glimpse at the label distribution in the train dataset.
train["label"].value_counts()
In the train dataset, we have 2,242 (~7%) tweets labeled as racist or sexist, and 29,720 (~93%) tweets labeled as non racist/sexist. So, it is an imbalanced classification challenge.

Now we will check the distribution of tweet lengths (in characters) in both the train and test data.
plt.hist(train.tweet.str.len(), bins=20, label='train')
plt.hist(test.tweet.str.len(), bins=20, label='test')
plt.legend()
plt.show()
In any natural language processing task, cleaning raw text data is an important step. It helps us get rid of unwanted words and characters, which in turn yields better features. If we skip this step, there is a higher chance that we are working with noisy and inconsistent data. The objective of this ste...
combi = train.append(test, ignore_index=True, sort=True)
combi.shape
Given below is a user-defined function to remove unwanted text patterns from the tweets.
def remove_pattern(input_txt, pattern):
    r = re.findall(pattern, input_txt)
    for i in r:
        input_txt = re.sub(i, '', input_txt)
    return input_txt
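Here is a runnable sketch of the same function with a small usage example (the sample tweet is made up). One defensive tweak: wrapping each found match in `re.escape` guards against matches that happen to contain regex metacharacters, which would otherwise be re-interpreted as patterns by `re.sub`.

```python
import re

def remove_pattern(input_txt, pattern):
    # Delete every occurrence of `pattern` found in the text.
    for match in re.findall(pattern, input_txt):
        input_txt = re.sub(re.escape(match), '', input_txt)
    return input_txt

cleaned = remove_pattern("thanks @user for the #lyft ride", r"@[\w]*")
print(cleaned)  # "thanks  for the #lyft ride"
```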
**1. Removing Twitter Handles (@user)**

Let’s create a new column, tidy_tweet, which will contain the cleaned and processed tweets. Note that we have passed “@[\w]*” as the pattern to the remove_pattern function. It is a regular expression which will pick out any word starting with ‘@’.
combi['tidy_tweet'] = np.vectorize(remove_pattern)(combi['tweet'], "@[\w]*")
combi.head(10)
**2. Removing Punctuations, Numbers, and Special Characters**

Here we will replace everything except letters and hashtags with spaces. The regular expression “[^a-zA-Z#]” matches anything except letters and ‘#’.
combi.tidy_tweet = combi.tidy_tweet.str.replace("[^a-zA-Z#]", " ")
combi.head(10)
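A quick standalone check of what the character class does (the sample string is made up): digits and punctuation become spaces, while letters and ‘#’ survive.

```python
import re

raw = "it's #friday! 123"
# "[^a-zA-Z#]" matches any character that is NOT a letter or '#'
tidy = re.sub("[^a-zA-Z#]", " ", raw)
print(tidy.split())  # ['it', 's', '#friday']
```

Note the apostrophe splits "it's" into "it s"; such one-letter fragments are mopped up by the short-word removal in the next step.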
**3. Removing Short Words**

We have to be a little careful here in selecting the length of the words we want to remove. I have decided to remove all words of length 3 or less. For example, terms like “hmm” and “oh” are of very little use, so it is better to get rid of them.
combi.tidy_tweet = combi.tidy_tweet.apply(lambda x: ' '.join([w for w in x.split() if len(w) > 3]))
combi.head(10)
You can see the difference between the raw tweets and the cleaned tweets (tidy_tweet) quite clearly. Only the important words have been retained, and the noise (numbers, punctuation, and special characters) has been removed.

**4. Text Normalization**

Here we will use nltk’s PorterStemmer() function to norm...
tokenized_tweet = combi.tidy_tweet.apply(lambda x: x.split())
tokenized_tweet.head()

# Now we can normalize the tokenized tweets.
from nltk.stem.porter import *
stemmer = PorterStemmer()
tokenized_tweet = tokenized_tweet.apply(lambda x: [stemmer.stem(i) for i in x])  # stemming
tokenized_tweet.head()

# Now let’s stit...
We can see most of the words are positive or neutral. Words like love, great, friend, and life are the most frequent. This doesn’t give us any idea about the words associated with the racist/sexist tweets, so we will plot separate wordclouds for both classes (racist/sexist or not) in our train data.

**B) Words i...
normal_words = ' '.join([text for text in combi['tidy_tweet'][combi['label'] == 0]])
wordcloud = WordCloud(width=800, height=500, random_state=21, max_font_size=110).generate(normal_words)
plt.figure(figsize=(10, 7))
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis('off')
plt.show()
Most of the frequent words are compatible with the sentiment, i.e., non-racist/sexist tweets. Similarly, we will plot the word cloud for the other sentiment; expect to see negative, racist, and sexist terms.

**C) Racist/Sexist Tweets**
negative_words = ' '.join([text for text in combi['tidy_tweet'][combi['label'] == 1]])
wordcloud = WordCloud(width=800, height=500, random_state=21, max_font_size=110).generate(negative_words)
plt.figure(figsize=(10, 7))
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis('off')
plt.show()
As we can clearly see, most of the words have negative connotations. So, it seems we have pretty good text data to work on. Next, we will look at the hashtags/trends in our Twitter data.
# function to collect hashtags
def hashtag_extract(x):
    hashtags = []
    # Loop over the words in the tweet
    for i in x:
        ht = re.findall(r"#(\w+)", i)
        hashtags.append(ht)
    return hashtags

# extracting hashtags from non racist/sexist tweets
HT_regular = hashtag_extract(combi['tidy_tweet'][co...
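A condensed, self-contained sketch of the extraction (the two sample tweets are made up): `r"#(\w+)"` captures the word characters after each ‘#’, producing one list of hashtags per tweet.

```python
import re

def hashtag_extract(tweets):
    # One list of hashtags per tweet; "#(\w+)" captures the text after '#'.
    return [re.findall(r"#(\w+)", tweet) for tweet in tweets]

tags = hashtag_extract(["love my #dog", "#run #healthy life"])
print(tags)  # [['dog'], ['run', 'healthy']]
```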
Now that we have prepared our lists of hashtags for both sentiments, we can plot the top ‘n’ hashtags. First, let’s check the hashtags in the non-racist/sexist tweets.

**Non-Racist/Sexist Tweets**
a = nltk.FreqDist(HT_regular)
d = pd.DataFrame({'Hashtag': list(a.keys()), 'Count': list(a.values())})

# selecting top 20 most frequent hashtags
d = d.nlargest(columns="Count", n=20)
plt.figure(figsize=(20, 5))
ax = sns.barplot(data=d, x="Hashtag", y="Count")
ax.set(ylabel='Count')
# plt.xti...
All these hashtags are positive, which makes sense. I am expecting negative terms in the plot of the second list. Let’s check the most frequent hashtags appearing in the racist/sexist tweets.

**Racist/Sexist Tweets**
a = nltk.FreqDist(HT_negative)
d = pd.DataFrame({'Hashtag': list(a.keys()), 'Count': list(a.values())})

# selecting top 20 most frequent hashtags
d = d.nlargest(columns="Count", n=20)
plt.figure(figsize=(20, 5))
ax = sns.barplot(data=d, x="Hashtag", y="Count")
ax.set(ylabel='Count')
# plt.xt...
As expected, most of the terms are negative, with a few neutral terms as well. So it’s not a bad idea to keep these hashtags in our data, as they contain useful information. Next, we will try to extract features from the tokenized tweets.

Bag-of-Words Features

To analyse preprocessed data, it needs to be converted int...
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
import gensim
Let’s start with the **Bag-of-Words** features. Consider a corpus C of D documents {d1, d2, ..., dD} and N unique tokens extracted from C. The N tokens (words) form a dictionary, and the size of the bag-of-words matrix M is D x N. Each row of M contains the frequency of tokens in doc...
bow_vectorizer = CountVectorizer(max_df=0.90, min_df=2, max_features=1000, stop_words='english')
bow = bow_vectorizer.fit_transform(combi['tidy_tweet'])
bow.shape
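The D x N matrix described above can be built by hand in a few lines, which makes the structure concrete (the toy corpus is made up; `CountVectorizer` additionally handles stop words, document-frequency cut-offs, and sparse storage):

```python
from collections import Counter

docs = ["the cat sat", "the cat ran", "dogs ran"]      # toy corpus, D = 3
vocab = sorted({w for d in docs for w in d.split()})  # N unique tokens

# D x N term-frequency matrix: M[i][j] = count of vocab[j] in docs[i]
M = [[Counter(d.split())[w] for w in vocab] for d in docs]
print(vocab)  # ['cat', 'dogs', 'ran', 'sat', 'the']
print(M[0])   # [1, 0, 0, 1, 1]
```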
TF-IDF Features

This is another frequency-based method, but it differs from the bag-of-words approach in that it takes into account not just the occurrence of a word in a single document (or tweet) but in the entire corpus. TF-IDF works by penalising the common words, assigning them ...
tfidf_vectorizer = TfidfVectorizer(max_df=0.90, min_df=2, max_features=1000, stop_words='english')
tfidf = tfidf_vectorizer.fit_transform(combi['tidy_tweet'])
tfidf.shape
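To see why TF-IDF down-weights common words, here is a hand computation using one common textbook definition, TF = (count in document) / (terms in document) and IDF = log(N / n); note that scikit-learn's `TfidfVectorizer` uses a smoothed variant of this formula, and the toy corpus below is made up.

```python
import math

docs = [["the", "cat", "sat"], ["the", "dog", "ran"]]

def tf(term, doc):
    return doc.count(term) / len(doc)

def idf(term, docs):
    # log(total docs / docs containing the term)
    return math.log(len(docs) / sum(term in d for d in docs))

def tfidf(term, doc, docs):
    return tf(term, doc) * idf(term, docs)

w = tfidf("cat", docs[0], docs)  # rare term -> nonzero weight
z = tfidf("the", docs[0], docs)  # appears in every doc -> weight 0
```

A word that appears in every document gets IDF = log(1) = 0, so its TF-IDF weight vanishes regardless of how often it occurs; this is the penalisation the text describes.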
Word2Vec Features

Word embeddings are the modern way of representing words as vectors. The objective of word embeddings is to redefine high-dimensional word features as low-dimensional feature vectors while preserving the contextual similarity in the corpus. They are able to achieve tasks like **King - man + woman = Q...
**1. Word2Vec Embeddings**Word2Vec is not a single algorithm but a combination of two techniques – **CBOW (Continuous bag of words)** and **Skip-gram** model. Both of these are shallow neural networks which map word(s) to the target variable which is also a word(s). Both of these techniques learn weights which act as w...
%%time
tokenized_tweet = combi['tidy_tweet'].apply(lambda x: x.split())  # tokenizing
model_w2v = gensim.models.Word2Vec(
    tokenized_tweet,
    size=200,     # desired no. of features/independent variables
    window=5,     # context window size
    min_count=2,  # Ignores all words with tot...
CPU times: user 2min 48s, sys: 895 ms, total: 2min 49s Wall time: 1min 38s
Let’s play a bit with our Word2Vec model and see how it performs. We will specify a word, and the model will pull out the most similar words from the corpus.
model_w2v.wv.most_similar(positive="dinner")
model_w2v.wv.most_similar(positive="trump")  # access via .wv; the bare model call is deprecated in newer gensim
From the above two examples, we can see that our word2vec model does a good job of finding the most similar words for a given word. But how is it able to do so? That’s because it has learned vectors for every unique word in our data and it uses cosine similarity to find out the most similar vectors (words).Let’s check ...
model_w2v.wv['food']
len(model_w2v.wv['food'])  # The length of the vector is 200
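The cosine similarity the text mentions is easy to compute directly; this standalone numpy sketch (with made-up 2-d vectors) shows the formula word2vec's `most_similar` relies on.

```python
import numpy as np

def cosine_similarity(a, b):
    # cos(theta) = a.b / (|a| * |b|); 1.0 means the vectors point the same way.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

s = cosine_similarity(np.array([1.0, 0.0]), np.array([1.0, 1.0]))
print(s)  # 0.7071... = cos(45 degrees)
```

Because it depends only on direction, not magnitude, cosine similarity compares words whose vectors have very different norms on an equal footing.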
Preparing Vectors for Tweets

Since our data contains tweets and not just words, we’ll have to figure out a way to use the word vectors from the word2vec model to create a vector representation for an entire tweet. There is a simple solution to this problem: we can simply take the mean of all the word vectors present in the tweet...
def word_vector(tokens, size):
    vec = np.zeros(size).reshape((1, size))
    count = 0
    for word in tokens:
        try:
            vec += model_w2v[word].reshape((1, size))
            count += 1.
        except KeyError:  # handling the case where the token is not in vocabulary
            continue
    if count...
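Since the cell above is truncated, here is a complete, self-contained sketch of the same averaging idea; a plain dict of made-up 2-d vectors stands in for the word2vec vocabulary, and out-of-vocabulary tokens are simply skipped.

```python
import numpy as np

# Hypothetical stand-in for model_w2v's vocabulary of 200-d vectors.
toy_model = {"good": np.array([1.0, 3.0]), "food": np.array([3.0, 1.0])}

def word_vector(tokens, size, model):
    vec, count = np.zeros((1, size)), 0
    for word in tokens:
        if word in model:  # skip out-of-vocabulary tokens
            vec += model[word].reshape((1, size))
            count += 1
    return vec / count if count else vec

v = word_vector(["good", "food", "oov"], 2, toy_model)
print(v)  # [[2. 2.]] -- the mean of the two known vectors
```

Guarding `count` against zero keeps a tweet made entirely of out-of-vocabulary tokens from producing a division-by-zero; it just stays the zero vector.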
Preparing word2vec feature set…
wordvec_arrays = np.zeros((len(tokenized_tweet), 200))
for i in range(len(tokenized_tweet)):
    wordvec_arrays[i, :] = word_vector(tokenized_tweet[i], 200)
wordvec_df = pd.DataFrame(wordvec_arrays)
wordvec_df.shape
Now we have 200 new features, whereas in Bag-of-Words and TF-IDF we had 1000 features.

2. Doc2Vec Embedding

The Doc2Vec model is an unsupervised algorithm for generating vectors for sentences/paragraphs/documents. This approach is an extension of word2vec. The major difference between the two is that doc2vec provides an ad...
from tqdm import tqdm
tqdm.pandas(desc="progress-bar")
from gensim.models.doc2vec import LabeledSentence
To implement doc2vec, we have to **labelise** or **tag** each tokenised tweet with unique IDs. We can do so by using Gensim’s *LabeledSentence()* function.
def add_label(twt):
    output = []
    for i, s in zip(twt.index, twt):
        output.append(LabeledSentence(s, ["tweet_" + str(i)]))
    return output

labeled_tweets = add_label(tokenized_tweet)  # label all the tweets
Let’s have a look at the result.
labeled_tweets[:6]
Now let’s train a **doc2vec** model.
%%time
model_d2v = gensim.models.Doc2Vec(dm=1,             # dm = 1 for 'distributed memory' model
                                  dm_mean=1,        # dm_mean = 1 for using mean of the context word vectors
                                  vector_size=200,  # no. of desired features
                                  window=5,         # width o...
100%|██████████| 49159/49159 [00:00<00:00, 1287900.95it/s]
**Preparing doc2vec Feature Set**
docvec_arrays = np.zeros((len(tokenized_tweet), 200))
for i in range(len(combi)):
    docvec_arrays[i,:] = model_d2v.docvecs[i].reshape((1,200))
docvec_df = pd.DataFrame(docvec_arrays)
docvec_df.shape
We are now done with all the pre-modeling stages required to get the data in the proper form and shape. We will be building models on the datasets with different feature sets prepared in the earlier sections: Bag-of-Words, TF-IDF, word2vec vectors, and doc2vec vectors. We will use the following algorithms to build models, comparing them by F1 score on a validation set: Logistic Regression, Support Vector Machine, RandomForest, and XGBoost.
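Since F1 score is the evaluation metric throughout, here is a minimal pure-Python sketch of how it is computed (the toy labels below are made up for illustration, not drawn from the dataset):

```python
def f1(y_true, y_pred):
    # F1 is the harmonic mean of precision and recall
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# toy example: 3 true positives, 1 false positive, 1 false negative
score = f1([0, 1, 1, 0, 1, 1, 0, 0], [0, 1, 0, 0, 1, 1, 1, 0])
```

In the cells below the same quantity is computed with sklearn's `f1_score`.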
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
**Bag-of-Words Features**

We will first try to fit the logistic regression model on the Bag-of-Words (BoW) features.
# Extracting train and test BoW features
train_bow = bow[:31962,:]
test_bow = bow[31962:,:]

# splitting data into training and validation set
xtrain_bow, xvalid_bow, ytrain, yvalid = train_test_split(train_bow, train['label'],
                                                          random_state=42,
                                                          test_size=0.3)

lreg = LogisticRegression(solver='lbfgs')

# training the model
lreg.fit(xtrain_bow, ytrain)

# predicting on the validation set
prediction = lreg.predict_proba(xvalid_bow)
# if prediction is greater than or equal to 0.3 then 1, else 0
prediction_int = prediction[:,1] >= 0.3
prediction_int = prediction_int.astype(np.int)

# calculating f1 score for the validation set
f1_score(yvalid, prediction_int)
Now let’s make predictions for the test dataset and create a submission file.
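The cells here keep only the positive-class probability from `predict_proba` and binarise it at a 0.3 cut-off; the same thresholding in plain Python (with made-up probabilities):

```python
# hypothetical positive-class probabilities, as returned by predict_proba(...)[:, 1]
probs = [0.05, 0.31, 0.72, 0.10, 0.30]

# probability >= 0.3 becomes label 1, otherwise 0
labels = [1 if p >= 0.3 else 0 for p in probs]
```

Lowering the cut-off below 0.5 trades precision for recall, which helps F1 on this imbalanced dataset.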
test_pred = lreg.predict_proba(test_bow)
test_pred_int = test_pred[:,1] >= 0.3
test_pred_int = test_pred_int.astype(np.int)
test['label'] = test_pred_int
submission = test[['id','label']]
submission.to_csv('sub_lreg_bow.csv', index=False) # writing data to a CSV file
**TF-IDF Features**

We’ll follow the same steps as above, but now for the TF-IDF feature set.
train_tfidf = tfidf[:31962,:]
test_tfidf = tfidf[31962:,:]

xtrain_tfidf = train_tfidf[ytrain.index]
xvalid_tfidf = train_tfidf[yvalid.index]

lreg.fit(xtrain_tfidf, ytrain)

prediction = lreg.predict_proba(xvalid_tfidf)
prediction_int = prediction[:,1] >= 0.3
prediction_int = prediction_int.astype(np.int)
f1_score(yvalid, prediction_int)
**Word2Vec Features**
train_w2v = wordvec_df.iloc[:31962,:]
test_w2v = wordvec_df.iloc[31962:,:]

xtrain_w2v = train_w2v.iloc[ytrain.index,:]
xvalid_w2v = train_w2v.iloc[yvalid.index,:]

lreg.fit(xtrain_w2v, ytrain)

prediction = lreg.predict_proba(xvalid_w2v)
prediction_int = prediction[:,1] >= 0.3
prediction_int = prediction_int.astype(np.int)
f1_score(yvalid, prediction_int)
**Doc2Vec Features**
train_d2v = docvec_df.iloc[:31962,:]
test_d2v = docvec_df.iloc[31962:,:]

xtrain_d2v = train_d2v.iloc[ytrain.index,:]
xvalid_d2v = train_d2v.iloc[yvalid.index,:]

lreg.fit(xtrain_d2v, ytrain)

prediction = lreg.predict_proba(xvalid_d2v)
prediction_int = prediction[:,1] >= 0.3
prediction_int = prediction_int.astype(np.int)
f1_score(yvalid, prediction_int)
Doc2Vec features do not seem to be capturing the right signals, as the F1 score on the validation set is quite low.

**Support Vector Machine (SVM)**

Support Vector Machine (SVM) is a supervised machine learning algorithm which can be used for both classification and regression challenges. However, it is mostly used in classification problems.
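With the linear kernel used below, what the SVM ultimately learns is a separating hyperplane; a toy sketch of its decision rule (the weights `w` and bias `b` here are made up for illustration, not fitted values):

```python
# hypothetical weights and bias of a fitted linear SVM
w, b = [2.0, -1.0], 0.5

def decide(x):
    # the sign of the signed distance w . x + b picks the class
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else 0
```

Fitting chooses `w` and `b` so that the margin between the two classes is as wide as possible.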
from sklearn import svm
**Bag-of-Words Features**
svc = svm.SVC(kernel='linear', C=1, probability=True).fit(xtrain_bow, ytrain)
prediction = svc.predict_proba(xvalid_bow)
prediction_int = prediction[:,1] >= 0.3
prediction_int = prediction_int.astype(np.int)
f1_score(yvalid, prediction_int)
Again let’s make predictions for the test dataset and create another submission file.
test_pred = svc.predict_proba(test_bow)
test_pred_int = test_pred[:,1] >= 0.3
test_pred_int = test_pred_int.astype(np.int)
test['label'] = test_pred_int
submission = test[['id','label']]
submission.to_csv('sub_svm_bow.csv', index=False)
Here the validation score is slightly lower than the Logistic Regression score for Bag-of-Words features.

**TF-IDF Features**
svc = svm.SVC(kernel='linear', C=1, probability=True).fit(xtrain_tfidf, ytrain)
prediction = svc.predict_proba(xvalid_tfidf)
prediction_int = prediction[:,1] >= 0.3
prediction_int = prediction_int.astype(np.int)
f1_score(yvalid, prediction_int)
**Word2Vec Features**
svc = svm.SVC(kernel='linear', C=1, probability=True).fit(xtrain_w2v, ytrain)
prediction = svc.predict_proba(xvalid_w2v)
prediction_int = prediction[:,1] >= 0.3
prediction_int = prediction_int.astype(np.int)
f1_score(yvalid, prediction_int)
**Doc2Vec Features**
svc = svm.SVC(kernel='linear', C=1, probability=True).fit(xtrain_d2v, ytrain)
prediction = svc.predict_proba(xvalid_d2v)
prediction_int = prediction[:,1] >= 0.3
prediction_int = prediction_int.astype(np.int)
f1_score(yvalid, prediction_int)
**Random Forest**

Random Forest is a versatile machine learning algorithm capable of performing both regression and classification tasks. It is a kind of ensemble learning method, where several weak models combine to form a powerful model. In Random Forest, we grow multiple trees as opposed to a single decision tree. To classify a new object, each tree gives a classification ("votes" for a class), and the forest chooses the classification with the most votes.
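The "forest chooses the classification with the most votes" step can be sketched as a simple majority vote over per-tree predictions (the votes below are made up for illustration):

```python
from collections import Counter

# hypothetical class predictions from five individual trees
tree_votes = [1, 0, 1, 1, 0]

# the forest's prediction is the most common vote
majority_class = Counter(tree_votes).most_common(1)[0][0]
```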
from sklearn.ensemble import RandomForestClassifier
**Bag-of-Words Features**

First we will train our RandomForest model on the Bag-of-Words features and check its performance on the validation set.
rf = RandomForestClassifier(n_estimators=400, random_state=11).fit(xtrain_bow, ytrain)
prediction = rf.predict(xvalid_bow)
f1_score(yvalid, prediction) # validation score
Let’s make predictions for the test dataset and create another submission file.
test_pred = rf.predict(test_bow)
test['label'] = test_pred
submission = test[['id','label']]
submission.to_csv('sub_rf_bow.csv', index=False)
**TF-IDF Features**
rf = RandomForestClassifier(n_estimators=400, random_state=11).fit(xtrain_tfidf, ytrain)
prediction = rf.predict(xvalid_tfidf)
f1_score(yvalid, prediction)
**Word2Vec Features**
rf = RandomForestClassifier(n_estimators=400, random_state=11).fit(xtrain_w2v, ytrain)
prediction = rf.predict(xvalid_w2v)
f1_score(yvalid, prediction)
**Doc2Vec Features**
rf = RandomForestClassifier(n_estimators=400, random_state=11).fit(xtrain_d2v, ytrain)
prediction = rf.predict(xvalid_d2v)
f1_score(yvalid, prediction)
**XGBoost**

Extreme Gradient Boosting (xgboost) is an advanced implementation of the gradient boosting algorithm. It has both a linear model solver and tree learning algorithms. Its ability to do parallel computation on a single machine makes it extremely fast. It also has additional features for doing cross-validation and finding important variables.
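For the `binary:logistic` objective used later in this section, the boosted trees' leaf scores are summed (each shrunk by the learning rate eta) and squashed through a sigmoid to give a probability. A sketch with made-up leaf scores (real xgboost also adds a base score, omitted here):

```python
import math

tree_outputs = [0.4, -0.1, 0.2]   # hypothetical leaf scores from successive trees
eta = 0.3                         # learning rate shrinks each tree's contribution

# additive model: margin is the eta-scaled sum of tree outputs
margin = sum(eta * o for o in tree_outputs)

# binary:logistic maps the margin to a probability via the sigmoid
prob = 1 / (1 + math.exp(-margin))
```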
from xgboost import XGBClassifier
**Bag-of-Words Features**
xgb_model = XGBClassifier(max_depth=6, n_estimators=1000).fit(xtrain_bow, ytrain)
prediction = xgb_model.predict(xvalid_bow)
f1_score(yvalid, prediction)

test_pred = xgb_model.predict(test_bow)
test['label'] = test_pred
submission = test[['id','label']]
submission.to_csv('sub_xgb_bow.csv', index=False)
**TF-IDF Features**
xgb = XGBClassifier(max_depth=6, n_estimators=1000).fit(xtrain_tfidf, ytrain)
prediction = xgb.predict(xvalid_tfidf)
f1_score(yvalid, prediction)
**Word2Vec Features**
xgb = XGBClassifier(max_depth=6, n_estimators=1000, nthread=3).fit(xtrain_w2v, ytrain)
prediction = xgb.predict(xvalid_w2v)
f1_score(yvalid, prediction)
The XGBoost model on word2vec features has outperformed all the previous models.

**Doc2Vec Features**
xgb = XGBClassifier(max_depth=6, n_estimators=1000, nthread=3).fit(xtrain_d2v, ytrain)
prediction = xgb.predict(xvalid_d2v)
f1_score(yvalid, prediction)

import xgboost as xgb
Here we will use DMatrices. A DMatrix can contain both the features and the target.
dtrain = xgb.DMatrix(xtrain_w2v, label=ytrain)
dvalid = xgb.DMatrix(xvalid_w2v, label=yvalid)
dtest = xgb.DMatrix(test_w2v)

# Parameters that we are going to tune
params = {
    'objective': 'binary:logistic',
    'max_depth': 6,
    'min_child_weight': 1,
    'eta': .3,
    'subsample': 1,
    'colsample_bytree': 1
}
/opt/conda/lib/python3.6/site-packages/xgboost/core.py:587: FutureWarning: Series.base is deprecated and will be removed in a future version if getattr(data, 'base', None) is not None and \
We will prepare a custom evaluation metric to calculate F1 score.
def custom_eval(preds, dtrain):
    labels = dtrain.get_label().astype(np.int)
    preds = (preds >= 0.3).astype(np.int)
    return [('f1_score', f1_score(labels, preds))]
**General Approach for Parameter Tuning**

We will follow the steps below to tune the parameters.
1. Choose a relatively high learning rate. Usually a learning rate of 0.3 is used at this stage.
2. Tune tree-specific parameters such as max_depth, min_child_weight, subsample, and colsample_bytree, keeping the learning rate fixed.
3. Tune the learning rate.
4. Finally, train the model again with the tuned parameters, using early stopping on the validation set.
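The tuning cells that follow all share the same skeleton: iterate over a parameter grid, cross-validate each combination, and keep the best-scoring one. A stand-in sketch (`mock_cv_score` is a hypothetical placeholder for a real `xgb.cv` run; its peak at (9, 7) is invented purely for illustration):

```python
def mock_cv_score(max_depth, min_child_weight):
    # placeholder for a real cross-validation run; peaks at (9, 7)
    return 1.0 - abs(max_depth - 9) * 0.05 - abs(min_child_weight - 7) * 0.03

best_params, max_f1 = None, 0.0
for max_depth in range(6, 10):
    for min_child_weight in range(5, 8):
        score = mock_cv_score(max_depth, min_child_weight)
        if score > max_f1:            # keep the best combination seen so far
            max_f1 = score
            best_params = (max_depth, min_child_weight)
```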
gridsearch_params = [
    (max_depth, min_child_weight)
    for max_depth in range(6,10)
    for min_child_weight in range(5,8)
]

max_f1 = 0. # initializing with 0
best_params = None
for max_depth, min_child_weight in gridsearch_params:
    print("CV with max_depth={}, min_child_weight={}".format(max_depth, min_child_weight))

    # Update our parameters
    params['max_depth'] = max_depth
    params['min_child_weight'] = min_child_weight

    # Cross-validation
    cv_results = xgb.cv(params,
                        dtrain,
                        feval=custom_eval,
                        num_boost_round=200,
                        maximize=True,
                        seed=16,
                        nfold=5,
                        early_stopping_rounds=10)

    # Finding best F1 Score
    mean_f1 = cv_results['test-f1_score-mean'].max()
    boost_rounds = cv_results['test-f1_score-mean'].argmax()
    print("\tF1 Score {} for {} rounds".format(mean_f1, boost_rounds))
    if mean_f1 > max_f1:
        max_f1 = mean_f1
        best_params = (max_depth, min_child_weight)
CV with max_depth=6, min_child_weight=5
CV with max_depth=6, min_child_weight=6
CV with max_depth=6, min_child_weight=7
CV with max_depth=7, min_child_weight=5
CV with max_depth=7, min_child_weight=6
CV with max_depth=7, min_child_weight=7
CV with max_depth=8, min_child_weight=5
CV with max_depth=8, min_child_weight=6
...
Updating max_depth and min_child_weight parameters.
params['max_depth'] = 9
params['min_child_weight'] = 7
Tuning *subsample* and *colsample*
gridsearch_params = [
    (subsample, colsample)
    for subsample in [i/10. for i in range(5,10)]
    for colsample in [i/10. for i in range(5,10)]
]

max_f1 = 0.
best_params = None
for subsample, colsample in gridsearch_params:
    print("CV with subsample={}, colsample={}".format(subsample, colsample))

    # Update our parameters
    params['subsample'] = subsample
    params['colsample_bytree'] = colsample

    # Run CV
    cv_results = xgb.cv(params,
                        dtrain,
                        feval=custom_eval,
                        num_boost_round=200,
                        maximize=True,
                        seed=16,
                        nfold=5,
                        early_stopping_rounds=10)

    # Finding best F1 Score
    mean_f1 = cv_results['test-f1_score-mean'].max()
    boost_rounds = cv_results['test-f1_score-mean'].argmax()
    print("\tF1 Score {} for {} rounds".format(mean_f1, boost_rounds))
    if mean_f1 > max_f1:
        max_f1 = mean_f1
        best_params = (subsample, colsample)
CV with subsample=0.5, colsample=0.5
	F1 Score 0.6542134 for 48 rounds
CV with subsample=0.5, colsample=0.6
	F1 Score 0.6542134 for 48 rounds
CV with subsample=0.5, colsample=0.7
	F1 Score 0.6542134 for 48 rounds
CV with subsample=0.5, colsample=0.8
	F1 Score 0.6542134 for 48 rounds
CV with subsample=0.5, colsample=0.9
...
Updating *subsample* and *colsample_bytree*
params['subsample'] = 0.9
params['colsample_bytree'] = 0.5
Now let’s tune the *learning rate*.
max_f1 = 0.
best_params = None
for eta in [.3, .2, .1, .05, .01, .005]:
    print("CV with eta={}".format(eta))

    # Update ETA
    params['eta'] = eta

    # Run CV
    cv_results = xgb.cv(params,
                        dtrain,
                        feval=custom_eval,
                        num_boost_round=1000,
                        maximize=True,
                        seed=16,
                        nfold=5,
                        early_stopping_rounds=20)

    # Finding best F1 Score
    mean_f1 = cv_results['test-f1_score-mean'].max()
    boost_rounds = cv_results['test-f1_score-mean'].argmax()
    print("\tF1 Score {} for {} rounds".format(mean_f1, boost_rounds))
    if mean_f1 > max_f1:
        max_f1 = mean_f1
        best_params = eta
CV with eta=0.3
	F1 Score 0.678087 for 97 rounds
CV with eta=0.2
	F1 Score 0.6725521999999999 for 60 rounds
CV with eta=0.1
	F1 Score 0.6811619999999999 for 149 rounds
CV with eta=0.05
	F1 Score 0.6785198 for 243 rounds
CV with eta=0.01
	F1 Score 0.1302024 for 0 rounds
CV with eta=0.005
	F1 Score 0.1302024 for 0 rounds
...
Let’s have a look at the final list of tuned parameters.
params = {
    'colsample_bytree': 0.5,
    'eta': 0.1,
    'max_depth': 9,
    'min_child_weight': 7,
    'objective': 'binary:logistic',
    'subsample': 0.9
}
Finally, we can use these tuned parameters in our xgboost model. We have used early stopping of 10, which means that if the model’s performance doesn’t improve for 10 consecutive rounds, training is stopped.
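The early-stopping rule (stop once the validation score has not improved for 10 consecutive rounds) can be sketched as follows, with a made-up score sequence:

```python
def stopping_round(scores, patience=10):
    # returns the boosting round at which training halts (higher score = better)
    best, best_round = float("-inf"), 0
    for i, s in enumerate(scores):
        if s > best:
            best, best_round = s, i      # new best: reset the patience counter
        elif i - best_round >= patience:
            return i                     # no improvement for `patience` rounds
    return len(scores) - 1

# score peaks at round 2, then plateaus: training stops 10 rounds later
halt = stopping_round([0.50, 0.60, 0.61] + [0.60] * 15)
```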
xgb_model = xgb.train(
    params,
    dtrain,
    feval=custom_eval,
    num_boost_round=1000,
    maximize=True,
    evals=[(dvalid, "Validation")],
    early_stopping_rounds=10
)

test_pred = xgb_model.predict(dtest)
test['label'] = (test_pred >= 0.3).astype(np.int)
submission = test[['id','label']]
submission.to_csv('sub_xgb_w2v_finetuned.csv', index=False)
**Abnormality Detection in Musculoskeletal Radiographs**

The objective is to build a machine learning model that can detect an abnormality in X-ray radiographs. Such models can help provide healthcare access in parts of the world where access to skilled radiologists is limited. According to a study on t...
from keras.applications.densenet import DenseNet169, DenseNet121, preprocess_input
from keras.preprocessing.image import ImageDataGenerator, load_img, image
from keras.models import Sequential, Model, load_model
from keras.layers import Conv2D, MaxPool2D
from keras.layers import Activation, Dropout, Flatten, Dense
from...
Using TensorFlow backend.
MIT
src/xr_finger_model.ipynb
rajkumargithub/denset.mura
3.1 Data preprocessing
# Utility function to find the list of files in a directory, excluding hidden files.
def listdir_nohidden(path):
    for f in os.listdir(path):
        if not f.startswith('.'):
            yield f
3.1.1 Creating a CSV file containing the image path & label
def create_images_metadata_csv(category, study_types):
    """
    This function creates a csv file containing the path of images, label.
    """
    image_data = {}
    study_label = {'positive': 1, 'negative': 0}
    #study_types = ['XR_ELBOW','XR_FINGER','XR_FOREARM','XR_HAND','XR_HUMERUS','XR_SHOULDER','XR_WRIST']
    ...