diagnostics Collect data for diagnostics
singles = []
study_singles = []
sets = {'everything': metadata, 'all_emp': emp_metadata, 'qc_filtered': filtered_metadata}
ordering = ['everything', 'all_emp', 'qc_filtered']
for setSize in sorted(subsets.keys()):
    name = 'subset_' + str(setSize)
    sets[name] = metadata[metadata[name]]
    ordering...
code/04-subsets-prevalence/subset_samples_by_empo_and_study.ipynb
cuttlefishh/emp
bsd-3-clause
visualize data Two quick sanity checks: We expect the subset distribution of 'number samples' (first plot) to follow that of the 'filtered' dataset, but to be cut off at a certain level such that only the lower part remains. The distribution of 'number studies' (second plot) should be identical to the one of the 'fil...
empo_ordering = list(data[data.set == 'everything'].sort_values('samples', ascending=False).group)
fig, ax = plt.subplots(2, 1, figsize=(20, 8))
sn.barplot(x="set", y="samples", hue="group", data=data,
           order=ordering, hue_order=empo_ordering, ax=ax[...
code/04-subsets-prevalence/subset_samples_by_empo_and_study.ipynb
cuttlefishh/emp
bsd-3-clause
We want to make sure that samples drawn within an EMPO group are equally (not proportionally) drawn from the studies that fall into that EMPO group. We should observe the same trimming of distributions as in the 'number samples' graph above. However, the picture gets blurred by the fact that the overall number of sam...
fig, ax = plt.subplots(1, 1, figsize=(20, 8))
sn.barplot(x="set", y="samples", hue="study", data=study_data, order=ordering, ax=ax,
           hue_order=list(study_data[study_data.set == 'everything'].sort_values('samples', ascending=False).study),
           )
l...
code/04-subsets-prevalence/subset_samples_by_empo_and_study.ipynb
cuttlefishh/emp
bsd-3-clause
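The equal-per-study draw described above can be sketched with pandas' GroupBy.sample, here on a tiny hypothetical metadata table (the column names and values are invented for illustration):

```python
import pandas as pd

# Hypothetical metadata: each sample annotated with an EMPO group and a study ID.
metadata = pd.DataFrame({
    'sample_id': range(12),
    'empo_3': ['Water (non-saline)'] * 12,
    'study_id': [1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3],
})

# Equal (not proportional) sampling: draw the same number of samples
# from every study that falls into the group.
n_per_study = 2
subset = metadata.groupby('study_id').sample(n=n_per_study, random_state=42)
print(subset['study_id'].value_counts().to_dict())  # every study contributes 2 samples
```

Study 1 holds half of all samples but contributes no more than studies 2 and 3, which is exactly the equal-draw behavior the check above looks for.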
Prepare Vectors
# set up targets for the human-categorized data
targets = pd.DataFrame.from_dict(human_categorized, 'index')
targets[0] = pd.Categorical(targets[0])
targets['code'] = targets[0].cat.codes
# form: | word (label) | language | code (1-5)
tmp_dict = {}
for key in human_categorized:
    tmp_dict[key] = tsvopener.etymdict[k...
Prototyping semi-supervised.ipynb
Trevortds/Etymachine
gpl-2.0
Use scikit-learn's semi-supervised learning Scikit-learn provides two semi-supervised methods: Label Propagation and Label Spreading. The difference is in how they regularize.
num_points = 1000
num_test = 50
x = vstack([vectors[:num_points], supervised_vectors]).toarray()
t = all_sents['code'][:num_points].append(targets['code'])
x_test = x[-num_test:]
t_test = t[-num_test:]
x = x[:-num_test]
t = t[:-num_test]
label_prop_model = LabelSpreading(kernel='knn')
from time import time
print("f...
Prototyping semi-supervised.ipynb
Trevortds/Etymachine
gpl-2.0
Measuring effectiveness.
from sklearn.metrics import precision_score, accuracy_score, f1_score, recall_score
t_pred = label_prop_model.predict(x_test)
print("Metrics based on 50 hold-out points")
print("Macro")
print("accuracy: %f" % accuracy_score(t_test, t_pred))
print("precision: %f" % precision_score(t_test, t_pred, average='macro'))
...
Prototyping semi-supervised.ipynb
Trevortds/Etymachine
gpl-2.0
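The macro averaging used above weights every class equally, whereas micro averaging pools all decisions before computing one global score. A small sketch of the difference, with made-up labels:

```python
from sklearn.metrics import precision_score

y_true = [0, 0, 0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 0, 1, 1, 2, 1]

# Macro: average the per-class precisions (1.0, 0.5, 1.0) -> 5/6.
# Micro: pool all decisions first; for single-label data this equals accuracy.
print(precision_score(y_true, y_pred, average='macro'))
print(precision_score(y_true, y_pred, average='micro'))
```

With imbalanced classes, macro scores can diverge a long way from micro scores, which is why the notebook reports both.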
PCA: Let's see what it looks like Performing PCA
supervised_vectors
import matplotlib.pyplot as pl
u, s, v = np.linalg.svd(supervised_vectors.toarray())
pca = np.dot(u[:,0:2], np.diag(s[0:2]))
english = np.empty((0,2))
french = np.empty((0,2))
greek = np.empty((0,2))
latin = np.empty((0,2))
norse = np.empty((0,2))
other = np.empty((0,2))
for i in range(pca.sha...
Prototyping semi-supervised.ipynb
Trevortds/Etymachine
gpl-2.0
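The SVD-based projection used above can be sketched in isolation on random data. One detail worth noting: PCA strictly requires mean-centered data, so this sketch centers first (an assumption added here, not taken from the notebook):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 10))
Xc = X - X.mean(axis=0)               # center the data before the SVD

# Thin SVD: X = U S V^T; the first two principal components are U[:, :2] * S[:2].
u, s, vt = np.linalg.svd(Xc, full_matrices=False)
pca_2d = u[:, :2] * s[:2]

print(pca_2d.shape)                   # (100, 2), one 2D point per sample
```

The same projection can equivalently be written as Xc @ vt[:2].T, since X V = U S.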
The three bases $\{\phi^n_k\}_{k=0}^{N-3}$ are implemented with slightly different scaling in shenfun. The first, with $n=0$, is obtained with no special scaling using
N = 20 D0 = FunctionSpace(N, 'C', bc=(0, 0), basis='Heinrichs')
docs/source/fasttransforms.ipynb
spectralDNS/shenfun
bsd-2-clause
The second basis is implemented in Shenfun as $\phi_k = \frac{2}{k+1}\phi^1_k$, which can be simplified as <!-- Equation labels as ordinary links --> <div id="eq:ft:shen"></div> $$ \label{eq:ft:shen} \tag{16} \phi_k(x) = T_k-T_{k+2}, \quad k=0,1, \ldots, N-3, $$ and implemented as
D1 = FunctionSpace(N, 'C', bc=(0, 0)) # this is the default basis
docs/source/fasttransforms.ipynb
spectralDNS/shenfun
bsd-2-clause
Because of the scaling the expansion coefficients for $\phi_k$ are $\hat{u}^{\phi}_k=\frac{k+1}{2}\hat{u}^1_k$. Using (14) we get $$ \hat{u}^{\phi}_k = \frac{1}{2N}\text{dst}^{II}(\boldsymbol{u}/\sin \boldsymbol{\theta})_k, \quad k = 0, 1, \ldots, N-3. $$ The third basis is also scaled and implemented in Shenfun as $\...
D2 = FunctionSpace(N, 'U', bc=(0, 0), quad='GC') # quad='GU' is default for U
docs/source/fasttransforms.ipynb
spectralDNS/shenfun
bsd-2-clause
and the expansion coefficients are found as $\hat{u}^{\psi}_k = \frac{(k+3)(k+2)}{2} \hat{u}^2_k$. For verification of all the fast transforms we first create a vector consisting of random expansion coefficients, and then transform it backwards to physical space
f = Function(D0, buffer=np.random.random(N))
f[-2:] = 0
fb = f.backward().copy()
docs/source/fasttransforms.ipynb
spectralDNS/shenfun
bsd-2-clause
Next, we perform the regular projections into the three spaces D0, D1 and D2, using the default inner product in $L^2_{\omega^{-1/2}}$ for D0 and D1, whereas $L^2_{\omega^{1/2}}$ is used for D2. Now u0, u1 and u2 will be the three solution vectors $\boldsymbol{\hat{u}}^{\varphi}$, $\boldsymbol{\hat{u}}^{\phi}$ and $\bo...
u0 = project(fb, D0)
u1 = project(fb, D1)
u2 = project(fb, D2)
docs/source/fasttransforms.ipynb
spectralDNS/shenfun
bsd-2-clause
Now compute the fast transforms and assert that they are equal to u0, u1 and u2
theta = np.pi*(2*np.arange(N)+1)/(2*N)
# Test for n=0
dct = fftw.dctn(fb.copy(), type=2)
ck = np.ones(N); ck[0] = 2
d0 = dct(fb/np.sin(theta)**2)/(ck*N)
assert np.linalg.norm(d0-u0) < 1e-8, np.linalg.norm(d0-f0)
# Test for n=1
dst = fftw.dstn(fb.copy(), type=2)
d1 = dst(fb/np.sin(theta))/(2*N)
assert np.linalg.norm(d1-...
docs/source/fasttransforms.ipynb
spectralDNS/shenfun
bsd-2-clause
That's it! If you make it to here with no errors, then the three tests pass, and the fast transforms are equal to the slow ones, at least within given precision. Let's try some timings
%timeit project(fb, D1)
%timeit dst(fb/np.sin(theta))/(2*N)
docs/source/fasttransforms.ipynb
spectralDNS/shenfun
bsd-2-clause
We can precompute the sine term, because it does not change
dd = np.sin(theta)*2*N
%timeit dst(fb/dd)
docs/source/fasttransforms.ipynb
spectralDNS/shenfun
bsd-2-clause
The other two transforms are approximately the same speed.
%timeit dct(fb/np.sin(theta)**2)/(ck*N)
docs/source/fasttransforms.ipynb
spectralDNS/shenfun
bsd-2-clause
3) Make sure you have all the dependencies installed (replacing gpu with cpu for cpu-only mode):
import pip
#pip.main(['install','-r','requirements-gpu.txt'])
pip.main(['install','-r','requirements-cpu.txt'])
pip.main(['install', 'SimpleITK>=1.0.0'])
demos/PROMISE12/PROMISE12_Demo_Notebook.ipynb
NifTK/NiftyNet
apache-2.0
Training a network from the command line The simplest way to use NiftyNet is via the command-line net_segment.py script, normally run from the NiftyNet root directory with a command like this: python net_segment.py train --conf demos/PROMISE12/promise12_demo_train_config.ini --max_iter 10 N...
import os
import sys
import niftynet
sys.argv = ['', 'train', '-a', 'net_segment',
            '--conf', os.path.join('demos','PROMISE12','promise12_demo_train_config.ini'),
            '--max_iter', '10']
niftynet.main()
demos/PROMISE12/PROMISE12_Demo_Notebook.ipynb
NifTK/NiftyNet
apache-2.0
Now you have trained (a few iterations of) a deep learning network for medical image segmentation. If you have some time on your hands, you can finish training the network (by leaving off the max_iter argument) and try it out, by running the following command python net_segment.py inference --conf demos/PROMISE12/promi...
import os
import sys
import niftynet
sys.argv = ['', 'inference', '-a', 'net_segment',
            '--conf', os.path.join('demos','PROMISE12','promise12_demo_inference_config.ini')]
niftynet.main()
demos/PROMISE12/PROMISE12_Demo_Notebook.ipynb
NifTK/NiftyNet
apache-2.0
Otherwise, you can load up some pre-trained weights for the network: python net_segment.py inference --conf demos/PROMISE12/promise12_demo_config.ini --model_dir demos/PROMISE12/pretrained or run the following Python code in the notebook
import os
import sys
import niftynet
sys.argv = ['', 'inference', '-a', 'net_segment',
            '--conf', os.path.join('demos','PROMISE12','promise12_demo_inference_config.ini'),
            '--model_dir', os.path.join('demos','PROMISE12','pretrained')]
niftynet.main()
demos/PROMISE12/PROMISE12_Demo_Notebook.ipynb
NifTK/NiftyNet
apache-2.0
As a first exercise, we'll solve the 1D linear convection equation with a square wave initial condition, defined as follows: \begin{equation} u(x,0)=\begin{cases}2 & \text{where } 0.5\leq x \leq 1,\\ 1 & \text{everywhere else in } (0, 2) \end{cases} \end{equation} We also need a boundary condition on $x$: let $u=1$ at $...
nx = 41   # try changing this number from 41 to 81 and Run All ... what happens?
dx = 2/(nx-1)
nt = 25
dt = .02
c = 1     # assume wavespeed of c = 1
x = numpy.linspace(0, 2, nx)
notebook/02_01_1DConvection.ipynb
MedievalSure/ToStudy
mit
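This initial condition can be sketched compactly with a boolean mask (the notebook itself builds it step by step with numpy.where and intersect1d, which is more instructive):

```python
import numpy as np

nx = 41
x = np.linspace(0, 2, nx)

u = np.ones(nx)                  # u = 1 everywhere in (0, 2) ...
u[(x >= 0.5) & (x <= 1)] = 2     # ... except u = 2 on 0.5 <= x <= 1

print(u)
```

The combined mask does in one line what the intersection of the two numpy.where index sets does below.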
We also need to set up our initial conditions. Here, we use the NumPy function ones() to define an array which is nx elements long with every value equal to $1$. How useful! We then change a slice of that array to the value $u=2$, to get the square wave, and we print out the initial array just to admire it. But which va...
u = numpy.ones(nx)      # numpy function ones()
lbound = numpy.where(x >= 0.5)
ubound = numpy.where(x <= 1)
print(lbound)
print(ubound)
notebook/02_01_1DConvection.ipynb
MedievalSure/ToStudy
mit
That leaves us with two vectors: lbound, which has the indices for $x \geq 0.5$, and ubound, which has the indices for $x \leq 1$. To combine these two, we can use an intersection, with numpy.intersect1d.
bounds = numpy.intersect1d(lbound, ubound)
u[bounds] = 2   # setting u = 2 between 0.5 and 1 as per our I.C.s
print(u)
notebook/02_01_1DConvection.ipynb
MedievalSure/ToStudy
mit
Remember that Python can also combine commands; we could have instead written Python u[numpy.intersect1d(numpy.where(x >= 0.5), numpy.where(x <= 1))] = 2 but that can be a little hard to read. Now let's take a look at those initial conditions we've built with a handy plot.
pyplot.plot(x, u, color='#003366', ls='--', lw=3)
pyplot.ylim(0, 2.5);
notebook/02_01_1DConvection.ipynb
MedievalSure/ToStudy
mit
It does look pretty close to what we expected. But it looks like the sides of the square wave are not perfectly vertical. Is that right? Think for a bit. Now it's time to write some code for the discrete form of the convection equation using our chosen finite-difference scheme. For every element of our array u, we nee...
for n in range(1, nt):
    un = u.copy()
    for i in range(1, nx):
        u[i] = un[i] - c*dt/dx*(un[i]-un[i-1])
notebook/02_01_1DConvection.ipynb
MedievalSure/ToStudy
mit
Note: We will learn later that the code as written above is quite inefficient, and there are better ways to write this, Python-style. But let's carry on. Now let's inspect our solution array after advancing in time with a line plot.
pyplot.plot(x, u, color='#003366', ls='--', lw=3)
pyplot.ylim(0, 2.5);
notebook/02_01_1DConvection.ipynb
MedievalSure/ToStudy
mit
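The inefficiency mentioned in the note above is the inner spatial loop, which can be replaced by a single slice operation. A sketch under the same parameters (the vectorized update is a standard NumPy idiom, shown here for comparison rather than taken from this notebook):

```python
import numpy as np

nx, nt = 41, 25
dx, dt, c = 2/(nx-1), .02, 1

u = np.ones(nx)
u[10:21] = 2                     # square-wave initial condition on 0.5 <= x <= 1

for n in range(nt):
    un = u.copy()
    # one slice operation replaces the loop over i = 1, ..., nx-1
    u[1:] = un[1:] - c*dt/dx*(un[1:] - un[:-1])
```

The slice form computes exactly the same values as the double loop, but lets NumPy do the arithmetic in C instead of the Python interpreter.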
That's funny. Our square wave has definitely moved to the right, but it's no longer in the shape of a top-hat. What's going on? Dig deeper The solution differs from the expected square wave because the discretized equation is an approximation of the continuous differential equation that we want to solve. There are err...
## problem parameters
nx = 41
dx = 2/(nx-1)
nt = 10
dt = .02

## initial conditions
u = numpy.ones(nx)
u[numpy.intersect1d(lbound, ubound)] = 2
notebook/02_01_1DConvection.ipynb
MedievalSure/ToStudy
mit
How does it look?
pyplot.plot(x, u, color='#003366', ls='--', lw=3)
pyplot.ylim(0, 2.5);
notebook/02_01_1DConvection.ipynb
MedievalSure/ToStudy
mit
Changing just one line of code in the solution of linear convection, we can now get the non-linear solution: the line that corresponds to the discrete equation now has un[i] in the place where before we just had c. So you could write something like: Python for n in range(1,nt): un = u.copy() for i in ran...
for n in range(1, nt):
    un = u.copy()
    u[1:] = un[1:] - un[1:]*dt/dx*(un[1:]-un[0:-1])
    u[0] = 1.0

pyplot.plot(x, u, color='#003366', ls='--', lw=3)
pyplot.ylim(0, 2.5);
notebook/02_01_1DConvection.ipynb
MedievalSure/ToStudy
mit
Hmm. That's quite interesting: like in the linear case, we see that we have lost the sharp sides of our initial square wave, but there's more. Now, the wave has also lost symmetry! It seems to be lagging on the rear side, while the front of the wave is steepening. Is this another form of numerical error, you ask? No...
from IPython.core.display import HTML
css_file = 'numericalmoocstyle.css'
HTML(open(css_file, "r").read())
notebook/02_01_1DConvection.ipynb
MedievalSure/ToStudy
mit
Next we're going to need a way to play the audio files we're working with (otherwise this wouldn't be very exciting at all, would it?). In the next bit of code I've defined a wavPlayer function that takes the signal and the sample rate and then creates a nice HTML5 web player right inline with the notebook.
from IPython.display import Audio
from IPython.display import display

def wavPlayer(data, rate):
    display(Audio(data, rate=rate))
doc/ipython-notebooks/ica/bss_audio.ipynb
geektoni/shogun
bsd-3-clause
Now that we can load and play wav files we actually need some wav files! I found the sounds from Starcraft to be a great source of wav files because they're short, interesting and remind me of my childhood. You can download Starcraft wav files here: http://wavs.unclebubby.com/computer/starcraft/ among other places on t...
# change to the shogun-data directory
import os
os.chdir(os.path.join(SHOGUN_DATA_DIR, 'ica'))

%matplotlib inline
import matplotlib.pyplot as plt

# load
fs1,s1 = load_wav('tbawht02.wav')  # Terran Battlecruiser - "Good day, commander."

# plot
plt.figure(figsize=(6.75,2))
plt.plot(s1)
plt.title('Signal 1')
plt.show()
...
doc/ipython-notebooks/ica/bss_audio.ipynb
geektoni/shogun
bsd-3-clause
Now let's load a second audio clip:
# load
fs2,s2 = load_wav('TMaRdy00.wav')  # Terran Marine - "You want a piece of me, boy?"

# plot
plt.figure(figsize=(6.75,2))
plt.plot(s2)
plt.title('Signal 2')
plt.show()

# player
wavPlayer(s2, fs2)
doc/ipython-notebooks/ica/bss_audio.ipynb
geektoni/shogun
bsd-3-clause
and a third audio clip:
# load
fs3,s3 = load_wav('PZeRdy00.wav')  # Protoss Zealot - "My life for Aiur!"

# plot
plt.figure(figsize=(6.75,2))
plt.plot(s3)
plt.title('Signal 3')
plt.show()

# player
wavPlayer(s3, fs3)
doc/ipython-notebooks/ica/bss_audio.ipynb
geektoni/shogun
bsd-3-clause
Now we've got our audio files loaded up into our example program. The next thing we need to do is mix them together! First another nuance - what if the audio clips aren't the same length? The solution I came up with for this was to simply resize them all to the length of the longest signal; the extra length will just be...
# Adjust for different clip lengths
fs = fs1
length = max([len(s1), len(s2), len(s3)])
s1 = np.resize(s1, (length,1))
s2 = np.resize(s2, (length,1))
s3 = np.resize(s3, (length,1))

S = (np.c_[s1, s2, s3]).T

# Mixing Matrix
#A = np.random.uniform(size=(3,3))
#A = A / A.sum(axis=0)
A = np.array([[1, 0.5, 0.5], ...
doc/ipython-notebooks/ica/bss_audio.ipynb
geektoni/shogun
bsd-3-clause
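The mixing step can be sketched in isolation: stack the sources as rows and left-multiply by a mixing matrix. The sources below are synthetic stand-ins for the wav signals, and the matrix values mirror the symmetric blend used above:

```python
import numpy as np

rng = np.random.RandomState(0)
length = 1000
S = rng.laplace(size=(3, length))     # three stand-in source signals, one per row

A = np.array([[1.0, 0.5, 0.5],        # mixing matrix: each row of X is a
              [0.5, 1.0, 0.5],        # different weighted blend of the sources
              [0.5, 0.5, 1.0]])
X = A @ S                             # observed (mixed) signals

print(X.shape)                        # (3, 1000): three microphones, same length
```

Since A is invertible, the sources are recoverable in principle; ICA's job is to recover them without knowing A.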
Now before we can work on separating these signals we need to get the data ready for Shogun; thankfully this is pretty easy!
# Convert to features for shogun
mixed_signals = sg.create_features((X).astype(np.float64))
doc/ipython-notebooks/ica/bss_audio.ipynb
geektoni/shogun
bsd-3-clause
Now let's unmix those signals! In this example I'm going to use an Independent Component Analysis (ICA) algorithm called JADE. JADE is one of the ICA algorithms available in Shogun and it works by performing Approximate Joint Diagonalization (AJD) on a 4th order cumulant tensor. I'm not going to go into a lot of detail o...
# Separating with JADE
jade = sg.create_transformer('Jade')
jade.fit(mixed_signals)
signals = jade.transform(mixed_signals)
S_ = signals.get('feature_matrix')

A_ = jade.get('mixing_matrix')
A_ = A_ / A_.sum(axis=0)
print('Estimated Mixing Matrix:')
print(A_)
doc/ipython-notebooks/ica/bss_audio.ipynb
geektoni/shogun
bsd-3-clause
That's all there is to it! Check out how nicely those signals have been separated and have a listen!
# Show separation results
gain = 4000
for i in range(S_.shape[0]):
    # Separated Signal i
    plt.figure(figsize=(6.75,2))
    plt.plot((gain*S_[i]).astype(np.int16))
    plt.title('Separated Signal %d' % (i+1))
    plt.show()
    wavPlayer((gain*S_[i]).astype(np.int16), fs)
doc/ipython-notebooks/ica/bss_audio.ipynb
geektoni/shogun
bsd-3-clause
There are various application handlers that can be used to build up Bokeh documents. For example, there is a ScriptHandler that uses the code from a .py file to produce Bokeh documents. This is the handler that is used when we run bokeh serve app.py. Here we are going to use the lesser-known FunctionHandler, that gets ...
def modify_doc(doc):
    data_url = "http://www.neracoos.org/erddap/tabledap/B01_sbe37_all.csvp?time,temperature&depth=1&temperature_qc=0&time>=2016-02-15&time<=2017-03-22"
    df = pd.read_csv(data_url, parse_dates=True, index_col=0)
    df = df.rename(columns={'temperature (celsius)': 'temperature'})
    df.index.nam...
examples/howto/server_embed/notebook_embed.ipynb
schoolie/bokeh
bsd-3-clause
We take the function above and configure a FunctionHandler with it. Then we create an Application that uses the handler. (It is possible, but uncommon, for Bokeh applications to have more than one handler.) The end result is that the Bokeh server will call modify_doc to build a new document for every new session that is op...
from bokeh.application.handlers import FunctionHandler
from bokeh.application import Application

handler = FunctionHandler(modify_doc)
app = Application(handler)
examples/howto/server_embed/notebook_embed.ipynb
schoolie/bokeh
bsd-3-clause
Now we can display our application using show:
show(app)
examples/howto/server_embed/notebook_embed.ipynb
schoolie/bokeh
bsd-3-clause
Universal Sentence Encoder <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td> <td> <a target="_blank"...
%%capture
!pip3 install seaborn
site/ko/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder.ipynb
tensorflow/docs-l10n
apache-2.0
More detailed information about installing Tensorflow can be found at https://www.tensorflow.org/install/.
#@title Load the Universal Sentence Encoder's TF Hub module
from absl import logging

import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import seaborn as sns

module_url = "https://tfhub.dev/google/universal-sentence-encoder/...
site/ko/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder.ipynb
tensorflow/docs-l10n
apache-2.0
Semantic textual similarity task example The embeddings produced by the Universal Sentence Encoder are approximately normalized. The semantic similarity of two sentences can therefore be computed simply as the inner product of their encodings.
def plot_similarity(labels, features, rotation):
    corr = np.inner(features, features)
    sns.set(font_scale=1.2)
    g = sns.heatmap(
        corr,
        xticklabels=labels,
        yticklabels=labels,
        vmin=0,
        vmax=1,
        cmap="YlOrRd")
    g.set_xticklabels(labels, rotation=rotation)
    g.set_title("Semantic Text...
site/ko/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder.ipynb
tensorflow/docs-l10n
apache-2.0
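The inner-product similarity matrix described above can be sketched with random stand-in embeddings (9 "sentences", a hypothetical 512-dimensional embedding size; real encoder outputs are only approximately normalized, so here we normalize explicitly):

```python
import numpy as np

rng = np.random.RandomState(0)
emb = rng.normal(size=(9, 512))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)  # unit-length embeddings

# Entry [i, j] is the inner product of the embeddings for sentences i and j.
corr = np.inner(emb, emb)
print(corr.shape)   # (9, 9)
```

Because the rows are unit vectors, the matrix is symmetric with ones on the diagonal, which is exactly what the heat map visualizes.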
Visualizing similarity Here we show the similarity as a heat map. The final graph is a 9x9 matrix, where each entry [i, j] is colored based on the inner product of the encodings for sentences i and j.
messages = [
    # Smartphones
    "I like my phone",
    "My phone is not good.",
    "Your cellphone looks great.",

    # Weather
    "Will it snow tomorrow?",
    "Recently a lot of hurricanes have hit the US",
    "Global warming is real",

    # Food and health
    "An apple a day, keeps the doctors away",
    "E...
site/ko/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder.ipynb
tensorflow/docs-l10n
apache-2.0
Evaluation: the Semantic Textual Similarity (STS) benchmark The STS benchmark provides an intrinsic evaluation of how well similarity scores computed with sentence embeddings agree with human judgments. The benchmark requires systems to return similarity scores for a varied selection of sentence pairs. Pearson correlation is then used to evaluate the quality of the machine similarity scores against human judgments. Downloading the data
import pandas
import scipy
import math
import csv

sts_dataset = tf.keras.utils.get_file(
    fname="Stsbenchmark.tar.gz",
    origin="http://ixa2.si.ehu.es/stswiki/images/4/48/Stsbenchmark.tar.gz",
    extract=True)
sts_dev = pandas.read_table(
    os.path.join(os.path.dirname(sts_dataset), "stsbenchmark", "sts-dev.cs...
site/ko/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder.ipynb
tensorflow/docs-l10n
apache-2.0
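The Pearson-correlation evaluation can be sketched with invented machine scores and human judgments (the real benchmark uses the STS sentence pairs loaded above; these five values are made up for illustration):

```python
import numpy as np
from scipy import stats

# Hypothetical machine similarity scores vs. human judgments for five sentence pairs
machine_scores = np.array([0.9, 0.1, 0.5, 0.7, 0.3])
human_scores = np.array([4.8, 0.5, 2.4, 3.9, 1.2])

# Pearson r measures how linearly the machine scores track human judgments
r, p_value = stats.pearsonr(machine_scores, human_scores)
print(r)
```

An r close to 1 means the embedding-based scores rank and scale sentence-pair similarity much as human annotators do.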
Evaluating the sentence embeddings
sts_data = sts_dev #@param ["sts_dev", "sts_test"] {type:"raw"}

def run_sts_benchmark(batch):
    sts_encode1 = tf.nn.l2_normalize(embed(tf.constant(batch['sent_1'].tolist())), axis=1)
    sts_encode2 = tf.nn.l2_normalize(embed(tf.constant(batch['sent_2'].tolist())), axis=1)
    cosine_similarities = tf.reduce_sum(tf.multip...
site/ko/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder.ipynb
tensorflow/docs-l10n
apache-2.0
HELPERS:
# Global vars
DATA_DIR = 'D:/larc_projects/job_analytics/data/clean/'
RES_DIR = 'd:/larc_projects/job_analytics/results/'
AGG_DIR = RES_DIR + 'agg/'
FIG_DIR = RES_DIR + 'figs/'

apps = pd.read_csv(DATA_DIR + 'apps_with_time.csv')
apps.shape

# Rm noise (numbers) in job_title column
apps['is_number'] = map(is_number, ap...
.ipynb_checkpoints/user_apply_job-checkpoint.ipynb
musketeer191/job_analytics
gpl-3.0
Basic statistics
n_applicant = apps['uid'].nunique(); n_application = apps.shape[0]
n_job = len(np.unique(apps['job_id'])); n_job_title = len(np.unique(apps['job_title']))
n_company = posts['company_registration_number_uen_ep'].nunique()

stats = pd.DataFrame({'n_application': n_application, 'n_applicant': n_applicant, ...
.ipynb_checkpoints/user_apply_job-checkpoint.ipynb
musketeer191/job_analytics
gpl-3.0
Applicant-apply-Job matrix A. Number of times an applicant applies for a specific job title (position).
agg_apps = apps.groupby(by=['uid', 'job_title']).agg({'job_id': 'nunique', 'apply_date': 'nunique'})
# convert to DF
agg_apps = agg_apps.add_prefix('n_').reset_index()
agg_apps['n_apply'] = agg_apps['n_job_id']
agg_apps.head(3)
.ipynb_checkpoints/user_apply_job-checkpoint.ipynb
musketeer191/job_analytics
gpl-3.0
Let's look at the quartiles of the number of times an applicant applies for a specific job.
quantile(df['n_apply'])
.ipynb_checkpoints/user_apply_job-checkpoint.ipynb
musketeer191/job_analytics
gpl-3.0
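The quantile helper is not defined in this excerpt; pandas' built-in Series.quantile computes the same quartiles directly. A sketch with made-up application counts:

```python
import pandas as pd

# Hypothetical per-(applicant, job title) application counts, including one extreme case
n_apply = pd.Series([1, 1, 1, 1, 2, 2, 3, 5, 10, 582])

# The three quartiles: 25%, 50% (median), and 75%
print(n_apply.quantile([0.25, 0.5, 0.75]).to_dict())
```

The median stays small even with an extreme outlier in the tail, which is why the text below also inspects the full distribution of $N_{apply}$.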
As expected, in most cases (at least 50%), an applicant applies just once for a specific job. However, we can also see at least one extreme case where an applicant applied 582 times for a single job title. Thus, let's look more closely at the distribution of $N_{apply}$.
plt.hist(df['n_apply'], bins=np.unique(df['n_apply']), log=True)
plt.xlabel(r'$N_{apply}$')
plt.ylabel('# applicant-job pairs (log scale)')
plt.savefig(DATA_DIR + 'apply_freq.pdf')
plt.show()
plt.close()
.ipynb_checkpoints/user_apply_job-checkpoint.ipynb
musketeer191/job_analytics
gpl-3.0
From the histogram, we can see that there are cases where a user applies for a job title at least 100 times. Let's look more closely at those extreme cases. Extreme cases (a user applies for the same job title at least 100 times)
extremes = agg_apps.query('n_apply >= 100')
extremes.sort_values(by='n_apply', ascending=False, inplace=True)
extremes.head()
print('No. of extreme cases: {}'.format(extremes.shape[0]))
.ipynb_checkpoints/user_apply_job-checkpoint.ipynb
musketeer191/job_analytics
gpl-3.0
To get a more complete picture on these extreme cases, let's put in apply dates and companies of those jobs. Get dates and compute duration of extreme applications:
# ext_users = np.unique(extremes['uid'])
df = apps[apps['uid'].isin(extremes['uid'])]
df = df[df['job_title'].isin(extremes['job_title'])]
ext_apps = df
ext_apps.head(1)

res = calDuration(ext_apps)
res = pd.merge(res, extremes, left_index=True, right_on=['uid', 'job_title'])
res.sort_values(by='uid', inplace=True)
r...
.ipynb_checkpoints/user_apply_job-checkpoint.ipynb
musketeer191/job_analytics
gpl-3.0
Dates/duration of all applications:
apps_with_duration = calDuration(apps)
apps_with_duration.head()

all_res = pd.merge(apps_with_duration, agg_apps, left_index=True, right_on=['uid', 'job_title'])
all_res.sort_values(by='uid', inplace=True)
all_res = all_res[['uid', 'job_title', 'n_apply', 'first_apply_date', 'last_apply_date', 'n_active_day', 'total_...
.ipynb_checkpoints/user_apply_job-checkpoint.ipynb
musketeer191/job_analytics
gpl-3.0
B. Number of different job titles an applicant applies for
agg_job_title = apps[['uid', 'job_title']].groupby('uid').agg({'job_title': 'nunique'})
agg_job_title = agg_job_title.add_prefix('n_').reset_index()
agg_job_title.sort_values('n_job_title', ascending=False, inplace=True)
# agg_job_title.head()

agg_job_id = apps[['uid', 'job_id']].groupby('uid').agg({'job_id': 'nuniqu...
.ipynb_checkpoints/user_apply_job-checkpoint.ipynb
musketeer191/job_analytics
gpl-3.0
C. Number of companies an applicant applies to Merge the necessary files to get a full dataset
posts = pd.read_csv(DATA_DIR + 'full_job_posts.csv')
print(posts.shape)
posts = dot2dash(posts)
posts.head()

# Extract just job id and employer id
job_and_employer = posts[['job_id', 'company_registration_number_uen_ep']].drop_duplicates()
job_and_employer.head(1)

# Load employer details (names, desc,...)
employer_d...
.ipynb_checkpoints/user_apply_job-checkpoint.ipynb
musketeer191/job_analytics
gpl-3.0
D. Number of (job title, company) pairs an applicant applies for
tmp = df[['uid', 'job_title', 'reg_no_uen_ep', 'organisation_name_ep']]
tmp['n_apply'] = ''
apps_by_job_comp = tmp.groupby(['uid', 'job_title', 'reg_no_uen_ep', 'organisation_name_ep']).count()
apps_by_job_comp = apps_by_job_comp.reset_index()
apps_by_job_comp.sort_values('n_apply', ascending=False, inplace=True)
prin...
.ipynb_checkpoints/user_apply_job-checkpoint.ipynb
musketeer191/job_analytics
gpl-3.0
Importing all the data
train_dataset, train_labels, valid_dataset, valid_labels, test_dataset, test_labels = get_data_4d()
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
src/tutorials/notMNIST.ipynb
felipessalvatore/CNNexample
mit
Visualizing some examples
train_classes = np.argmax(train_labels, axis=1)
train_classes = [chr(i + ord('A')) for i in train_classes]
img_size = 28
img_shape = (img_size, img_size)
images = train_dataset[0:9]
cls_true = train_classes[0:9]
plot9images(images, cls_true, img_shape)
src/tutorials/notMNIST.ipynb
felipessalvatore/CNNexample
mit
The hyperparameters of the model are
my_config = Config()
print("batch_size = {}".format(my_config.batch_size))
print("patch_size = {}".format(my_config.patch_size))
print("image_size = {}".format(my_config.image_size))
print("num_labels = {}".format(my_config.num_labels))
print("num_channels = {}".format(my_config.num_channels))
print("num_filters_1 = {}...
src/tutorials/notMNIST.ipynb
felipessalvatore/CNNexample
mit
Now, training the model using 10001 steps
my_dataholder = DataHolder(train_dataset, train_labels,
                           valid_dataset, valid_labels,
                           test_dataset, test_labels)
my_model = CNNModel(my_config, my_dataholder)
train_model(my_model, my_da...
src/tutorials/notMNIST.ipynb
felipessalvatore/CNNexample
mit
Checking the trained model with the test dataset
print("Test accuracy: %.2f%%" % (check_test(my_model) * 100))
src/tutorials/notMNIST.ipynb
felipessalvatore/CNNexample
mit
Seeing how the model performs on 9 images from the validation dataset
randomize_in_place(valid_dataset, valid_labels, 0)
valid_classes = np.argmax(valid_labels, axis=1)
valid_classes = [chr(i + ord('A')) for i in valid_classes]
cls_true = valid_classes[0:9]
images = valid_dataset[0:9]
images = [image.reshape(1, image.shape[0], image.shape[1...
src/tutorials/notMNIST.ipynb
felipessalvatore/CNNexample
mit
Parameter estimation for the categorical distribution The probability of each trial $x_i$ follows the categorical distribution $$ P(x | \theta ) = \text{Cat}(x | \theta) = \prod_{k=1}^K \theta_k^{x_k} $$ $$ \sum_{k=1}^K \theta_k = 1 $$ Given $N$ samples, the likelihood is $$ L = P(x_{1:N}|\theta) = \prod_{i=1}^N \prod_{k=1}^K \theta_k^{x_{i,k}} $$ and the log-likelihood $$ \begin{eqnarray} \log L &=& \log P(x_...
theta0 = np.array([0.1, 0.3, 0.6])
x = np.random.choice(np.arange(3), 1000, p=theta0)
N0, N1, N2 = np.bincount(x, minlength=3)
N = N0 + N1 + N2
theta = np.array([N0, N1, N2]) / N
theta
Lecture/11. 추정 및 검정/6) MLE 모수 추정의 예.ipynb
junhwanjang/DataSchool
mit
Parameter estimation for the normal distribution The probability of each trial $x_i$ follows the Gaussian normal distribution $$ P(x | \theta ) = N(x | \mu, \sigma^2) = \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp \left(-\dfrac{(x-\mu)^2}{2\sigma^2}\right) $$ Given $N$ samples, the likelihood is $$ L = P(x_{1:N}|\theta) = \prod_{i=1}^N \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp \left(-\dfrac{(x_i-\mu)^2}{2\sigma^2}\rig...
mu0 = 1
sigma0 = 2
x = sp.stats.norm(mu0, sigma0).rvs(1000)
xbar = x.mean()
s2 = x.std(ddof=1)
xbar, s2
Lecture/11. 추정 및 검정/6) MLE 모수 추정의 예.ipynb
junhwanjang/DataSchool
mit
Parameter estimation for the multivariate normal distribution (MLE for the Multivariate Gaussian Normal Distribution) The probability of each trial $x_i$ follows the multivariate normal distribution $$ P(x | \theta ) = N(x | \mu, \Sigma) = \dfrac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp \left( -\dfrac{1}{2} (x-\mu)^T \Sigma^{-1} (x-\mu) \right) $$ Given $N$ samples, the likelihood is $$ L = P(x_{1:N}|\theta) = \prod_{i=1}^N \d...
mu0 = np.array([0, 1])
sigma0 = np.array([[1, 0.2], [0.2, 4]])
x = sp.stats.multivariate_normal(mu0, sigma0).rvs(1000)
xbar = x.mean(axis=0)
S2 = np.cov(x, rowvar=0)
print(xbar)
print(S2)
Lecture/11. 추정 및 검정/6) MLE 모수 추정의 예.ipynb
junhwanjang/DataSchool
mit
<br/> Init the Watson Visual Recognition Python Library You may need to install the SDK first: !pip install --upgrade watson-developer-cloud You will need the API key from the Watson Visual Recognition service
vr = VisualRecognitionV3(apiVer, api_key=apiKey)
tutorials/Step_4_Classify_with_WatsonVR_old.ipynb
setiQuest/ML4SETI
apache-2.0
<br/> Look For an Existing Custom Classifier Use an existing custom classifier (and update it) if one exists; otherwise a new custom classifier will be created
## View all of your classifiers
classifiers = vr.list_classifiers()
print json.dumps(classifiers, indent=2)

## Run this cell ONLY IF you want to REMOVE all classifiers
# Otherwise, the subsequent cell will append images to the `classifier_prefix` classifier
classifiers = vr.list_classifiers()
for c in classifiers['cl...
tutorials/Step_4_Classify_with_WatsonVR_old.ipynb
setiQuest/ML4SETI
apache-2.0
<br/> Send the Images Archives to the Watson Visual Recognition Service for Training https://www.ibm.com/watson/developercloud/doc/visual-recognition/customizing.html https://www.ibm.com/watson/developercloud/visual-recognition/api/v3/ https://github.com/watson-developer-cloud/python-sdk
squiggle = sorted(glob.glob('{}/classification_*_squiggle.zip'.format(mydatafolder)))
narrowband = sorted(glob.glob('{}/classification_*_narrowband.zip'.format(mydatafolder)))
narrowbanddrd = sorted(glob.glob('{}/classification_*_narrowbanddrd.zip'.format(mydatafolder)))
noise = sorted(glob.glob('{}/classification_*_no...
tutorials/Step_4_Classify_with_WatsonVR_old.ipynb
setiQuest/ML4SETI
apache-2.0
<br/> Take a Random Data File for Testing Take a random data file from the test set and create a spectrogram image
zz = zipfile.ZipFile(mydatafolder + '/' + 'testset_narrowband.zip')
test_list = zz.namelist()
randomSignal = zz.open(test_list[10],'r')

from IPython.display import Image
squigImg = randomSignal.read()
Image(squigImg)

# note - have to 'open' this again because it was already .read() out in the line above
randomSignal ...
tutorials/Step_4_Classify_with_WatsonVR_old.ipynb
setiQuest/ML4SETI
apache-2.0
<br/> Run the Complete Test Set
#Create a dictionary object to store results from Watson from collections import defaultdict class_list = ['squiggle', 'noise', 'narrowband', 'narrowbanddrd'] results_group_by_class = {} for classification in class_list: results_group_by_class[classification] = defaultdict(list) failed_to_classify_uuid_list...
tutorials/Step_4_Classify_with_WatsonVR_old.ipynb
setiQuest/ML4SETI
apache-2.0
Generate CSV file for Scoreboard Here's an example of what the CSV file should look like for submission to the scoreboard, although in this case we only have 4 classes instead of 7. NOTE: This uses the PNG files created in the Step 3 notebook, which only contain the BASIC4 data set. The code challenge and hackathon w...
import csv my_output_results = my_team_name_data_folder + '/' + 'watson_scores.csv' with open(my_output_results, 'w') as csvfile: fwriter = csv.writer(csvfile, delimiter=',') for row in class_scores: fwriter.writerow([row[0]] + row[2]) !cat my_team_name_data_folder/watson_scores.csv
tutorials/Step_4_Classify_with_WatsonVR_old.ipynb
setiQuest/ML4SETI
apache-2.0
Loading data and model Initialise, loading the settings and the test dataset we're going to be using:
cd .. settings = neukrill_net.utils.Settings("settings.json") run_settings = neukrill_net.utils.load_run_settings( "run_settings/alexnet_based_norm_global_8aug.json", settings, force=True) %%time # loading the model model = pylearn2.utils.serial.load(run_settings['pickle abspath']) reload(neukrill_net.dense_data...
notebooks/Holdout testing for Pylearn2 Pickles.ipynb
Neuroglycerin/neukrill-net-work
mit
Setting up forward pass Now we've loaded the data and the model we're going to set up a forward pass through the data in the same way we do it in the test.py script: pick a batch size, compile a Theano function and then iterate over the whole dataset in batches, filling an array of predictions.
# find allowed batch size over 1000 (want big batches) # (Theano has to have fixed batch size and we don't want leftover) batch_size=1000 while dataset.X.shape[0]%batch_size != 0: batch_size += 1 n_batches = int(dataset.X.shape[0]/batch_size) %%time # set this batch size model.set_batch_size(batch_size) # compile...
notebooks/Holdout testing for Pylearn2 Pickles.ipynb
Neuroglycerin/neukrill-net-work
mit
Compute probabilities The following is the same as the code in test.py that applies the processing.
%%time y = np.zeros((dataset.X.shape[0],len(settings.classes))) for i in xrange(n_batches): print("Batch {0} of {1}".format(i+1,n_batches)) x_arg = dataset.X[i*batch_size:(i+1)*batch_size,:] if dataset.X.ndim > 2: x_arg = dataset.get_topological_view(x_arg) y[i*batch_size:(i+1)*batch_size,:] = (f(x_arg....
notebooks/Holdout testing for Pylearn2 Pickles.ipynb
Neuroglycerin/neukrill-net-work
mit
Of course, it's strange that there are any zeros at all. Hopefully they'll go away when we start averaging. Score before averaging We can score the model before averaging by just using the class labels as they were going to be used for training. Using Sklearn's utility for calculating log_loss:
import sklearn.metrics sklearn.metrics.log_loss(dataset.y,y)
notebooks/Holdout testing for Pylearn2 Pickles.ipynb
Neuroglycerin/neukrill-net-work
mit
Score after averaging In test.py we take the least intelligent approach to dealing with averaging over the different augmented versions. Basically, we just assume that whatever the augmentation factor is, the labels must repeat over that step size, so we can just collapse those into a single vector of probabilities. Fi...
# augmentation factor af = 8 for low,high in zip(range(0,dataset.y.shape[0],af),range(af,dataset.y.shape[0]+af,af)): first = dataset.y[low][0] if any(first != i for i in dataset.y[low:high].ravel()): print("Labels do not match at:", (low,high)) break y_collapsed = np.zeros((int(dataset.X.shape...
notebooks/Holdout testing for Pylearn2 Pickles.ipynb
Neuroglycerin/neukrill-net-work
mit
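The block-wise collapsing above can also be expressed as a single reshape-and-mean; a minimal sketch assuming (as in the loop above) that the `af` augmented copies of each input occupy contiguous rows of the prediction array:

```python
import numpy as np

# Toy data: 2 distinct inputs, af = 4 augmented copies each, 3 classes.
af = 4
y = np.arange(2 * af * 3, dtype=float).reshape(2 * af, 3)

# Average every block of `af` consecutive rows into one row of probabilities.
y_collapsed = y.reshape(-1, af, y.shape[1]).mean(axis=1)

print(y_collapsed.shape)  # (2, 3)
```

This avoids the explicit Python loop over blocks and makes the averaging step a single vectorized operation.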
There are no zeros in there now!
labels_collapsed = dataset.y[range(0,dataset.y.shape[0],af)] labels_collapsed.shape sklearn.metrics.log_loss(labels_collapsed,y_collapsed)
notebooks/Holdout testing for Pylearn2 Pickles.ipynb
Neuroglycerin/neukrill-net-work
mit
Part 1 A programming language makes it possible to describe very simple operations on data with precision. Like any language, it has a grammar and keywords. The complexity of a program comes from the fact that many simple operations are needed to reach one's goal. Let's look at a few simple usages. ...
x = 5 y = 10 z = x + y print(z) # displays z
_doc/notebooks/td1a/td1a_cenonce_session1.ipynb
sdpython/ensae_teaching_cs
mit
A program is often used to automate a computation, such as the monthly unemployment rate, the inflation rate, or tomorrow's weather... To repeat the same computation on different values, you must be able to describe the computation without knowing what those values are. A simple way is to name them: ...
x = 2 y = x + 1 print(y) x += 5 print(x)
_doc/notebooks/td1a/td1a_cenonce_session1.ipynb
sdpython/ensae_teaching_cs
mit
When programming, you spend your time writing computations on variables and storing the results in other variables, or even in the same ones. Writing y = x + 5 means adding 5 to x and storing the result in y. Writing x += 5 means adding 5 to x and ...
a = 0 for i in range(0, 10): a = a + i # this line is repeated ten times print(a)
_doc/notebooks/td1a/td1a_cenonce_session1.ipynb
sdpython/ensae_teaching_cs
mit
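To make the difference concrete, here is a minimal illustration of the two forms discussed above:

```python
x = 2
y = x + 5    # compute x + 5 and store the result in a NEW variable y
print(x, y)  # x is unchanged: 2 7

x += 5       # add 5 to x and store the result back into x itself
print(x)     # 7
```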
The print keyword has no effect on the program itself. However, it displays the state of a variable at the moment the print instruction is executed. Branching, or tests
a = 10 if a > 0: print(a) # only one of the two blocks is executed else: a -= 1 print(a)
_doc/notebooks/td1a/td1a_cenonce_session1.ipynb
sdpython/ensae_teaching_cs
mit
Character strings
a = 10 print(a) # what is the difference print("a") # between these two lines s = "texte" s += "c" print(s)
_doc/notebooks/td1a/td1a_cenonce_session1.ipynb
sdpython/ensae_teaching_cs
mit
Every value has a type, which determines the operations you can perform on it. 2 + 2 makes 4 for everyone. 2 + "2" makes four for a human, but is incomprehensible to the computer, because it adds two things of different kinds (apples and oranges).
print("2" + "3") print(2+3)
_doc/notebooks/td1a/td1a_cenonce_session1.ipynb
sdpython/ensae_teaching_cs
mit
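When you genuinely need to combine a number and a string, the fix is an explicit conversion so that both operands have the same type; a small illustrative sketch:

```python
# Convert the string to an integer to do arithmetic ...
print(int("2") + 3)  # 5

# ... or convert the integer to a string to do concatenation.
print(str(2) + "3")  # 23
```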
Part 2 In this second part, the goal is to work out why a program does not do what it is supposed to do, or why it raises an error, and, if possible, to fix that error. An omission
a = 5 a + 4 print(a) # we would like to see 9 but 5 appears
_doc/notebooks/td1a/td1a_cenonce_session1.ipynb
sdpython/ensae_teaching_cs
mit
A syntax error
a = 0 for i in range(0, 10) # something is missing on this line a = a + i print(a)
_doc/notebooks/td1a/td1a_cenonce_session1.ipynb
sdpython/ensae_teaching_cs
mit
Another syntax error
a = 0 for i in range (0, 10): a = a + i # look carefully print(a)
_doc/notebooks/td1a/td1a_cenonce_session1.ipynb
sdpython/ensae_teaching_cs
mit
A forbidden operation
a = 0 s = "e" print(a + s) # a small type problem
_doc/notebooks/td1a/td1a_cenonce_session1.ipynb
sdpython/ensae_teaching_cs
mit
An odd number of...
a = 0 for i in range (0, 10) : a = (a + (i+2)*3 ) # count carefully print(a)
_doc/notebooks/td1a/td1a_cenonce_session1.ipynb
sdpython/ensae_teaching_cs
mit
Part 3 Now it is time to write three programs: Write a program that computes the sum of the squares of the first 10 integers. Write a program that computes the sum of the squares of the first 5 odd integers. Write a program that computes the sum of the first 10 factorials: $\sum_{i=1}^{10} i!$. ...
14%2, 233%2
_doc/notebooks/td1a/td1a_cenonce_session1.ipynb
sdpython/ensae_teaching_cs
mit
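As a hint for the exercises, here is one possible sketch for the first program (the sum of the squares of the first 10 integers); it is only one of many valid solutions:

```python
# Sum 1**2 + 2**2 + ... + 10**2 with an accumulator variable.
total = 0
for i in range(1, 11):
    total += i ** 2
print(total)  # 385
```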
Tutor Magic This tool lets you visualize how (small) programs execute step by step (original site: pythontutor.com).
%load_ext tutormagic %%tutor --lang python3 a = 0 for i in range (0, 10): a = a + i
_doc/notebooks/td1a/td1a_cenonce_session1.ipynb
sdpython/ensae_teaching_cs
mit
LightGBM Gradient boosting is a machine learning technique that produces a prediction model in the form of an ensemble of weak classifiers, optimizing for a differentiable loss function. One of the most popular types of gradient boosting is gradient boosted trees, which internally is made up of an ensemble of weak decis...
def get_data(): file_path = 'adult.csv' if not os.path.isfile(file_path): def chunks(input_list, n_chunk): """take a list and break it up into n-size chunks""" for i in range(0, len(input_list), n_chunk): yield input_list[i:i + n_chunk] columns = [ ...
trees/lightgbm.ipynb
ethen8181/machine-learning
mit
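As a toy illustration of the boosting idea described above, the sketch below fits each new weak learner (a shallow regression tree) to the residuals of the current ensemble under squared-error loss. This is a generic sketch of gradient boosting, not LightGBM's histogram-based implementation:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy regression problem.
rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel()

learning_rate = 0.1
pred = np.zeros_like(y)  # start from a constant zero model
for _ in range(100):
    residual = y - pred                      # negative gradient of squared error
    tree = DecisionTreeRegressor(max_depth=2, random_state=0)
    tree.fit(X, residual)                    # weak learner fit to residuals
    pred += learning_rate * tree.predict(X)  # shrunken additive update

print(np.mean((y - pred) ** 2))  # training MSE shrinks as trees are added
```

Each round only needs to correct what the ensemble so far gets wrong, which is why even very weak trees combine into a strong model.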
We'll perform very little feature engineering, as that's not our main focus here. The following code chunk only one-hot encodes the categorical features. There will be follow-up discussions on this in a later section.
from sklearn.preprocessing import OneHotEncoder one_hot_encoder = OneHotEncoder(sparse=False, dtype=np.int32) one_hot_encoder.fit(df_train[cat_cols]) cat_one_hot_cols = one_hot_encoder.get_feature_names(cat_cols) print('number of one hot encoded categorical columns: ', len(cat_one_hot_cols)) cat_one_hot_cols[:5] def...
trees/lightgbm.ipynb
ethen8181/machine-learning
mit
Benchmarking The next section compares the xgboost and lightgbm implementations in terms of both execution time and model performance. There are a bunch of other hyperparameters that we as the end user can specify, but here we explicitly set arguably the most important ones.
time.sleep(5) lgb = LGBMClassifier( n_jobs=-1, max_depth=6, subsample=1, n_estimators=100, learning_rate=0.1, colsample_bytree=1, objective='binary', boosting_type='gbdt') start = time.time() lgb.fit(df_train_one_hot, y_train) lgb_elapse = time.time() - start print('elapse:, ', lgb_ela...
trees/lightgbm.ipynb
ethen8181/machine-learning
mit
XGBoost includes a tree_method='hist' option that buckets continuous variables into bins to speed up training. We also set grow_policy='lossguide' to favor splitting at the nodes with the highest loss change, which mimics LightGBM.
time.sleep(5) xgb_hist = XGBClassifier( n_jobs=-1, max_depth=6, subsample=1, n_estimators=100, learning_rate=0.1, colsample_bytree=1, objective='binary:logistic', booster='gbtree', tree_method='hist', grow_policy='lossguide') start = time.time() xgb_hist.fit(df_train_one_hot, y...
trees/lightgbm.ipynb
ethen8181/machine-learning
mit
From the resulting table, we can see that there isn't a noticeable difference in auc score between the two implementations. On the other hand, there is a significant difference in the time it takes to finish the whole training procedure. This is a huge advantage and makes LightGBM a much better approach when dealing wi...
ordinal_encoder = OrdinalEncoder(dtype=np.int32) ordinal_encoder.fit(df_train[cat_cols]) def preprocess_ordinal(df, ordinal_encoder, cat_cols, cat_dtype='int32'): df = df.copy() df[cat_cols] = ordinal_encoder.transform(df[cat_cols]) df[cat_cols] = df[cat_cols].astype(cat_dtype) return df df_train_ordi...
trees/lightgbm.ipynb
ethen8181/machine-learning
mit
From the result above, we can see that it requires even less training time without sacrificing any performance. What's more, we no longer need to perform one-hot encoding on our categorical features. The code chunk below shows this is highly advantageous from a memory-usage perspective when ...
print('OneHot Encoding') print('number of columns: ', df_train_one_hot.shape[1]) print('memory usage: ', df_train_one_hot.memory_usage(deep=True).sum()) print() print('Ordinal Encoding') print('number of columns: ', df_train_ordinal.shape[1]) print('memory usage: ', df_train_ordinal.memory_usage(deep=True).sum()) # p...
trees/lightgbm.ipynb
ethen8181/machine-learning
mit
Contents Overview General set-up Hugging Face BERT models and tokenizers BERT featurization with Hugging Face Simple feed-forward experiment A feed-forward experiment with the sst module An RNN experiment with the sst module BERT fine-tuning with Hugging Face HfBertClassifier HfBertClassifier experiment Overview ...
import os from sklearn.metrics import classification_report import torch import torch.nn as nn import transformers from transformers import BertModel, BertTokenizer from torch_shallow_neural_classifier import TorchShallowNeuralClassifier from torch_rnn_classifier import TorchRNNModel from torch_rnn_classifier import T...
finetuning.ipynb
cgpotts/cs224u
apache-2.0
The transformers library does a lot of logging. To avoid ending up with a cluttered notebook, I am changing the logging level. You might want to skip this as you scale up to building production systems, since the logging is very good – it gives you a lot of insights into what the models and code are doing.
transformers.logging.set_verbosity_error()
finetuning.ipynb
cgpotts/cs224u
apache-2.0