Internal Rate of Return (IRR) criterion. $r^*$ -- the internal rate of return (IRR) is the rate that makes the present value equal to zero. $$PV(r^*) = \sum_{t=0}^N \frac{F_t}{(1+r^*)^t} ~=~0$$ Example.-- Compute the IRR for the previous example.
cf.irr(cflo)
2016-03/IE-06-analisis.ipynb
jdvelasq/ingenieria-economica
mit
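The `cf.irr` call above belongs to the notebook's cashflow library. As a standalone sketch of the same root-finding condition $PV(r^*)=0$, a simple bisection works; the bracket and the cash-flow list below are illustrative assumptions.

```python
# Sketch: IRR as the root of PV(r) = sum_t F_t / (1+r)^t, found by bisection.
def present_value(cashflows, r):
    return sum(f / (1 + r) ** t for t, f in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    # assumes PV is positive at lo and negative at hi (one sign change)
    for _ in range(200):
        mid = (lo + hi) / 2
        if present_value(cashflows, mid) > 0:
            lo = mid  # PV still positive: the rate must rise
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Invest 100 today, receive 110 in one period: IRR is 10%
print(round(irr([-100, 110]), 4))  # → 0.1
```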
Modified Internal Rate of Return (MIRR). <img src="images/mirr.png" width=650> $$(1+MIRR)^N = \frac{\sum_{t=0}^N \max(F_t,0) \, (1+r_1)^{N-t}}{-\sum_{t=0}^N \min(F_t,0) \, (1+r_2)^{-t}}$$ $r_1$ -- reinvestment rate. $r_2$ -- financing rate. (The minus sign in the denominator makes the ratio positive, since the $\min$ terms are negative.)
cf.mirr(cflo=cflo, finance_rate=0, reinvest_rate=0)
## the function can receive several cash flows simultaneously
cf.irr([cflo, cflo, cflo])
2016-03/IE-06-analisis.ipynb
jdvelasq/ingenieria-economica
mit
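The MIRR formula above can also be evaluated directly. This is a minimal standalone sketch of that formula (`cf.mirr` is the notebook library's version; the cash flows and rates here are illustrative):

```python
# Sketch of MIRR: positive flows compounded forward at the reinvestment rate,
# negative flows discounted back at the financing rate.
def mirr(cashflows, finance_rate, reinvest_rate):
    n = len(cashflows) - 1  # number of periods
    fv_pos = sum(max(f, 0) * (1 + reinvest_rate) ** (n - t)
                 for t, f in enumerate(cashflows))
    pv_neg = sum(min(f, 0) * (1 + finance_rate) ** (-t)
                 for t, f in enumerate(cashflows))
    # pv_neg is negative, so negate it to get a positive ratio
    return (fv_pos / -pv_neg) ** (1 / n) - 1

# Invest 100, receive 50 then 60, both rates at 10%:
# FV of inflows = 50*1.1 + 60 = 115, so (1+MIRR)^2 = 115/100
print(round(mirr([-100, 50, 60], 0.1, 0.1), 5))
```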
Sensitivity analysis. It is performed by varying one of the problem's variables to determine its effect on the chosen indicator. Example.-- Consider a project with the following information: Years of operation: 10. Years of construction: 1. Production: 1000 units per year. Sale price: \$ 10 per unit. Co...
## build a function that receives the relevant information and returns the npv
def project(marr, produccion, precio, costo, inversion):
    ingre = cf.cashflow(const_value=0, nper=11, spec = [(t, precio * producci...
2016-03/IE-06-analisis.ipynb
jdvelasq/ingenieria-economica
mit
Data Pre-processing Now we need to do data cleaning and preprocessing on the raw data. Note that this part can vary for different datasets. For the stock price data we are using, we apply normalization so that the normalized stock prices fall in the range of 0 to 1. Here we aim to use historical values to predict...
from zoo.chronos.data import TSDataset
from sklearn.preprocessing import MinMaxScaler

df = data[["ds", "y"]]
df_train = df[:-24]
df_test = df[-24:]

tsdata_train = TSDataset.from_pandas(df_train, dt_col="ds", target_col="y")
tsdata_test = TSDataset.from_pandas(df_test, dt_col="ds", target_col="y")

minmax_scaler = Min...
pyzoo/zoo/chronos/use-case/fsi/stock_prediction_prophet.ipynb
intel-analytics/analytics-zoo
apache-2.0
ProphetForecaster Demonstration Here we provide a simple demonstration of basic operations with the ProphetForecaster.
from zoo.chronos.forecaster.prophet_forecaster import ProphetForecaster

model = ProphetForecaster()
val_mse = model.fit(data=train_data, validation_data=validation_data)['mse']
print(f"Validation MSE = {val_mse}")

# Plot predictions
pred = model.predict(ds_data=validation_data[["ds"]])
import matplotlib.pyplot as pl...
pyzoo/zoo/chronos/use-case/fsi/stock_prediction_prophet.ipynb
intel-analytics/analytics-zoo
apache-2.0
AutoProphet Demonstration Here we provide a demonstration of our AutoProphet AutoEstimator, which can automatically search for the best hyperparameters for the model.
from zoo.chronos.autots.model.auto_prophet import AutoProphet
from zoo.orca.automl import hp
from zoo.orca import init_orca_context

init_orca_context(cores=10, init_ray_on_spark=True)

%%time
auto_prophet = AutoProphet()
auto_prophet.fit(data=train_data, cross_validation=False, freq=...
pyzoo/zoo/chronos/use-case/fsi/stock_prediction_prophet.ipynb
intel-analytics/analytics-zoo
apache-2.0
Transforming an input to a known output
input = [[-1], [0], [1], [2], [3], [4]]
output = [[2], [1], [0], [-1], [-2], [-3]]

import matplotlib.pyplot as plt
plt.xlabel('input')
plt.ylabel('output')
plt.plot(input, output, 'kX')
notebooks/tensorflow/tf_low_level_training.ipynb
DJCordhose/ai
mit
relation between input and output is linear
plt.plot(input, output)
plt.plot(input, output, 'ro')

x = tf.constant(input, dtype=tf.float32)
y_true = tf.constant(output, dtype=tf.float32)
y_true
notebooks/tensorflow/tf_low_level_training.ipynb
DJCordhose/ai
mit
Defining the model to train An untrained single unit (neuron) also outputs a line from the same input, although a different one. The Artificial Neuron: Foundation of Deep Neural Networks (simplified, more later) a neuron takes a number of numerical inputs, multiplies each with a weight, sums up all weighted inputs and adds a bias ...
# short version, though harder to inspect
# y_pred = tf.layers.dense(inputs=x, units=1)

# matrix multiplication under the hood
# tf.matmul(x, w) + b

linear_model = tf.layers.Dense(units=1)
y_pred = linear_model(x)
y_pred

# single neuron and single input: one weight and one bias
# weights and biases are represented as...
notebooks/tensorflow/tf_low_level_training.ipynb
DJCordhose/ai
mit
Output of a single untrained neuron
# when you execute this cell, you should see a different line, as the initialization is random
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    output_pred = sess.run(y_pred)
    print(output_pred)
    weights = sess.run(linear_model.trainable_weights)
    print(weights)
    plt.plot(input, output_pre...
notebooks/tensorflow/tf_low_level_training.ipynb
DJCordhose/ai
mit
Loss - Mean Squared Error The loss function is the prerequisite to training: we need an objective to optimize for. We calculate the difference between what we get as output and what we would like to get. Mean Squared Error $MSE = \frac{1}{n}\sum_{i=1}^{n}(Y_i - \hat{Y}_i)^2$ https://en.wikipedia.org/wiki/Mean_squa...
loss = tf.losses.mean_squared_error(labels=y_true, predictions=y_pred)
loss

# when this loss is zero (which it is not right now) we get the desired output
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(loss))
notebooks/tensorflow/tf_low_level_training.ipynb
DJCordhose/ai
mit
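`tf.losses.mean_squared_error` hides a very small computation. A plain-Python sketch of the formula above (the inputs are illustrative):

```python
# Sketch: MSE = (1/n) * sum_i (Y_i - Yhat_i)^2
def mse(y_true, y_pred):
    n = len(y_true)
    return sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred)) / n

# each prediction is off by exactly 1, so the mean squared error is 1
print(mse([0, 0], [1, 1]))  # → 1.0
```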
Minimize Loss by changing parameters of neuron Move in parameter space in the direction of a descent <img src='https://djcordhose.github.io/ai/img/gradients.jpg'> https://twitter.com/colindcarroll/status/1090266016259534848 Job of the optimizer <img src='https://djcordhose.github.io/ai/img/manning/optimizer.png' height...
# move the parameters of our single neuron in the right direction with a pretty high intensity (learning rate)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train = optimizer.minimize(loss)
train

losses = []
sess = tf.Session()
sess.run(tf.global_variables_initializer())
# iterations aka epochs, ...
notebooks/tensorflow/tf_low_level_training.ipynb
DJCordhose/ai
mit
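The optimizer's job can be sketched without TensorFlow: hand-computed gradients of the MSE for one neuron $y = wx + b$, on the same toy data. The initial values and learning rate below are assumptions, not the notebook's random initialization:

```python
# Gradient descent by hand for a single neuron y = w*x + b.
xs = [-1, 0, 1, 2, 3, 4]
ys = [2, 1, 0, -1, -2, -3]   # the true line is y = -x + 1

w, b, lr = 0.5, 0.0, 0.01    # illustrative init and learning rate
n = len(xs)
for epoch in range(2000):
    # gradients of MSE with respect to w and b
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * dw
    b -= lr * db

print(round(w, 3), round(b, 3))  # converges close to slope -1, intercept 1
```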
Learning Curve after training
# wet dream of every machine learning person (typically you see a noisy curve only sort of going down)
plt.yscale('log')
plt.ylabel("loss")
plt.xlabel("epochs")
plt.plot(losses)
notebooks/tensorflow/tf_low_level_training.ipynb
DJCordhose/ai
mit
Line drawn by neuron after training result after training is not perfect, but almost looks like the same line https://en.wikipedia.org/wiki/Linear_equation#Slope%E2%80%93intercept_form
output_pred = sess.run(y_pred)
print(output_pred)
plt.plot(input, output_pred)
plt.plot(input, output, 'ro')

# single neuron and single input: one weight and one bias
# slope m ~ -1
# y-axis offset y0 ~ 1
# https://en.wikipedia.org/wiki/Linear_equation#Slope%E2%80%93intercept_form
weights = sess.run(linear_model.trai...
notebooks/tensorflow/tf_low_level_training.ipynb
DJCordhose/ai
mit
Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. <img src="assets/neural_network.p...
class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes
        # Initialize we...
first-neural-network/Your_first_neural_network.ipynb
geilerloui/deep-learning
mit
Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training se...
import sys
from time import time
from sklearn.model_selection import ParameterGrid  # grid_search was removed in newer sklearn

### Set the hyperparameters here ###
iterations = [10000]
learning_rate = [0.03, 0.3]
hidden_nodes = [10, 15, 20, 25]
output_nodes = 1

# Set the parameter grid
param_grid = {'ii': iterations, 'lrate': learning_rate, 'hnodes': hidden_nodes}...
first-neural-network/Your_first_neural_network.ipynb
geilerloui/deep-learning
mit
epochs: 10000, hidden_nodes: 20, learning_rate: 0.3, Progress: 100.0% ... Training loss: 0.056 ... Validation loss: 0.131
import sys
from time import time

### Set the hyperparameters here ###
iterations = 10000
learning_rate = 0.3
hidden_nodes = 20
output_nodes = 1

# We time the run to assess its duration
start_time = time()
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'...
first-neural-network/Your_first_neural_network.ipynb
geilerloui/deep-learning
mit
Input (CSV) filename
# CSV_FILE = 'NYC-311-2M.csv'
CSV_FILE = None
10b--csv-to-db.ipynb
rvuduc/cse6040-ipynbs
bsd-3-clause
Determine output (SQLite DB) filename from the input filename
assert CSV_FILE

import re
CSV_BASES = re.findall(r'(.*)\.csv$', CSV_FILE, re.I)
assert len(CSV_BASES) >= 1
CSV_BASE = CSV_BASES[0]
DB_FILE = "%s.db" % CSV_BASE

print("Converting: %s to %s (an SQLite DB) ..." % (CSV_FILE, DB_FILE))
10b--csv-to-db.ipynb
rvuduc/cse6040-ipynbs
bsd-3-clause
Connect to an SQL data source
disk_engine = create_engine('sqlite:///%s' % DB_FILE)
10b--csv-to-db.ipynb
rvuduc/cse6040-ipynbs
bsd-3-clause
Convert CSV to SQLite DB

- Load the CSV, chunk-by-chunk, into a DataFrame
- Process each chunk by doing some minimal data normalization and stripping out uninteresting columns
- Append each chunk to the SQLite database
# Convert .csv to .db
CHUNKSIZE = 25000  # Number of rows to read at a time

# List of columns to keep (deduplicated)
KEEP_COLS = ['Agency', 'CreatedDate', 'ClosedDate', 'ComplaintType',
             'Descriptor', 'TimeToCompletion', 'City']

start = dt.datetime.now()  # start timer
j = 0
in...
10b--csv-to-db.ipynb
rvuduc/cse6040-ipynbs
bsd-3-clause
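The chunked-load pattern can be sketched with only the standard library (the notebook itself uses pandas; the tiny in-memory CSV, table name, and chunk size below are illustrative):

```python
# Sketch: load a CSV into SQLite in fixed-size chunks, committing per chunk.
import csv
import io
import itertools
import sqlite3

CHUNK = 2  # tiny chunk size, just to show the loop
rows = io.StringIO("a,b\n1,x\n2,y\n3,z\n")  # stand-in for the real CSV file

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (a TEXT, b TEXT)")

reader = csv.reader(rows)
next(reader)  # skip the header row
while True:
    chunk = list(itertools.islice(reader, CHUNK))
    if not chunk:
        break
    conn.executemany("INSERT INTO data VALUES (?, ?)", chunk)
    conn.commit()  # append each chunk to the database

print(conn.execute("SELECT COUNT(*) FROM data").fetchone()[0])  # → 3
```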
Example plot for LFPy: Hay et al. (2011) spike waveforms Run Hay et al. (2011) layer 5b pyramidal cell model, generating and plotting a single action potential and corresponding extracellular potentials (spikes) Copyright (C) 2017 Computational Neuroscience Group, NMBU. This program is free software: you can redistribu...
import numpy as np
import sys
from urllib.request import urlopen
import ssl
from warnings import warn
import zipfile
import os
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
import LFPy
import neuron
examples/LFPy-example-04.ipynb
espenhgn/LFPy
gpl-3.0
Fetch Hay et al. 2011 model files
if not os.path.isfile('L5bPCmodelsEH/morphologies/cell1.asc'):
    # get the model files:
    u = urlopen('http://senselab.med.yale.edu/ModelDB/eavBinDown.asp?o=139653&a=23&mime=application/zip',
                context=ssl._create_unverified_context())
    localFile = open('L5bPCmodelsEH.zip', 'wb')
    localFile.write...
examples/LFPy-example-04.ipynb
espenhgn/LFPy
gpl-3.0
Simulation parameters:
# define cell parameters used as input to cell-class
cellParameters = {
    'morphology': 'L5bPCmodelsEH/morphologies/cell1.asc',
    'templatefile': ['L5bPCmodelsEH/models/L5PCbiophys3.hoc',
                     'L5bPCmodelsEH/models/L5PCtemplate.hoc'],
    'templatename': 'L5PCtemplate',
    'templateargs'...
examples/LFPy-example-04.ipynb
espenhgn/LFPy
gpl-3.0
Main simulation procedure, setting up extracellular electrode, cell, synapse:
# delete old sections from NEURON namespace
LFPy.cell.neuron.h("forall delete_section()")

# Initialize cell instance, using the LFPy.Cell class
cell = LFPy.TemplateCell(**cellParameters)
cell.set_rotation(x=4.729, y=-3.166)

# Override passive reversal potential, AP is generated
for sec in cell.allseclist:
    for seg...
examples/LFPy-example-04.ipynb
espenhgn/LFPy
gpl-3.0
Plot output
def plotstuff(cell, electrode):
    '''plotting'''
    fig = plt.figure(dpi=160)
    ax1 = fig.add_axes([0.05, 0.1, 0.55, 0.9], frameon=False)
    cax = fig.add_axes([0.05, 0.115, 0.55, 0.015])
    ax1.plot(electrode.x, electrode.z, 'o', markersize=1, color='k', zorder=0)
    # normalize to m...
examples/LFPy-example-04.ipynb
espenhgn/LFPy
gpl-3.0
Parsing the Data in the .DAT file Most of the time, when you open a data file in Python or Stata or Matlab or Excel (etc.) the data is already pre-configured with variable names and (maybe) descriptions. This isn't the case here. The .DAT file that we unzipped above contains all the relevant data, but as of now it's just...
""" code format: tuples are (a, b, c) a = varable name, b = column range (start-1,end), x = description """ codes = [('DUPERSID',(8,16),'Person identifier'), ('TTLP12X',(1501,1507),"Person's total income"), ('FAMINC12',(1507,1513),"Family's total income"), ('DOBYY',(204,208),"Date of Bir...
Code/Lab/MEPS_data_experiment_Brian.ipynb
DaveBackus/Data_Bootcamp
mit
The goal is to re-create Figure 1 from here. It paints a pretty vivid picture about the distribution of personal health care spending in the US -- the top 5% of spenders account for nearly 50% of total spending and the top 1% account for more than 20% of the total. Each observation in the MEPS dataset receives a weight...
df['TOTEXP12_WT'] = df['TOTEXP12'] * df['PERWT12F']
df['age'] = 2015 - df['DOBYY']

percentiles = [i * 0.1 for i in range(10)] + [0.95, 0.99, 1.00]

def weighted_percentiles(subgroup=None, limits=[]):
    import numpy
    a, b = list(), list()
    if subgroup is None:
        tt = df
    else:
        tt = df[df[subg...
Code/Lab/MEPS_data_experiment_Brian.ipynb
DaveBackus/Data_Bootcamp
mit
The tables above show that personal health care spending is very highly concentrated in the United States. The top 1% of spenders account for over 20% of total spending and the top 5% accounts for nearly 50%. It's also clear that the distribution of health care spending is pretty similar within different income groupi...
%pylab inline
import matplotlib.pyplot as plt
from IPython.html.widgets import *

def figureprefs(values, Group, Title='text', Labels=False):
    """
    values = variable(s)
    labels = variable labels (appears in legend)
    plot_text = If True, then the text will appear in the graph like in Figure 1
    ...
Code/Lab/MEPS_data_experiment_Brian.ipynb
DaveBackus/Data_Bootcamp
mit
Dirichlet Distribution The symmetric Dirichlet distribution (DD) can be considered a distribution of distributions. Each sample from the DD is a categorical distribution over $K$ categories. It is parameterized by $G_0$, a distribution over $K$ categories, and $\alpha$, a scale factor. The expected value of the DD is $G_0$....
import numpy as np
from scipy.stats import dirichlet

np.set_printoptions(precision=2)

def stats(scale_factor, G0=[.2, .2, .6], N=10000):
    samples = dirichlet(alpha=scale_factor * np.array(G0)).rvs(N)
    print("            alpha:", scale_factor)
    print("element-wise mean:", samples....
pages/2015-07-28-dirichlet-distribution-dirichlet-process.ipynb
tdhopper/notes-on-dirichlet-processes
mit
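The scipy call above is convenient, but a Dirichlet draw needs only the standard library: normalizing independent Gamma($\alpha_i$, 1) variates gives a Dirichlet sample. A sketch reusing the $G_0$ and scale factor from the text:

```python
# Sketch: Dirichlet(alpha_1..alpha_K) via normalized Gamma variates.
import random

random.seed(0)  # deterministic for illustration

def dirichlet_sample(alphas):
    gammas = [random.gammavariate(a, 1) for a in alphas]
    total = sum(gammas)
    return [g / total for g in gammas]

scale, G0 = 10, [0.2, 0.2, 0.6]
sample = dirichlet_sample([scale * g for g in G0])
print(sample)  # three non-negative components summing to 1
```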
Often we want to draw samples from a distribution sampled from a Dirichlet process instead of from the Dirichlet process itself. Much of the literature on the topic unhelpfully refers to this as sampling from a Dirichlet process. Fortunately, we don't have to draw an infinite number of samples from the base distribution...
from numpy.random import choice

class DirichletProcessSample():
    def __init__(self, base_measure, alpha):
        self.base_measure = base_measure
        self.alpha = alpha
        self.cache = []
        self.weights = []
        self.total_stick_used = 0.

    def __call__(self):
        remaining = 1.0...
pages/2015-07-28-dirichlet-distribution-dirichlet-process.ipynb
tdhopper/notes-on-dirichlet-processes
mit
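The class above breaks sticks lazily; the finite-truncation version of the same weight construction can be sketched in a few lines (the concentration `alpha` and the truncation length are illustrative):

```python
# Sketch: truncated stick-breaking weights for a Dirichlet process draw.
import random

random.seed(1)  # deterministic for illustration

def stick_breaking(alpha, n_sticks):
    weights, remaining = [], 1.0
    for _ in range(n_sticks):
        # break off a Beta(1, alpha) fraction of the remaining stick
        piece = random.betavariate(1, alpha) * remaining
        weights.append(piece)
        remaining -= piece
    return weights

w = stick_breaking(alpha=2.0, n_sticks=20)
print(sum(w))  # strictly less than 1: some stick always remains
```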
Read in the Data First, let's read in some SMPS data. There is currently one 'sample' data set available with the py-smps library. This data was collected on an SMPS at MIT during the wintertime. Read it into an SMPS object and plot the histogram.
s = smps.io.load_sample("boston")

# plot the histogram
ax = smps.plots.histplot(s.dndlogdp, s.bins, plot_kws={'linewidth': .1},
                        fig_kws={'figsize': (12, 6)})
ax.set_title("Wintertime in Cambridge, MA", y=1.02)
ax.set_ylabel("$dN/dlogD_p \; [cm^{-3}]$")

# remove the spines of the plot
sns.despine()
examples/Fit a Multi-Modal Distribution.ipynb
dhhagan/py-smps
mit
We went ahead and plotted the histogram of the mean values across all size bins for the entire collection period. Why? Because I can. We see we have a single distribution with a mode of somewhere around 50-60 nm. We can go ahead and fit a simple 1-mode distribution to this data. Fit a Single Mode
# Grab the LogNormal class from the library
from smps.fit import LogNormal

# Initiate an instance of the class
model = LogNormal()

# Gather our X and Y data
X = s.midpoints
Y = s.dndlogdp.mean()

# Go ahead and fit
results = model.fit(X, Y, modes=1)

# print the results
print(results.summary())
examples/Fit a Multi-Modal Distribution.ipynb
dhhagan/py-smps
mit
Above, we see the results for the three fit parameters: Number of particles per cubic centimeter, the geometric mean diameter, and the geometric standard deviation. All three have error estimates (standard deviation) as shown in parentheses next to each value. Now that we successfully fit our data, let's go ahead and p...
# plot the histogram
ax = smps.plots.histplot(s.dndlogdp, s.bins,
                        plot_kws={'linewidth': 0, 'alpha': .6, 'edgecolor': None},
                        fig_kws={'figsize': (12, 6)})

# Plot the fit values
ax.plot(X, results.fittedvalues, lw=6, label="Fit Data")
ax.set_ylabel("$dN/dlogD_p \; [cm^{-3}]$")
ax.set_title("Win...
examples/Fit a Multi-Modal Distribution.ipynb
dhhagan/py-smps
mit
What else is stored in the fit results? Glad you asked! We can go ahead and retrieve our fit parameters at results['params']. They are stored as an array with order [N, GM, GSD].
print (results.params)
examples/Fit a Multi-Modal Distribution.ipynb
dhhagan/py-smps
mit
We can also go ahead and look at the error associated with those values at results['error']. They are in the same order as above.
print (results.errors)
examples/Fit a Multi-Modal Distribution.ipynb
dhhagan/py-smps
mit
Upon fitting, an instance of the LogNormalFitResults class is returned and has available a couple of useful methods. The first is .summary() which we saw above. It simply prints out a very basic overview of the fit statistics in table format. We can also make new predictions using the LogNormalFitResults.predict method...
results.predict(1)
examples/Fit a Multi-Modal Distribution.ipynb
dhhagan/py-smps
mit
Plot the Missing Data Let's use the predict method to fill in the lower portion of the distribution where we were not originally scanning. Is this a great idea? Well, probably not. But we can do it anyway.
newX = np.logspace(np.log10(.01), np.log10(1), 1000)

# plot the histogram
ax = smps.plots.histplot(s.dndlogdp, s.bins,
                        plot_kws={'linewidth': 0., 'alpha': .5},
                        fig_kws={'figsize': (12, 6)})

# Plot the fit values
ax.plot(newX, results.predict(newX), lw=6, label="Fit Data")
ax.set_title("Winter...
examples/Fit a Multi-Modal Distribution.ipynb
dhhagan/py-smps
mit
Multiple Modes I don't have access to a dataset with multiple modes in it at the moment, so we are going to fake it! We're going to mimic a multi-modal distribution and then show how to fit it. If someone wants to donate a dataset, that'd be cool!
dp = np.logspace(np.log10(0.0001), np.log10(1), 500)

# Sample data pulled from S+P pg371
N = np.array([9.93e4, 3.64e4])
GM = np.array([1.3e-3, 20e-3])
GSD = np.array([10**.245, 10**0.336])

total = 0
for j in range(len(N)):
    total += smps.fit.dndlogdp(dp, N[j], GM[j], GSD[j])

# Let's confuzzle our data
twiste...
examples/Fit a Multi-Modal Distribution.ipynb
dhhagan/py-smps
mit
Now that we have some fake data, let's go ahead and fit it! We're also going to throw some initial guesses at it. There need to be $3n$ guesses, where $n$ is the number of modes you are fitting. They should be in the order [Ni, GMi, GSDi] for i=1 to i=number of modes.
model = LogNormal()

X = dp
Y = twisted

# Let's set some initial guesses
p0 = [1e5, 1e-3, 2, 3e4, 20e-3, 2]

results = model.fit(X, Y, modes=2, p0=p0)
print(results.summary())
examples/Fit a Multi-Modal Distribution.ipynb
dhhagan/py-smps
mit
Now that we have our results, let's go ahead and plot them!
with sns.axes_style('ticks'):
    fig, ax = plt.subplots(1, figsize=(12, 6))
    ax.plot(dp, twisted, 'o', label="Twisted Data")
    ax.plot(dp, results.fittedvalues, lw=6, label="Fitted Values")
    ax.set_xlabel("$D_p \; [\mu m]$")
    ax.set_ylabel("$dN/dlogD_p$")
    ax.semilogx()
    ax.xaxis.set_major_form...
examples/Fit a Multi-Modal Distribution.ipynb
dhhagan/py-smps
mit
Last but not least...what if we only want to fit one of the modes, or fit to just a small portion of the data? Easy! Fit to a subset of the data
model = LogNormal()

X = dp
Y = twisted

results = model.fit(X, Y, modes=1, xmax=8.5, xmin=0)
print(results.summary())

with sns.axes_style('ticks'):
    fig, ax = plt.subplots(1, figsize=(12, 6))
    ax.plot(dp, twisted, 'o', label="Twisted Data")
    ax.plot(X[X <= 8.5], results.fittedvalues, lw=6, label="Fitted ...
examples/Fit a Multi-Modal Distribution.ipynb
dhhagan/py-smps
mit
Training and Evaluation In this exercise, we'll be trying to predict median_house_value. It will be our label (sometimes also called a target). We'll modify the feature_cols and input function to represent the features you want to use. Hint: Some of the features in the dataframe aren't directly correlated with median_ho...
def add_more_features(df):
    # TODO: Add more features to the dataframe
    return df

# Create pandas input function
def make_input_fn(df, num_epochs):
    return tf.compat.v1.estimator.inputs.pandas_input_fn(
        x=add_more_features(df),
        y=df['median_house_value'] / 100000,  # will talk about why later in the cour...
courses/machine_learning/deepdive/04_features/labs/a_features.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Read and preprocess the data. Preprocessing consists of:

- MEG channel selection
- 1-30 Hz band-pass filter
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname).crop(0, 60).pick('meg').load_data()
reject = dict(mag=5e-12, grad=4000e-13)
raw.filter(1, 30, fir_design='firwin')
0.24/_downloads/a96f6d7ea0f7ccafcacc578a25e1f8c5/ica_comparison.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
First take a look at the data types and non-null entries
titanic.info()  # info() prints directly
print(titanic.describe().T)
print(titanic.head(5))
Titanic_data_exploration_MASTER.ipynb
aaschroeder/Titanic_example
gpl-3.0
Some observations from looking at the above data include: Age contains missing values for a relatively small set of the population. We will need to impute numbers for this somehow. Cabin contains many, many missing values. The first character also likely contains information on the deck location Embarked contains two ...
titanic.Sex.replace(['male','female'],[True,False], inplace=True)
Titanic_data_exploration_MASTER.ipynb
aaschroeder/Titanic_example
gpl-3.0
Next, let's fill in the missing Age values. We have a pretty full dataset (about 7/8ths), so we can probably get away, in a first pass, with doing some simple stratification and assuming ages are missing-at-random within those strata. For now, let's stratify on gender and class.
# print(titanic.Age.mean())
titanic.Age = titanic.groupby(['Sex', 'Pclass'])[['Age']].transform(lambda x: x.fillna(x.mean()))
titanic.Fare = titanic.groupby(['Pclass'])[['Fare']].transform(lambda x: x.fillna(x.mean()))
# print(titanic.info())
Titanic_data_exploration_MASTER.ipynb
aaschroeder/Titanic_example
gpl-3.0
Next, deal with converting Pclass into something we can work with, and also create dummies for deck location and port of embarkation (when found)
titanic_class = pd.get_dummies(titanic.Pclass, prefix='Pclass', dummy_na=False)
titanic = pd.merge(titanic, titanic_class, on=titanic['PassengerId'])
titanic = pd.merge(titanic, pd.get_dummies(titanic.Embarked, prefix='Emb', dummy_na=True),
                   on=titanic['PassengerId'])
titanic['Floor'] = titanic['Cabin'].str.extract('^([A-Z])', expa...
Titanic_data_exploration_MASTER.ipynb
aaschroeder/Titanic_example
gpl-3.0
Finally, before going forward I'd really like to be able to separate spouses from siblings in that variable. One way to do this is that we see married women have their husbands' names located outside of parentheses within their own name. We create a new variable that just contains the words outside of the parentheses, ...
import re
titanic['Title'] = titanic['Name'].str.extract(', (.*)\.', expand=False)
Titanic_data_exploration_MASTER.ipynb
aaschroeder/Titanic_example
gpl-3.0
So there is some cleaning that could be done here. We'll do three things:

1. Turn French ladies' titles into English ones
2. Aggregate military titles
3. For all remaining titles with count less than five, create a remainder bin
titanic['Title'].replace(to_replace='Mrs\. .*', value='Mrs', inplace=True, regex=True)
titanic.loc[titanic.Title.isin(['Col', 'Major', 'Capt']), ['Title']] = 'Mil'
titanic.loc[titanic.Title == 'Mlle', ['Title']] = 'Miss'
titanic.loc[titanic.Title == 'Mme', ['Title']] = 'Mrs'
print(titanic.Title.value_counts())
titanic['Title_ct'] = tit...
Titanic_data_exploration_MASTER.ipynb
aaschroeder/Titanic_example
gpl-3.0
Typically, the next step here would be to perform a univariate (and, for learners that do not naturally perform feature selection with interactions, potentially bivariate) analysis to see which features best predict the outcome. In some cases (Random Forests, Boosting trees) the learner naturally performs feature selec...
# execfile() is Python 2 only; exec(open(...).read()) is the Python 3 equivalent
exec(open('./Final_setup_Random_Forest.py').read())
exec(open('./Final_setup_GBoost.py').read())
exec(open('./Final_setup_SVM.py').read())
exec(open('./Final_setup_Logit.py').read())
exec(open('./Final_setup_NB.py').read())
exec(open('./Final_ensemble.py').read())
Titanic_data_exploration_MASTER.ipynb
aaschroeder/Titanic_example
gpl-3.0
We append a $1$ at the end because homogeneous transformation matrices have dimension $\Re^{4\times 4}$, and otherwise the dimensions would not match. We can now plot in the $XY$ plane as follows:
f1 = figure(figsize=(8, 8))
a1 = f1.gca()
a1.plot([0, pos_1[0]], [0, pos_1[1]], "-o")
a1.set_xlim(-0.1, 1.1)
a1.set_ylim(-0.1, 1.1);
Practicas/.ipynb_checkpoints/Practica 4 - Movimientos de cuerpos rigidos-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
But we can do better: we can define a function that returns a rotation matrix, taking the rotation angle as its argument.
def rotacion_z(θ):
    A = matrix([[cos(θ), -sin(θ), 0, 0],
                [sin(θ),  cos(θ), 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1]])
    return A
Practicas/.ipynb_checkpoints/Practica 4 - Movimientos de cuerpos rigidos-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
This gives the same result, with cleaner code.
rot_2 = rotacion_z(τ/12)
p = rot_2*pos_1
p

f1 = figure(figsize=(8, 8))
a1 = f1.gca()
a1.plot([0, p[0]], [0, p[1]], "-o")
a1.set_xlim(-0.1, 1.1)
a1.set_ylim(-0.1, 1.1);
Practicas/.ipynb_checkpoints/Practica 4 - Movimientos de cuerpos rigidos-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
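A quick numerical check of the rotation matrix: rotating the homogeneous point $(1, 0, 0, 1)$ by 90° about $z$ should land on $(0, 1, 0, 1)$. A plain-math sketch mirroring `rotacion_z` (no numpy matrix class needed):

```python
# Sketch: homogeneous z-rotation applied to a point, checked numerically.
from math import cos, sin, pi, isclose

def rot_z(theta):
    return [[cos(theta), -sin(theta), 0, 0],
            [sin(theta),  cos(theta), 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 1]]

def apply(M, v):
    # 4x4 matrix times homogeneous 4-vector
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

p = apply(rot_z(pi / 2), [1, 0, 0, 1])
print(p)  # numerically (0, 1, 0, 1); the trailing 1 is untouched
```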
2. Define a function max_of_three that takes three numbers as arguments and returns the largest of them.
def max_of_three(x, y, z):
    # use >= so ties are handled correctly (the strict-> version fails on e.g. (1, 2, 2))
    if x >= y and x >= z:
        return x
    elif y >= z:
        return y
    return z

assert max_of_three(1, 2, 3) == 3
assert max_of_three(1, 1, 2) == 2
assert max_of_three(2, 1, .5) == 2
assert max_of_three(0, 0, 0) == 0
exercises/solutions/0 Python basics exercises.ipynb
n-witt/MachineLearningWithText_SS2017
gpl-3.0
3. Define a function length that computes the length of a given list or string. (It is true that Python has the len() function built in, but writing it yourself is nevertheless a good exercise.)
def length(obj):
    count = 0  # avoid shadowing the built-in len
    for _ in obj:
        count += 1
    return count

assert length([1, 2, 3]) == 3
assert length('this is some string') == 19
exercises/solutions/0 Python basics exercises.ipynb
n-witt/MachineLearningWithText_SS2017
gpl-3.0
4. Write a function is_vowel that takes a character (i.e. a string of length 1) and returns True if it is a vowel, False otherwise.
def is_vowel(char):
    return char in 'aeiou'

assert is_vowel('t') == False
assert is_vowel('a') == True
exercises/solutions/0 Python basics exercises.ipynb
n-witt/MachineLearningWithText_SS2017
gpl-3.0
5. Define a function accumulate and a function multiply that sums and multiplies (respectively) all the numbers in a list of numbers. For example, accumulate([1, 2, 3, 4]) should return 10, and multiply([1, 2, 3, 4]) should return 24.
def accumulate(obj):
    res = 0
    for num in obj:
        res += num
    return res

def multiply(obj):
    res = 1
    for num in obj:
        res *= num
    return res

assert accumulate([1, 2, 3, 4]) == 10
assert multiply([1, 2, 3, 4]) == 24
exercises/solutions/0 Python basics exercises.ipynb
n-witt/MachineLearningWithText_SS2017
gpl-3.0
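The same two folds can also be expressed with `functools.reduce`, an alternative sketch of the loop-and-accumulator pattern:

```python
# Sketch: accumulate and multiply as reductions over the list.
from functools import reduce
import operator

def accumulate(nums):
    return reduce(operator.add, nums, 0)   # 0 is the additive identity

def multiply(nums):
    return reduce(operator.mul, nums, 1)   # 1 is the multiplicative identity

print(accumulate([1, 2, 3, 4]), multiply([1, 2, 3, 4]))  # → 10 24
```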
6. Define a function reverse that computes the reversal of a string. For example, reverse("I am testing") should return the string "gnitset ma I".
def reverse(s):
    return s[::-1]  # slicing already yields the reversed string

assert reverse('I am testing') == 'gnitset ma I'
exercises/solutions/0 Python basics exercises.ipynb
n-witt/MachineLearningWithText_SS2017
gpl-3.0
7. Define a function is_palindrome that recognizes palindromes (i.e. words that look the same written backwards). For example, is_palindrome("radar") should return True.
def is_palindrome(s):
    return s == reverse(s)

assert is_palindrome('radar') == True
assert is_palindrome('sonar') == False
exercises/solutions/0 Python basics exercises.ipynb
n-witt/MachineLearningWithText_SS2017
gpl-3.0
8. Write a function is_member that takes a value (i.e. a number, string, etc) x and a list of values a, and returns True if x is a member of a, False otherwise. (Note that this is exactly what the in operator does, but for the sake of the exercise you should pretend Python did not have this operator.)
def is_member(x, a):
    # x is the value, a is the list, matching the problem statement
    for v in a:
        if v == x:
            return True
    return False

assert is_member(4, [1, 2, 3]) == False
assert is_member(2, [1, 2, 3]) == True
exercises/solutions/0 Python basics exercises.ipynb
n-witt/MachineLearningWithText_SS2017
gpl-3.0
9. Define a procedure histogram that takes a list of integers and prints a histogram to the screen. For example, histogram([4, 9, 7]) should print the following: ```
****
*********
*******
```
def histogram(obj):
    for n in obj:
        # print adds its own newline; passing '\n' as well would insert blank lines
        print('*' * n)

histogram([4, 9, 7])
exercises/solutions/0 Python basics exercises.ipynb
n-witt/MachineLearningWithText_SS2017
gpl-3.0
10. Write a function filter_long_words that takes a list of words and an integer n and returns the list of words that are longer than n.
def filter_long_words(words, n):
    # "longer than n" means strictly greater than n
    return [word for word in words if len(word) > n]

assert len(filter_long_words('this is some sentence'.split(), 3)) == 3
exercises/solutions/0 Python basics exercises.ipynb
n-witt/MachineLearningWithText_SS2017
gpl-3.0
11. A pangram is a sentence that contains all the letters of the English alphabet at least once, for example: "The quick brown fox jumps over the lazy dog". Your task here is to write a function is_pangram to check a sentence to see if it is a pangram or not.
def is_pangram(sentence):
    # lower-case the sentence so 'T' in "The quick brown fox..." counts as 't'
    alphabet = set('abcdefghijklmnopqrstuvwxyz')
    for char in sentence.lower():
        alphabet.discard(char)
    return len(alphabet) == 0

assert is_pangram(...
exercises/solutions/0 Python basics exercises.ipynb
n-witt/MachineLearningWithText_SS2017
gpl-3.0
12. Represent a small bilingual lexicon as a Python dictionary in the following fashion {"may": "möge", "the": "die", "force": "macht", "be": "sein", "with": "mit", "you": "dir"} and use it to translate the sentence "may the force be with you" from English into German. That is, write a function translate that takes a l...
def translate(eng): dictionary = { "may": "möge", "the": "die", "force": "macht", "be": "sein", "with": "mit", "you": "dir" } ger = [] for word in eng: if word in dictionary: ger.append(dictionary[word]) else: g...
exercises/solutions/0 Python basics exercises.ipynb
n-witt/MachineLearningWithText_SS2017
gpl-3.0
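A variant sketch using `dict.get` keeps untranslated words verbatim instead of handling them in an explicit branch (the keep-the-word fallback is an assumption, since the original cell is truncated):

```python
def translate_get(eng):
    dictionary = {"may": "möge", "the": "die", "force": "macht",
                  "be": "sein", "with": "mit", "you": "dir"}
    # dict.get returns the word itself when no translation is found.
    return [dictionary.get(word, word) for word in eng]

assert translate_get("may the force be with you".split()) == \
    ["möge", "die", "macht", "sein", "mit", "dir"]
```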
13. In cryptography, a Caesar cipher is a very simple encryption techniques in which each letter in the plain text is replaced by a letter some fixed number of positions down the alphabet. For example, with a shift of 3, A would be replaced by D, B would become E, and so on. The method is named after Julius Caesar, who...
def rot13(msg): key = {'a':'n', 'b':'o', 'c':'p', 'd':'q', 'e':'r', 'f':'s', 'g':'t', 'h':'u', 'i':'v', 'j':'w', 'k':'x', 'l':'y', 'm':'z', 'n':'a', 'o':'b', 'p':'c', 'q':'d', 'r':'e', 's':'f', 't':'g', 'u':'h', 'v':'i', 'w':'j', 'x':'k', 'y':'l', 'z':'m', 'A':'N', 'B':'O', 'C':'P', 'D':'Q', ...
exercises/solutions/0 Python basics exercises.ipynb
n-witt/MachineLearningWithText_SS2017
gpl-3.0
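The substitution table above can also be generated with `str.maketrans` instead of spelling out every pair by hand (a sketch of the same ROT13 mapping):

```python
import string

# Map each letter to the letter 13 places later, wrapping around the alphabet.
_ROT13 = str.maketrans(
    string.ascii_lowercase + string.ascii_uppercase,
    string.ascii_lowercase[13:] + string.ascii_lowercase[:13]
    + string.ascii_uppercase[13:] + string.ascii_uppercase[:13])

def rot13_translate(msg):
    return msg.translate(_ROT13)

# Because 13 + 13 = 26, applying ROT13 twice recovers the original message.
assert rot13_translate(rot13_translate("Caesar cipher demo!")) == "Caesar cipher demo!"
```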
14. Write a procedure char_freq_table that accepts the file name material/jedi.txt as argument, builds a frequency listing of the characters contained in the file, and prints a sorted and nicely formatted character frequency table to the screen.
from collections import defaultdict import string def char_freq_table(filename): char_counter = defaultdict(int) with open(filename) as fh: text = fh.read() for character in text: char_counter[character] += 1 return char_counter frequencies = char_freq_table('material/jedi.tx...
exercises/solutions/0 Python basics exercises.ipynb
n-witt/MachineLearningWithText_SS2017
gpl-3.0
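`collections.Counter` condenses the tallying and sorting into two calls. A sketch on an in-memory string rather than the file, since `material/jedi.txt` is not available here:

```python
from collections import Counter

def char_freq(text):
    # most_common() returns (character, count) pairs, highest count first.
    return Counter(text).most_common()

for char, count in char_freq("abracadabra"):
    print(f"{char!r:>4} : {count}")
```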
<table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/automl/automl-text-classification.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td>...
import os # The Google Cloud Notebook product has specific requirements IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version") # Google Cloud Notebook requires dependencies to be installed with '--user' USER_FLAG = "" if IS_GOOGLE_CLOUD_NOTEBOOK: USER_FLAG = "--user" ! pip install {U...
notebooks/official/automl/automl-text-classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Set your project ID Finally, you must initialize the client library before you can send requests to the Vertex AI service. With the Python SDK, you initialize the client library as shown in the following cell. This tutorial also uses the Cloud Storage Python library for accessing batch prediction results. Be sure to pr...
import os PROJECT_ID = "" if not os.getenv("IS_TESTING"): # Get your Google Cloud project ID from gcloud shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID: ", PROJECT_ID)
notebooks/official/automl/automl-text-classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Otherwise, set your project ID here.
if PROJECT_ID == "" or PROJECT_ID is None: PROJECT_ID = "[your-project-id]" # @param {type:"string"} import sys from datetime import datetime import jsonlines from google.cloud import aiplatform, storage from google.protobuf import json_format REGION = "us-central1" # If you are running this notebook in Colab,...
notebooks/official/automl/automl-text-classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create a dataset and import your data The notebook uses the 'Happy Moments' dataset for demonstration purposes. You can change it to another text classification dataset that conforms to the data preparation requirements. Using the Python SDK, you can create a dataset and import the dataset in one call to TextDataset.cr...
# Use a timestamp to ensure unique resources TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") src_uris = "gs://cloud-ml-data/NL-classification/happiness.csv" display_name = f"e2e-text-dataset-{TIMESTAMP}" ds = aiplatform.TextDataset.create( display_name=display_name, gcs_source=src_uris, import_schema_...
notebooks/official/automl/automl-text-classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Train your text classification model Once your dataset has finished importing data, you are ready to train your model. To do this, you first need the full resource name of your dataset, where the full name has the format projects/[YOUR_PROJECT]/locations/us-central1/datasets/[YOUR_DATASET_ID]. If you don't have the res...
datasets = aiplatform.TextDataset.list(filter=f'display_name="{display_name}"') print(datasets)
notebooks/official/automl/automl-text-classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Now you can begin training your model. Training the model is a two part process: Define the training job. You must provide a display name and the type of training you want when you define the training job. Run the training job. When you run the training job, you need to supply a reference to the dataset to use for tra...
# Define the training job training_job_display_name = f"e2e-text-training-job-{TIMESTAMP}" job = aiplatform.AutoMLTextTrainingJob( display_name=training_job_display_name, prediction_type="classification", multi_label=False, ) model_display_name = f"e2e-text-classification-model-{TIMESTAMP}" # Run the trai...
notebooks/official/automl/automl-text-classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Get and review model evaluation scores After your model has finished training, you can review its evaluation scores. First, you need a reference to the new model. As with datasets, you can either use the model variable you created when you trained the model or you can list all of the models
models = aiplatform.Model.list(filter=f'display_name="{model_display_name}"') print(models)
notebooks/official/automl/automl-text-classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Deploy your text classification model Once your model has completed training, you must deploy it to an endpoint to get online predictions from it. When you deploy the model to an endpoint, a copy of the model is made on the endpoint with a new resource name and display name. You can deploy multiple models to the same e...
deployed_model_display_name = f"e2e-deployed-text-classification-model-{TIMESTAMP}" endpoint = model.deploy( deployed_model_display_name=deployed_model_display_name, sync=True )
notebooks/official/automl/automl-text-classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Get online predictions from your model Now that you have your endpoint's resource name, you can get online predictions from the text classification model. To get the online prediction, you send a prediction request to your endpoint.
endpoint_name = "[your-endpoint-name]" if endpoint_name == "[your-endpoint-name]": endpoint_name = endpoint.resource_name print(f"Endpoint name: {endpoint_name}") endpoint = aiplatform.Endpoint(endpoint_name) content = "I got a high score on my math final!" response = endpoint.predict(instances=[{"content": cont...
notebooks/official/automl/automl-text-classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Now that you have the bucket with the prediction instances ready, you can send a batch prediction request to Vertex AI. When you send a request to the service, you must provide the URI of your JSONL file and your output bucket, including the gs:// protocols. With the Python SDK, you can create a batch prediction job by...
job_display_name = "e2e-text-classification-batch-prediction-job" model = aiplatform.Model(model_name=model_name) batch_prediction_job = model.batch_predict( job_display_name=job_display_name, gcs_source=f"{BUCKET_URI}/{input_file_name}", gcs_destination_prefix=f"{BUCKET_URI}/output", sync=True, ) bat...
notebooks/official/automl/automl-text-classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
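Before sending the request, the input instances must be written as one JSON object per line. A minimal sketch using the standard `json` module; the `{"content", "mimeType"}` schema mirrors the online-prediction payload above and is an assumption that should be checked against the Vertex AI batch prediction documentation for your model type:

```python
import json

# Hypothetical example sentences; only the JSONL file format matters here.
instances = [
    {"content": "I got a high score on my math final!", "mimeType": "text/plain"},
    {"content": "My flight was delayed for six hours.", "mimeType": "text/plain"},
]
# One JSON object per line, as batch prediction expects:
with open("instances.jsonl", "w") as fh:
    for instance in instances:
        fh.write(json.dumps(instance) + "\n")
```

This local file would then be uploaded to your Cloud Storage bucket before calling `batch_predict`.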
The output from the batch prediction job should be contained in a folder (or prefix) that includes the name of the batch prediction job plus a time stamp for when it was created. For example, if your batch prediction job name is my-job and your bucket name is my-bucket, the URI of the folder containing your output migh...
RESULTS_DIRECTORY = "prediction_results" RESULTS_DIRECTORY_FULL = f"{RESULTS_DIRECTORY}/output" # Create missing directories os.makedirs(RESULTS_DIRECTORY, exist_ok=True) # Get the Cloud Storage paths for each result ! gsutil -m cp -r $BUCKET_OUTPUT $RESULTS_DIRECTORY # Get most recently modified directory latest_di...
notebooks/official/automl/automl-text-classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
With all of the results files downloaded locally, you can open them and read the results. In this tutorial, you use the jsonlines library to read the output results. The following cell opens up the JSONL output file and then prints the predictions for each instance.
# Get downloaded results in directory results_files = [] for dirpath, _, files in os.walk(latest_directory): for file in files: if file.find("predictions") >= 0: results_files.append(os.path.join(dirpath, file)) # Consolidate all the results into a list results = [] for results_file in results...
notebooks/official/automl/automl-text-classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
This tutorial introduces the student to the basic methods of linear systems theory, focusing on the tools of shift-invariant linear systems. The tutorial introduces the idea of superposition, shift-invariance, and impulse response functions. These ideas are fundamental to many methods and analyses used in functional n...
from scipy.special import gammaln def spm_hrf(RT, fMRI_T=16): """Python implementation of spm_hrf""" _spm_Gpdf = lambda x, h, l: exp(h * log(l) + (h - 1) * log(x) - (l * x) - gammaln(h)) dt = RT / float(fMRI_T) u = np.arange(1, int(32 / dt) + 1) hrf = np.concatenate(([0.0], _spm_Gpdf(u, 6 , dt) - _s...
mrTutLinearSystems.ipynb
cni/psych204a
gpl-2.0
Let's take a look at this model (or "canonical" function)
RT = 1 # Repetition time in seconds (sample every second) hrf = spm_hrf(RT) nT = len(hrf) # Number of time steps t = arange(nT) # Individual time samples of the HRF plot(t, hrf) xlabel('Time (sec)') ylabel('HRF level');
mrTutLinearSystems.ipynb
cni/psych204a
gpl-2.0
Notice that the hrf values sum to 1. This means that convolution with the HRF will preserve the mean of the input stimulus level.
print(sum(hrf))
mrTutLinearSystems.ipynb
cni/psych204a
gpl-2.0
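The mean-preservation claim is easy to verify numerically: any kernel whose taps sum to 1 maps a constant input to the same constant. A sketch with a toy kernel (hypothetical values, not the actual HRF):

```python
import numpy as np

kernel = np.array([0.2, 0.5, 0.4, -0.1])   # taps sum to 1, like the HRF
signal = np.full(50, 3.0)                  # constant stimulus level of 3
# 'valid' mode keeps only samples where the kernel fully overlaps the
# signal, avoiding edge effects:
out = np.convolve(signal, kernel, mode='valid')
assert np.allclose(out, 3.0)               # the level is preserved
```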
Also, notice that the default SPM HRF has a post-stimulus undershoot. Next we will plot several instances of impulse functions with the corresponding estimated hemodynamic response using this model. To create the estimated response trace, we will use the convolve function from the numpy package (loaded automatically as...
def plot_imp_hrf(times): """Make a plot of impulse functions and estimated hemodynamic response.""" # Get the predicted model hrf = spm_hrf(1) nT = len(hrf) t = arange(nT) # Make impulse vectors for each input stim stims = [] for time in times: stim = zeros(nT) ...
mrTutLinearSystems.ipynb
cni/psych204a
gpl-2.0
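Superposition, the key property that plot_imp_hrf relies on, can be checked directly with np.convolve: the response to two impulses equals the sum of the two single-impulse responses. A sketch with a toy HRF-like kernel (hypothetical values):

```python
import numpy as np

kernel = np.array([0.0, 0.6, 1.0, 0.4, -0.2])  # toy impulse response

def impulse(time, n=20):
    stim = np.zeros(n)
    stim[time] = 1.0
    return stim

# Linearity: convolving the summed input gives the summed outputs.
combined = np.convolve(impulse(1) + impulse(8), kernel)
separate = np.convolve(impulse(1), kernel) + np.convolve(impulse(8), kernel)
assert np.allclose(combined, separate)
```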
Now let's see what this plot looks like for a simple single stimulus.
plot_imp_hrf([1])
mrTutLinearSystems.ipynb
cni/psych204a
gpl-2.0
We see that moving the time of the stimulus impulse also shifts the timing of the expected BOLD response.
plot_imp_hrf([1])
plot_imp_hrf([8])
mrTutLinearSystems.ipynb
cni/psych204a
gpl-2.0
Now, plot the stimulus and output of a linear system that responds to both stimulus impulses.
plot_imp_hrf([1, 8])
mrTutLinearSystems.ipynb
cni/psych204a
gpl-2.0
Block Design Experiments We will start with blocks of 2 events and then continue to longer blocks. We will examine how the number of events and the spacing between events affects the predicted bold signal. Example 1: Two events that are spaced 4 seconds apart
plot_imp_hrf([1, 5])
mrTutLinearSystems.ipynb
cni/psych204a
gpl-2.0
Example 2: Two events that are spaced 2 seconds apart Note that here you get one peak and not 2 peaks. Why?
plot_imp_hrf([1, 3])
mrTutLinearSystems.ipynb
cni/psych204a
gpl-2.0
Example 3: 3 events that are spaced 2s apart
plot_imp_hrf([1, 3, 5])
mrTutLinearSystems.ipynb
cni/psych204a
gpl-2.0
Example 4: 5 stimuli presented 2 seconds apart -> a block that is 10 seconds long What changed as you increased the number of stimuli? Why?
plot_imp_hrf([1, 3, 5, 7, 9])
mrTutLinearSystems.ipynb
cni/psych204a
gpl-2.0
Compute cross-talk functions for LCMV beamformers Visualise cross-talk functions at one vertex for LCMV beamformers computed with different data covariance matrices, and show how the choice of covariance matrix affects them.
# Author: Olaf Hauk <olaf.hauk@mrc-cbu.cam.ac.uk> # # License: BSD (3-clause) import mne from mne.datasets import sample from mne.beamformer import make_lcmv, make_lcmv_resolution_matrix from mne.minimum_norm import get_cross_talk print(__doc__) data_path = sample.data_path() subjects_dir = data_path + '/subjects/' ...
0.23/_downloads/4a4a8e5bd5ae7cafea93a04d8c0a0d00/psf_ctf_vertices_lcmv.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Compute resolution matrices for the two LCMV beamformers
rm_pre = make_lcmv_resolution_matrix(filters_pre, forward, info) rm_post = make_lcmv_resolution_matrix(filters_post, forward, info) # compute cross-talk functions (CTFs) for one target vertex sources = [3000] stc_pre = get_cross_talk(rm_pre, forward['src'], sources, norm=True) stc_post = get_cross_talk(rm_post, for...
0.23/_downloads/4a4a8e5bd5ae7cafea93a04d8c0a0d00/psf_ctf_vertices_lcmv.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Loading and preparing the data Today we will use the following data. Wholesale cannabis (plant) prices and sale dates for the 51 US states: Weed_Price.csv The figure below shows part of the Weed_Price.csv file, which contains the per-state US cannabis (plant) sales data, opened in Excel. The full dataset has 22,899 rows; the figure shows only 5 of them. * Note: line 1 holds the table's column names. * Column names: State, HighQ, HighQN, MedQ, MedQN, LowQ, LowQN, date <p> <tab...
prices_pd = pd.read_csv("data/Weed_Price.csv", parse_dates=[-1])
previous/y2017/W09-numpy-averages/.ipynb_checkpoints/GongSu21_Statistics_Averages-checkpoint.ipynb
liganega/Gongsu-DataSci
gpl-3.0
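The effect of `parse_dates` can be checked on a miniature stand-in for Weed_Price.csv (hypothetical rows; the real file has 22,899):

```python
import io
import pandas as pd

csv = ("State,HighQ,date\n"
       "Alabama,339.06,2014-01-01\n"
       "Alaska,288.75,2014-01-02\n")
prices = pd.read_csv(io.StringIO(csv), parse_dates=["date"])
# The date column now holds real datetime64 values, not strings:
assert str(prices["date"].dtype).startswith("datetime64")
assert prices["date"].dt.year.tolist() == [2014, 2014]
```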
Some questions you will be asked on your quiz: Quiz Question: Is the sign (positive or negative) for power_15 the same in all four models? Quiz Question: (True/False) the plotted fitted lines look the same in all four plots Selecting a Polynomial Degree Whenever we have a "magic" parameter like the degree of the polyno...
training_and_validation, testing = sales.random_split(.9, seed=1)
training, validation = training_and_validation.random_split(.5, seed=1)
Regression/Assignment_three/.ipynb_checkpoints/week-3-polynomial-regression-assignment-blank-checkpoint.ipynb
rashikaranpuria/Machine-Learning-Specialization
mit
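random_split is a GraphLab SFrame method; the same 90/10-then-50/50 scheme can be sketched in plain Python for comparison (stand-in data, not the actual sales rows):

```python
import random

rows = list(range(1000))            # stand-in for the sales rows
random.Random(1).shuffle(rows)      # seeded shuffle for reproducibility
test = rows[:100]                   # 10% held-out test set
train = rows[100:550]               # remaining 90%, split in half
valid = rows[550:]
assert len(test) == 100 and len(train) == 450 and len(valid) == 450
# The three splits are disjoint:
assert not set(test) & set(train) and not set(train) & set(valid)
```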
Text classification with movie reviews <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/keras/text_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a> </td> <td><a target="_blank" href="https://colab.research.goog...
import matplotlib.pyplot as plt import os import re import shutil import string import tensorflow as tf from tensorflow.keras import layers from tensorflow.keras import losses from tensorflow.keras import preprocessing from tensorflow.keras.layers.experimental.preprocessing import TextVectorization print(tf.__version...
site/ko/tutorials/keras/text_classification.ipynb
tensorflow/docs-l10n
apache-2.0
Sentiment analysis This notebook trains a sentiment analysis model to classify movie reviews as positive or negative, based on the text of the review. This is an example of binary—or two-class—classification, an important and widely applicable kind of machine learning problem. You'll use the Large Movie Review Dataset that contains...
url = "https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz" dataset = tf.keras.utils.get_file("aclImdb_v1", url, untar=True, cache_dir='.', cache_subdir='') dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb') os.listdir(d...
site/ko/tutorials/keras/text_classification.ipynb
tensorflow/docs-l10n
apache-2.0
Load the dataset Next, you will load the data off disk and prepare it into a format suitable for training. To do so, you will use the helpful text_dataset_from_directory utility, which expects the following directory structure. main_directory/ ...class_a/ ......a_text_1.txt ......a_text_2.txt ...class_b/ ......b_text_1.txt ......b_text_2.txt To prepare a dataset for binary classification, you will need two folders on disk, corresponding to class_a and class_b. These will be the aclImd...
remove_dir = os.path.join(train_dir, 'unsup')
shutil.rmtree(remove_dir)
site/ko/tutorials/keras/text_classification.ipynb
tensorflow/docs-l10n
apache-2.0
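The directory convention and the rmtree step can be rehearsed on a scratch directory (a sketch; the real notebook operates on the downloaded aclImdb folder):

```python
import os
import shutil
import tempfile

root = tempfile.mkdtemp()
# Recreate the layout text_dataset_from_directory expects, plus 'unsup':
for label in ("pos", "neg", "unsup"):
    os.makedirs(os.path.join(root, "train", label))

# Drop the unlabeled folder, exactly as the tutorial does:
shutil.rmtree(os.path.join(root, "train", "unsup"))
remaining = sorted(os.listdir(os.path.join(root, "train")))
shutil.rmtree(root)                 # clean up the scratch directory
assert remaining == ["neg", "pos"]
```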
Next, you will use the text_dataset_from_directory utility to create a labeled tf.data.Dataset. tf.data is a powerful collection of tools for working with data. When running a machine learning experiment, it is best practice to divide your dataset into three splits: train, validation, and test. The IMDB dataset has already been divided into train and test, but it lacks a validation set. Let's create a validation set using an 80:20 split of the training data by using the validation_split argument below.
batch_size = 32 seed = 42 raw_train_ds = tf.keras.preprocessing.text_dataset_from_directory( 'aclImdb/train', batch_size=batch_size, validation_split=0.2, subset='training', seed=seed)
site/ko/tutorials/keras/text_classification.ipynb
tensorflow/docs-l10n
apache-2.0
As you can see above, there are 25,000 examples in the training folder, of which you will use 80% (or 20,000) for training. As you will see in a moment, you can train a model by passing a dataset directly to model.fit. If you're new to tf.data, you can also iterate over the dataset and print out a few examples as follows.
for text_batch, label_batch in raw_train_ds.take(1): for i in range(3): print("Review", text_batch.numpy()[i]) print("Label", label_batch.numpy()[i])
site/ko/tutorials/keras/text_classification.ipynb
tensorflow/docs-l10n
apache-2.0
Notice the reviews contain raw text (with punctuation and occasional HTML tags like &lt;br/&gt;). You will show how to handle these in the following section. The labels are 0 or 1. To see which of these correspond to positive and negative movie reviews, you can check the class_names property on the dataset.
print("Label 0 corresponds to", raw_train_ds.class_names[0])
print("Label 1 corresponds to", raw_train_ds.class_names[1])
site/ko/tutorials/keras/text_classification.ipynb
tensorflow/docs-l10n
apache-2.0