As there are $3$ different values and $6$ different variables, there are $3^6 = 729$ different ways to assign values to the variables.
print(3 ** 6)
Python/2 Constraint Solver/Map-Coloring.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
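The counting argument above can be checked by brute force — a sketch that enumerates every assignment of the 3 values to the 6 variables (the variable and value names below are illustrative, not taken from the notebook):

```python
from itertools import product

values = ["red", "green", "blue"]                 # 3 possible values
variables = ["V1", "V2", "V3", "V4", "V5", "V6"]  # 6 variables

# Every assignment is one element of the 6-fold Cartesian product.
assignments = list(product(values, repeat=len(variables)))
print(len(assignments))  # 729, matching 3 ** 6
```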
Encrypted Media Extension Diversity Analysis Encrypted Media Extension (EME) is the controversial draft standard at W3C which aims to prevent copyright infringement in digital video but opens the door to many issues regarding security, accessibility, privacy and interoperability. This notebook tries to analyz...
def filter_messages(df, column, keywords):
    filters = []
    for keyword in keywords:
        filters.append(df[column].str.contains(keyword, case=False))
    return df[reduce(lambda p, q: p | q, filters)]

# Get the Archives
pd.options.display.mpl_style = 'default'  # pandas has a set of preferred graph formattin...
EME Diversity Analysis.ipynb
hargup/eme_diversity_analysis
gpl-3.0
Notice that there is absolutely no one from Asia, Africa or South America. This is important because DRM laws and attitudes towards IP vary considerably across the world.
grouped = eme_activites.groupby(get_cat_val_func("work"), axis=1)
print("Emails sent per work category")
print(grouped.sum().sum())
print("Participants per work category")
for group in grouped.groups:
    print("%s: %s" % (group, len(grouped.get_group(group).sum())))
grouped = eme_activites.groupby(get_cat_val_func("ge...
EME Diversity Analysis.ipynb
hargup/eme_diversity_analysis
gpl-3.0
With the cell magic %%writefile <file name>, the content of the code cell underneath this line will be written into the file <file name> in the current directory. If it already exists, it will be overwritten.
%%writefile ipython_ncl.ncl
begin
  f = addfile("$HOME/NCL/NUG/Version_1.0/data/rectilinear_grid_2D.nc","r")
  printVarSummary(f)
  t = f->tsurf
  printVarSummary(t)
  wks_type = "png"
  wks_type@wkWidth = 800
  wks_type@wkHeight = 800
  wks = gsn_open_wks(wks_type,"plot_contour")
  res ...
Visualization/NCL notebooks/Call_NCL_script_from_python_notebook.ipynb
KMFleischer/PyEarthScience
mit
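The overwrite behavior described above can be mimicked in plain Python (the file name below is illustrative):

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "ipython_ncl_demo.ncl")

# The first write creates the file ...
with open(path, "w") as f:
    f.write("begin\nend\n")

# ... and a second write silently overwrites it, just like %%writefile.
with open(path, "w") as f:
    f.write('begin\nprint("new content")\nend\n')

print(open(path).read())
```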
Run the NCL script and save the messages from stdout into the file log. Trim the whitespace around the plot and display the plot inline.
!ncl ipython_ncl.ncl > log
!convert -trim +repage plot_contour.png plot_contour_small.png
Image('plot_contour_small.png')
Visualization/NCL notebooks/Call_NCL_script_from_python_notebook.ipynb
KMFleischer/PyEarthScience
mit
We will use the AirQualityUCI.csv file as our dataset. It is a ';'-separated file, so we'll pass that as a parameter to the read_csv function. We'll also use the parse_dates parameter so that pandas recognizes the 'Date' and 'Time' columns and formats them accordingly.
df.head()
df.dropna(how="all", axis=1, inplace=True)
df.dropna(how="all", axis=0, inplace=True)
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
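A minimal sketch of the read_csv call described above, using a tiny inline sample instead of the real AirQualityUCI.csv (the column names and values here are made up for illustration):

```python
import io

import pandas as pd

sample = io.StringIO(
    "Date;Time;CO(GT)\n"
    "10/03/2004;18.00.00;2,6\n"
    "10/03/2004;19.00.00;2,0\n"
)

# sep=';' handles the semicolon-separated format;
# parse_dates asks pandas to parse the 'Date' column as datetimes.
df = pd.read_csv(sample, sep=";", parse_dates=["Date"])
print(df.dtypes)
```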
The data contains null values, so we drop the rows and columns that consist entirely of nulls.
df.shape
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
The last few lines (specifically 9357 to 9471) of the dataset are empty and of no use, so we'll ignore them too:
df = df[:9357]
df.tail()
cols = list(df.columns[2:])
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
As you might have noticed, the values in our data don't use decimal points but commas in their place. For example, 9.4 is written as 9,4. We'll correct this using the following piece of code:
for col in cols:
    if df[col].dtype != 'float64':
        str_x = pd.Series(df[col]).str.replace(',', '.')
        float_X = []
        for value in str_x.values:
            fv = float(value)
            float_X.append(fv)
        df[col] = pd.DataFrame(float_X)
df.head()
features = list(df.columns)
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
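The comma-to-decimal fix can be sketched without pandas — plain string replacement followed by float conversion (the raw readings below are illustrative):

```python
raw_values = ["9,4", "2,6", "1050,0"]  # illustrative comma-decimal readings

# Replace the decimal comma with a point, then convert to float.
converted = [float(v.replace(",", ".")) for v in raw_values]
print(converted)  # [9.4, 2.6, 1050.0]
```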
We will define our features and ignore those that might not help with our prediction. For example, the date is not a very useful feature for predicting future values.
features.remove('Date')
features.remove('Time')
features.remove('PT08.S4(NO2)')
X = df[features]
y = df['C6H6(GT)']
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Here we will try to predict the C6H6(GT) values, hence we set it as our target variable. We split the dataset into a 60% training set and a 40% testing set.
# split dataset into 60% training and 40% testing
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.4, random_state=0)
print(X_train.shape, y_train.shape)
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
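The 60/40 split can be sketched in plain Python (the notebook uses scikit-learn's train_test_split; this stand-in only illustrates the shuffled proportions on made-up indices):

```python
import random

random.seed(0)
indices = list(range(100))      # pretend we have 100 samples
random.shuffle(indices)        # shuffle before splitting, like train_test_split

cut = int(len(indices) * 0.6)  # 60% for training
train_idx, test_idx = indices[:cut], indices[cut:]
print(len(train_idx), len(test_idx))  # 60 40
```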
Regression. Please see the previous examples for more detailed explanations. We have already implemented Decision Tree Regression and Random Forest Regression to predict the Electrical Energy Output. Decision tree regression:
from sklearn.tree import DecisionTreeRegressor

tree = DecisionTreeRegressor(max_depth=3)
tree.fit(X_train, y_train)
y_train_pred = tree.predict(X_train)
y_test_pred = tree.predict(X_test)
print('MSE train: %.3f, test: %.3f' % (
    mean_squared_error(y_train, y_train_pred),
    mean_squared_error(y_test, y_t...
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Random forest regression
from sklearn.ensemble import RandomForestRegressor

forest = RandomForestRegressor(n_estimators=1000,
                               criterion='mse',
                               random_state=1,
                               n_jobs=-1)
forest.fit(X_train, y_train)
y_train_pred = forest.predict(X_train)
y_test_pre...
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Linear Regression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
y_predictions = regressor.predict(X_test)
print('R-squared:', regressor.score(X_test, y_test))
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
An R-squared score of 1 indicates that 100 percent of the variance in the test set is explained by the model. The performance can change if a different subset of the data is partitioned into the training set, so cross-validation can be used to produce a better estimate of the estimator's performance. Each ...
scores = cross_val_score(regressor, X, y, cv=5)
print("Average of scores: ", scores.mean())
print("Cross validation scores: ", scores)
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
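The 5-fold scheme behind cross_val_score can be sketched by hand: the data is cut into 5 folds, each sample lands in exactly one test fold, and the rest form the training set. A minimal index-only sketch (not scikit-learn's implementation):

```python
def kfold_indices(n_samples, k):
    """Yield (train, test) index lists for k contiguous folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n_samples) if i < start or i >= start + size]
        yield train, test
        start += size

folds = list(kfold_indices(10, 5))
print([test for _, test in folds])  # [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]
```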
Let's inspect some of the model's predictions and plot the true quality scores against the predicted scores: Fitting models with gradient descent Gradient descent is an optimization algorithm that can be used to estimate the local minimum of a function. We can use gradient descent to find the values of the model's para...
# Scaling the features using StandardScaler:
X_scaler = StandardScaler()
y_scaler = StandardScaler()
X_train = X_scaler.fit_transform(X_train)
y_train = y_scaler.fit_transform(y_train)
X_test = X_scaler.transform(X_test)
y_test = y_scaler.transform(y_test)
regressor = SGDRegressor(loss='squared_loss')
scores = cross_v...
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Selecting the best features
from sklearn.cross_validation import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=33)
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Sometimes there are a lot of features in the dataset, so before learning we should try to see which features are more relevant for our learning task, i.e. which of them are better predictors of the target. We will use the SelectKBest method from the feature_selection package, and plot the results.
df.columns
feature_names = list(df.columns[2:])
feature_names.remove('PT08.S4(NO2)')

import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.feature_selection import *

fs = SelectKBest(score_func=f_regression, k=5)
X_new = fs.fit_transform(X_train, y_train)
print((fs.get_support(), feature_names))
x_min, x_max = ...
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
A Linear Model Let's try a linear model, SGDRegressor, which tries to find the hyperplane that minimizes a certain loss function (typically, the sum of squared distances from each instance to the hyperplane). It uses Stochastic Gradient Descent to find the minimum. Regression poses an additional problem: how should we ...
from sklearn.cross_validation import *

def train_and_evaluate(clf, X_train, y_train):
    clf.fit(X_train, y_train)
    print("Coefficient of determination on training set:", clf.score(X_train, y_train))
    # create a k-fold cross-validation iterator of k=5 folds
    cv = KFold(X_train.shape[0], 5, shuf...
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
You probably noticed the penalty=None parameter when we called the method. The penalization parameter for linear regression methods is introduced to avoid overfitting. It does this by penalizing hyperplanes that have some coefficients which are too large, seeking hyperplanes where each feature contributes more or less ...
clf_sgd1 = linear_model.SGDRegressor(loss='squared_loss', penalty='l2', random_state=42)
train_and_evaluate(clf_sgd1, X_train, y_train)
clf_sgd2 = linear_model.SGDRegressor(loss='squared_loss', penalty='l1', random_state=42)
train_and_evaluate(clf_sgd2, X_train, y_train)
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Random Forests for Regression Analysis Finally, let's try Random Forests again, this time in their Extra Trees regression version.
from sklearn import ensemble

clf_et = ensemble.ExtraTreesRegressor(n_estimators=10, random_state=42)
train_and_evaluate(clf_et, X_train, y_train)
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
An interesting side effect of random forests is that you can measure how 'important' each feature is when predicting the final result.
imp_features = np.sort((clf_et.feature_importances_, features), axis=0)
for rank, f in zip(imp_features[0], imp_features[1]):
    print("{0:.3f} <-> {1}".format(float(rank), f))
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Note that the equals sign (i.e., =) must be surrounded by spaces, i.e.: lhs = rhs. If the variable name is also desired, this can be triggered by ##:
ydot1 = y1.diff(t) ##:
ydot2 = y2.diff(t) ##:
ydot1_obj = y1.diff(t, evaluate=False) ##:
example1_python3.ipynb
cknoll/displaytools
mit
Printing can be combined with LaTeX rendering:
sp.interactive.printing.init_printing(1)
ydot1 = y1.diff(t) ##:
ydot2 = y2.diff(t) ##:
ydot1_obj = y1.diff(t, evaluate=False) ##:
example1_python3.ipynb
cknoll/displaytools
mit
If there is no assignment taking place, ## nevertheless causes the display of the respective result.
y1.diff(t, t) ##
y2.diff(t, t) ##
example1_python3.ipynb
cknoll/displaytools
mit
Transposition Sometimes it can save much space if a return value is displayed in transposed form (while still being assigned untransposed). Compare these examples:
xx = sp.Matrix(sp.symbols('x1:11')) ##
yy = sp.Matrix(sp.symbols('y1:11')) ##:T
xx.shape, yy.shape ##

# combination with other comments
a = 3  # comment ##:

# Multiline statements and indented lines are not yet supported:
a = [1, 2] ##:
if 1:
    b = [10, 20] ##:
c = [100, 200] ##:
example1_python3.ipynb
cknoll/displaytools
mit
Plotting a 2-dimensional function This is a visualization of the function $f(x, y) = \cos(x^2 + y^2)$
fig = plt.figure(
    title="Cosine",
    layout=Layout(width="650px", height="650px"),
    min_aspect_ratio=1,
    max_aspect_ratio=1,
    padding_y=0,
)
heatmap = plt.heatmap(color, x=x, y=y)
fig
examples/Marks/Pyplot/HeatMap.ipynb
bloomberg/bqplot
apache-2.0
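The function itself can be evaluated on a grid without bqplot — a sketch of the color matrix the heatmap displays, on a tiny illustrative grid (the notebook uses a much finer one):

```python
import math

n = 5  # tiny illustrative grid
xs = [-2 + 4 * i / (n - 1) for i in range(n)]  # [-2, -1, 0, 1, 2]
ys = xs

# color[i][j] = cos(x^2 + y^2), the quantity fed to the heatmap
color = [[math.cos(x * x + y * y) for x in xs] for y in ys]
print(color[n // 2][n // 2])  # f(0, 0) = cos(0) = 1.0
```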
Displaying an image The HeatMap can be used as-is to display a 2D grayscale image, by feeding the matrix of pixel intensities to the color attribute.
from scipy.misc import ascent

Z = ascent()
Z = Z[::-1, :]
aspect_ratio = Z.shape[1] / Z.shape[0]
img = plt.figure(
    title="Ascent",
    layout=Layout(width="650px", height="650px"),
    min_aspect_ratio=aspect_ratio,
    max_aspect_ratio=aspect_ratio,
    padding_y=0,
)
plt.scales(scales={"color": ColorScale(schem...
examples/Marks/Pyplot/HeatMap.ipynb
bloomberg/bqplot
apache-2.0
Microsoft Emotion API Data Images were fed into the API by hand since there were so few; this step was automated using the API for the Baseline data.
def read_jsons(f, candidate):
    tmp_dict = {}
    with open(f) as json_file:
        data = json.load(json_file)
        for i in data[0]['scores']:
            if data[0]['scores'][i] > 0.55:  # confidence score threshold
                tmp_dict[i] = data[0]['scores'][i]
            else:
                tmp_dict[i] = n...
IMAGE_BOX/ImageAPI_analysis.ipynb
comp-journalism/Baseline_Problem_for_Algorithm_Audits
mit
Plotting sentiment for each image.
HCDF.plot(kind='bar', ylim=(0, 1))
plt.legend(bbox_to_anchor=(1.1, 1))
row_list = []
get_json(basefilepath, 'donald_trump')
DTDF = pd.DataFrame(row_list)
DTDF.head(12)
IMAGE_BOX/ImageAPI_analysis.ipynb
comp-journalism/Baseline_Problem_for_Algorithm_Audits
mit
Plotting sentiment for each image
DTDF.plot(kind='bar',ylim=(0,1)) plt.legend(bbox_to_anchor=(1.12, 1))
IMAGE_BOX/ImageAPI_analysis.ipynb
comp-journalism/Baseline_Problem_for_Algorithm_Audits
mit
Load data
HOME_DIR = 'd:/larc_projects/job_analytics/'
DATA_DIR = HOME_DIR + 'data/clean/'
RES_DIR = HOME_DIR + 'results/'
skill_df = pd.read_csv(DATA_DIR + 'skill_index.csv')
extract_feat.ipynb
musketeer191/job_analytics
gpl-3.0
Build feature matrix The matrix is a JD-Skill matrix where each entry $e(d, s)$ is the number of times skill $s$ occurs in job description $d$.
doc_skill = buildDocSkillMat(jd_docs, skill_df, folder=DATA_DIR)
with open(DATA_DIR + 'doc_skill.mtx', 'w') as f:
    mmwrite(f, doc_skill)
extract_feat.ipynb
musketeer191/job_analytics
gpl-3.0
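buildDocSkillMat is project-specific, but the entry $e(d, s)$ it computes — the occurrence count of each skill per document — can be sketched with a Counter. The skill list and job descriptions below are made up for illustration:

```python
from collections import Counter

skills = ["python", "sql", "excel"]          # hypothetical skill vocabulary
jds = [
    "python and sql, more python",
    "excel reporting with excel and sql",
]

# One row per JD, one column per skill, entry = occurrence count.
matrix = []
for jd in jds:
    counts = Counter(jd.replace(",", " ").split())
    matrix.append([counts[s] for s in skills])

print(matrix)  # [[2, 1, 0], [0, 1, 2]]
```

Real implementations typically build a sparse matrix (as the notebook's .mtx output suggests), but the counting logic is the same.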
Get skills in each JD Using the matrix, we can retrieve skills in each JD.
extracted_skill_df = getSkills4Docs(docs=doc_index['doc'], doc_term=doc_skill, skills=skills)
df = pd.merge(doc_index, extracted_skill_df, left_index=True, right_index=True)
print(df.shape)
df.head()
df.to_csv(DATA_DIR + 'doc_index.csv')  # later no need to extract skills again
extract_feat.ipynb
musketeer191/job_analytics
gpl-3.0
Extract features of new documents
reload(ja_helpers)
from ja_helpers import *

# load frameworks of SF as docs
pst_docs = pd.read_csv(DATA_DIR + 'SF/pst.csv')
pst_docs
pst_skill = buildDocSkillMat(pst_docs, skill_df, folder=None)
with open(DATA_DIR + 'pst_skill.mtx', 'w') as f:
    mmwrite(f, pst_skill)
extract_feat.ipynb
musketeer191/job_analytics
gpl-3.0
Loading data
from iuvs import io

%autocall 1
files = !ls ~/data/iuvs/level1b/*.gz
files
l1b = io.L1BReader(files[1])
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
The darks_interpolated data-cube consists of the interpolated darks that have been subtracted from the raw image cube for this observation. They are originally named background_dark but I find that confusing with detector_dark.
l1b.darks_interpolated.shape
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
Defining dark0, dark1, and dark2. (These could be the 2nd and 3rd of a set of 3, with 2 taken before the light images, or just 1 and 1.)
dark0 = l1b.detector_dark[0]
dark1 = l1b.detector_dark[1]
dark2 = l1b.detector_dark[2]
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
some dark stats
io.image_stats(dark0)
io.image_stats(dark1)
io.image_stats(dark2)

def compare_darks(dark1, dark2):
    fig, ax = subplots(nrows=2)
    ax[0].imshow(dark1, vmin=0, vmax=1000, cmap='gray')
    ax[1].imshow(dark2, vmin=0, vmax=1000, cmap='gray')

l1b.detector_raw.shape
l1b.detector_dark.shape
rcParams['figure.figsize']...
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
Dark histograms. First, showing how the 3 darks differ in their histograms:
# _, axes = subplots(2)
for i, dark in enumerate([dark0, dark1, dark2]):
    hist(dark.ravel(), 100, range=(0, 5000), log=True, label='dark' + str(i), alpha=0.5)
legend()
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
As one can see, the first 2 darks are very similar and the last dark taken 47 minutes later has a quite different histogram. Let's see how it looks if we just push up the histogram by the mean value difference of the 2 darks:
delta_mean = abs(dark1.mean() - dark2.mean())
delta_mean

def myhist(data, **kwargs):
    hist(data.ravel(), 100, range=(0, 5000), log=True, alpha=0.5, **kwargs)

fig, axes = subplots(nrows=2)
axes = axes.ravel()
for i, dark in enumerate([dark1, dark2]):
    axes[0].hist(dark.ravel(), 100, range=(0, 5000), log=True, lab...
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
The remaining difference in the shape of the histograms makes me believe that a purely additive fix can never make one dark subtractable from the other. Line profiles. The failure of additive correction can also be shown for a line profile at an arbitrary spatial pixel. Below I plot a line profile for row spatial f...
spatial = 30
fig, axes = subplots(ncols=2)
axes = axes.ravel()

def do_plot(ax):
    ax.plot(dark2[spatial], '--', label='dark2')
    ax.plot(dark1[spatial], '--', label='dark1')
    for delta in range(180, 230, 10):
        ax.plot(dark1[spatial] + delta, label=delta)
    ax.legend(loc='best', ncol=2)
    ax.set_ylim(0, 10...
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
What do interpolated darks do? I'm having doubts about the premise that a dark should be subtractable from a dark in all cases. I'm thinking that if the main premise is that our data should tell us what to do, then the corollary is that some kind of interpolated dark between 2 darks is the truth that needs to be subtract...
# missing code for interpolation check
raw0 = l1b.detector_raw[0]
spatial = l1b.detector_raw.shape[1] // 2
plot(raw0[spatial] - dark1[spatial], 'g', label='first light minus 2nd dark')
plot(raw0[spatial] - dark2[spatial], 'b', label='first light minus last dark')
title("Show the importance of taking the rig...
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
How do the darks ratio with each other? Dark 0 / Dark 1
spatial = 20
data = dark0[spatial] / dark1[spatial]
plot(data, label=spatial)
legend(loc='best')
title("one row, first dark / second dark. Mean: {:.2f}".format(data.mean()))
grid()
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
Dark 0 / Dark 2 (last one)
for spatial in range(20, 60, 10):
    plot(dark0[spatial] / dark2[spatial], label=spatial)
legend(loc='best')

raw45 = raw[45]
fig, axes = subplots(nrows=2)
im = axes[0].imshow(dark1, vmax=600, vmin=0)
colorbar(im, ax=axes[0])
im = axes[1].imshow(dark2, vmax=600, vmin=0)
colorbar(im, ax=axes[1])
fig.suptitle("Comparing 2n...
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
Showing again how it is important to subtract the right dark:
spatial = 30
raw45 = l1b.detector_raw[-1]
plot(raw45[spatial] - dark2[spatial], label='last light - dark2')
plot(raw45[spatial] - dark1[spatial], label='last light - dark1')
legend(loc='best')
title("Important to subtract the right dark")

spatial = 30
plot(raw0[spatial] - dark2[spatial], label='raw0 - last dark')
plot(raw0[sp...
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
Multiplicative comparison of darks
myhist(dark2, label='dark2')
for a in linspace(1.4, 1.6, 3):
    myhist(dark1 * a, label=str(a))
legend()

dettemp = l1b.DarkEngineering.T['DET_TEMP']
casetemp = l1b.DarkEngineering.T['CASE_TEMP']
print(dettemp[0] / dettemp[1])
print(casetemp[1] / dettemp[0])
for a in [1.5, 1.52, 1.54]:
    plot(a * dark1.mean(axis=0), label=st...
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
Animation a la Nick's analysis
fig, ax = plt.subplots()
# lines = []
# for i in range(0, 11):
#     frac = 0.8 + i*0.03
#     diff = rawa - frac*dark
#     lines.append(plt.plot(diff[:, j] + i*1000))
diff = rawa - 0.8 * dark[..., 0]
line, = ax.plot(diff[:, 0])
# ax.set_ylim(-1000, 11000)

def animate(j):
    # for i in range(0, 11):
    #     frac = ...
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
Some Data Sets
- Surface Temperature Data - http://data.giss.nasa.gov/gistemp/
- Solar Spot Number Data

Global Surface Temperature Global surface temperature data from http://data.giss.nasa.gov/gistemp/.
data = pandas.read_csv('temperatures.txt', sep='\s*')  # cleaned version
data
plot(data['Year'], data['J-D'], '-o')
xlabel('Year')
ylabel('Temperature Deviation')
examples/Working with Data.ipynb
bblais/Python-for-Science
mit
Actually, the deviation is 1/100 of this, so let's adjust...
x = data['Year']
y = data['J-D'] / 100.0
plot(x, y, '-o')
xlabel('Year')
ylabel('Temperature Deviation')
examples/Working with Data.ipynb
bblais/Python-for-Science
mit
Or if you like Excel
xls = pandas.ExcelFile('temperatures.xls')
print xls.sheet_names
data = xls.parse('Sheet 1')
data
examples/Working with Data.ipynb
bblais/Python-for-Science
mit
Station Data This data is from http://data.giss.nasa.gov/gistemp/station_data/
data=pandas.read_csv('station.txt',sep='\s*') data
examples/Working with Data.ipynb
bblais/Python-for-Science
mit
This plot will look weird, because of the 999's.
x, y = data['YEAR'], data['metANN']
plot(x, y, '-o')
xlabel('Year')
ylabel('Temperature Deviation')
examples/Working with Data.ipynb
bblais/Python-for-Science
mit
Replace the 999's with Not-a-Number (NaN), which is ignored in plots.
y[y > 400] = NaN
plot(x, y, '-o')
xlabel('Year')
ylabel('Temperature Deviation')
examples/Working with Data.ipynb
bblais/Python-for-Science
mit
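The sentinel replacement can be sketched without NumPy or pandas — mapping the 999 placeholders to NaN, which plotting libraries then skip (the station values below are made up):

```python
import math

met_ann = [10.2, 11.1, 999.9, 10.8, 999.9]  # illustrative station values

# Anything above 400 is a missing-data sentinel; replace it with NaN.
cleaned = [v if v <= 400 else float("nan") for v in met_ann]
print(cleaned)

# NaN entries are excluded when computing, e.g., a mean of valid values:
valid = [v for v in cleaned if not math.isnan(v)]
print(sum(valid) / len(valid))
```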
Fitting. First, ordinary least squares (ols):
model = pandas.ols(x=x, y=y)
print model.summary
print "Beta", model.beta
m, b = model.beta['x'], model.beta['intercept']
plot(x, y, '-o')
x1 = linspace(1890, 2000, 100)
y1 = x1*m + b
plot(x1, y1, '-')
xlabel('Year')
ylabel('Temperature Deviation')
data
examples/Working with Data.ipynb
bblais/Python-for-Science
mit
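pandas.ols has since been removed from pandas, but the slope and intercept it returns can be sketched from the closed-form simple-least-squares equations (toy data, not the station series):

```python
def ols(x, y):
    """Simple least squares: returns (slope, intercept)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    m = sxy / sxx            # slope
    b = mean_y - m * mean_x  # intercept
    return m, b

# Exactly linear data (y = 2x + 1) recovers the true coefficients.
print(ols([0, 1, 2, 3], [1, 3, 5, 7]))  # (2.0, 1.0)
```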
Next, try fitting a polynomial
result = fit(x, y, 'power', 2)
xfit = linspace(1850, 2000, 100)
yfit = fitval(result, xfit)
plot(x, y, '-o')
plot(xfit, yfit, '-')
xlabel('Year')
ylabel('Temperature Deviation')
examples/Working with Data.ipynb
bblais/Python-for-Science
mit
printing out the results of the fit.
result
examples/Working with Data.ipynb
bblais/Python-for-Science
mit
This should do the same thing.
result=fit(x,y,'quadratic') result
examples/Working with Data.ipynb
bblais/Python-for-Science
mit
Do a super-crazy high-order polynomial
result = fit(x, y, 'power', 4)
xfit = linspace(1890, 1980, 100)
yfit = fitval(result, xfit)
plot(x, y, '-o')
plot(xfit, yfit, '-')
xlabel('Year')
ylabel('Temperature Deviation')
result
examples/Working with Data.ipynb
bblais/Python-for-Science
mit
Constants We can build constant ops using constant; its API is quite simple: constant(value, dtype=None, shape=None, name='Const') We pass it a value, which can be any kind of tensor (a scalar, a vector, a matrix, etc.), and then optionally we can pass the data type, the ...
# Creating constants
# The value returned by the constructor is the value of the constant.

# create constants a=2 and b=3
a = tf.constant(2)
b = tf.constant(3)

# create 3x3 matrices
matriz1 = tf.constant([[1, 3, 2],
                       [1, 0, 0],
                       [1, 2, 2]])
matriz2 = tf.constant([[1, ...
content/notebooks/IntroTensorFlow.ipynb
relopezbriega/mi-python-blog
gpl-2.0
Sessions Now that we have defined some constant ops and some computations with them, we need to launch the graph inside a Session. To do this we use the Session object. This object encapsulates the environment in which the operations we defined in the graph are executed and the tensors are evaluate...
# Everything in TensorFlow happens inside a Session

# we create the session, perform some operations with the constants
# and launch the session
with tf.Session() as sess:
    print("Sum of the constants: {}".format(sess.run(suma)))
    print("Multiplication of the constants: {}".format(sess.run(mult)))
    print("...
content/notebooks/IntroTensorFlow.ipynb
relopezbriega/mi-python-blog
gpl-2.0
Sessions must be closed to release their resources, so it is good practice to put the Session inside a "with" block, which closes it automatically when the block finishes executing. To run the operations and evaluate the tensors we use Session.run(). Persistent Variables. Variables...
# We create a variable and initialize it to 0
estado = tf.Variable(0, name="contador")

# We create the op that will add one to the variable `estado`.
uno = tf.constant(1)
nuevo_valor = tf.add(estado, uno)
actualizar = tf.assign(estado, nuevo_valor)

# Variables must be initialized by the `init` operation, the...
content/notebooks/IntroTensorFlow.ipynb
relopezbriega/mi-python-blog
gpl-2.0
Symbolic variables (placeholders) Symbolic variables, or placeholders, let us feed data into the operations while the graph is being executed. These placeholders must be fed before they are evaluated in the session, otherwise we will get an error.
# Example of symbolic variables in graphs

# The value returned by the constructor represents the output of the
# variable (the input of the variable is defined in the session)

# We create a float placeholder: a 4x4 tensor.
x = tf.placeholder(tf.float32, shape=(4, 4))
y = tf.matmul(x, x)

with tf.Session() as ...
content/notebooks/IntroTensorFlow.ipynb
relopezbriega/mi-python-blog
gpl-2.0
Now we know, in broad terms, the mechanics behind how TensorFlow works and how we should go about creating operations inside graphs. Let's see whether we can implement simple neuron models with the help of this library. Simple neuron example. A simple neuron will hav...
# Neuron with TensorFlow

# Define the inputs
entradas = tf.placeholder("float", name='Entradas')
datos = np.array([[0, 0],
                  [1, 0],
                  [0, 1],
                  [1, 1]])

# Define the outputs
uno = lambda: tf.constant(1.0)
cero = lambda: tf.constant(0.0)

with tf.name_scope('Pesos'):
    ...
content/notebooks/IntroTensorFlow.ipynb
relopezbriega/mi-python-blog
gpl-2.0
Here we can see the input data for $x_1$ and $x_2$, the result of the activation function, and the final decision the neuron makes based on this last result. As the truth table shows, the neuron tells us that $x_1$ AND $x_2$ is only true when both are true, which is correct. Neuro...
# OR neuron: we only change the value of the bias
with tf.Session() as sess:
    # to build the graph
    summary_writer = tf.train.SummaryWriter(logs_path, graph=sess.graph)
    # to build the truth table
    x_1 = []
    x_2 = []
    out = []
    act = []
    for i in ra...
content/notebooks/IntroTensorFlow.ipynb
relopezbriega/mi-python-blog
gpl-2.0
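The bias trick described above can be sketched framework-free — a pure-Python stand-in for the TensorFlow graph, with a step-activation neuron whose input weights are fixed at 1 and where only the bias differs between AND and OR (the weight values are hand-picked for illustration):

```python
def neuron(x1, x2, bias):
    """Step-activation neuron with both input weights fixed at 1."""
    return 1 if x1 + x2 + bias > 0 else 0

inputs = [(0, 0), (1, 0), (0, 1), (1, 1)]
print([neuron(a, b, bias=-1.5) for a, b in inputs])  # AND: [0, 0, 0, 1]
print([neuron(a, b, bias=-0.5) for a, b in inputs])  # OR:  [0, 1, 1, 1]
```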
As we can see, by simply changing the weight of the bias we turn our AND neuron into an OR neuron. As the truth table shows, the only case in which $x_1$ OR $x_2$ is false is when both are false. XNOR Neural Network The XNOR function is a more complicated case and cannot be modeled using a single ...
# XNOR neural network with TensorFlow

# Define the inputs
entradas = tf.placeholder("float", name='Entradas')
datos = np.array([[0, 0],
                  [1, 0],
                  [0, 1],
                  [1, 1]])

# Define the outputs
uno = lambda: tf.constant(1.0)
cero = lambda: tf.constant(0.0)

with tf.name_scope('Pes...
content/notebooks/IntroTensorFlow.ipynb
relopezbriega/mi-python-blog
gpl-2.0
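XNOR needs a hidden layer; a minimal sketch with hand-picked weights (not the notebook's TensorFlow graph): two hidden step neurons compute AND and NOR, and the output neuron ORs them together.

```python
def step(z):
    return 1 if z > 0 else 0

def xnor(x1, x2):
    h_and = step(x1 + x2 - 1.5)       # fires only on (1, 1)
    h_nor = step(-x1 - x2 + 0.5)     # fires only on (0, 0)
    return step(h_and + h_nor - 0.5)  # OR of the two hidden units

print([xnor(a, b) for a, b in [(0, 0), (1, 0), (0, 1), (1, 1)]])  # [1, 0, 0, 1]
```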
As we can see, the neural network gives the correct result for the logical XNOR function: it is only true if both values are true or both are false. Up to here we have implemented simple neurons and set their weights and bias by hand; that is easy for these examples, but in real life, if we wan...
# importing the dataset
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
content/notebooks/IntroTensorFlow.ipynb
relopezbriega/mi-python-blog
gpl-2.0
Exploring the MNIST dataset
# shape of the dataset: 55000 images
mnist.train.images.shape

# each image is a 28x28 array with each pixel
# encoded in grayscale
digito1 = mnist.train.images[0].reshape((28, 28))

# visualizing the first digit
plt.imshow(digito1, cmap=cm.Greys)
plt.show()

# correct value
mnist.train.labels[0].non...
content/notebooks/IntroTensorFlow.ipynb
relopezbriega/mi-python-blog
gpl-2.0
Building the multilayer perceptron Now that we know the data we will be working with, we are ready to build the model. We will build a multilayer perceptron, one of the simplest neural networks. The model will have two hidden layers, which will be activated with the ac...
# Parameters
tasa_aprendizaje = 0.001
epocas = 15
lote = 100
display_step = 1
logs_path = "/tmp/tensorflow_logs/perceptron"

# Network parameters
n_oculta_1 = 256  # 1st feature layer
n_oculta_2 = 256  # 2nd feature layer
n_entradas = 784  # MNIST data (img shape: 28*28)
n_clases = 10    # total number of classes to clas...
content/notebooks/IntroTensorFlow.ipynb
relopezbriega/mi-python-blog
gpl-2.0
Dimensionality reduction Many types of data can contain a massive number of features. Whether it is individual pixels in images, transcripts or proteins in -omics data, or word occurrences in text data, this bounty of features brings with it several challenges. Visualizing more than 4 dimensions directly is difficu...
from sklearn.feature_selection import VarianceThreshold

X = np.array([[0, 0, 1],
              [0, 1, 0],
              [1, 0, 0],
              [0, 1, 1],
              [0, 1, 0],
              [0, 1, 1]])
print(X)
sel = VarianceThreshold(threshold=(.8 * (1 - .8)))
X_selected = sel.fit_transform(X)
print(X_selected)

from sklearn.datasets import load_iris
from sklearn.feature_selectio...
Wk10-feature_selection_dimension_reduction_clustering/Wk10-dimensionality-reduction-clustering.ipynb
beyondvalence/biof509_wtl
mit
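The variance criterion behind VarianceThreshold can be sketched without scikit-learn: for boolean features, drop any column whose variance falls at or below .8 * (1 - .8) = 0.16, i.e. any feature that takes the same value in more than 80% of samples (same toy matrix as above):

```python
X = [[0, 0, 1],
     [0, 1, 0],
     [1, 0, 0],
     [0, 1, 1],
     [0, 1, 0],
     [0, 1, 1]]

threshold = 0.8 * (1 - 0.8)  # 0.16

kept = []
for j in range(len(X[0])):
    col = [row[j] for row in X]
    mean = sum(col) / len(col)
    var = sum((v - mean) ** 2 for v in col) / len(col)  # population variance
    if var > threshold:  # keep only features whose variance exceeds the threshold
        kept.append(j)

print(kept)  # [1, 2] -- the first column is nearly constant and gets dropped
```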
When iteratively removing weak features the choice of model is important. We will discuss the different models available for regression and classification next week but there are a few points relevant to feature selection we will cover here. A linear model is a useful and easily interpreted model, and when used for fea...
from sklearn import linear_model
from sklearn.datasets import load_digits
from sklearn.feature_selection import RFE

# Load the digits dataset
digits = load_digits()
X = digits.images.reshape((len(digits.images), -1))
y = digits.target

# Create the RFE object and rank each pixel
clf = linear_model.LogisticRegression(...
Wk10-feature_selection_dimension_reduction_clustering/Wk10-dimensionality-reduction-clustering.ipynb
beyondvalence/biof509_wtl
mit
The disadvantage with L1 regularization is that if multiple features are correlated only one of them will have a high coefficient.
from sklearn.linear_model import RandomizedLogisticRegression

randomized_logistic = RandomizedLogisticRegression()

# Load the digits dataset
digits = load_digits()
X = digits.images.reshape((len(digits.images), -1))
y = digits.target

randomized_logistic.fit(X, y)
ranking = randomized_logistic.scores_.reshape(digits....
Wk10-feature_selection_dimension_reduction_clustering/Wk10-dimensionality-reduction-clustering.ipynb
beyondvalence/biof509_wtl
mit
Also important is to normalize the means and variances of the features before comparing the coefficients. The approaches we covered last week are crucial for feature selection from a linear model. A limitation of linear models is that any interactions must be hand coded. A feature that is poorly predictive overall may ...
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=100)

# Load the digits dataset
digits = load_digits()
X = digits.images.reshape((len(digits.images), -1))
y = digits.target

clf.fit(X, y)
ranking = clf.feature_importances_.reshape(digits.images[0].shape)

# Plot pixel rank...
Wk10-feature_selection_dimension_reduction_clustering/Wk10-dimensionality-reduction-clustering.ipynb
beyondvalence/biof509_wtl
mit
Transformation into lower dimensional space An alternative approach is to transform the data in such a way that the variance observed in the features is maintained while only using a smaller number of dimensions. This approach includes all the features so is not a simpler model when considering the entire process from ...
# http://scikit-learn.org/stable/auto_examples/decomposition/plot_pca_vs_lda.html#example-decomposition-plot-pca-vs-lda-py from sklearn import datasets from sklearn.decomposition import PCA from sklearn.discriminant_analysis import LinearDiscriminantAnalysis iris = datasets.load_iris() X = iris.data y = iris.target ...
Wk10-feature_selection_dimension_reduction_clustering/Wk10-dimensionality-reduction-clustering.ipynb
beyondvalence/biof509_wtl
mit
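When choosing how many components to keep, `explained_variance_ratio_` reports how much of the variance each component captures. A short sketch on the same iris data used above:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

iris = load_iris()
pca = PCA(n_components=2).fit(iris.data)
# Fraction of total variance captured by each of the first two components
print(pca.explained_variance_ratio_)
```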
Exercises 1. Apply feature selection to the Olivetti faces dataset, identifying the most important 25% of features. 2. Apply PCA and LDA to the digits dataset used above.
# Exercise 1 import sklearn.datasets faces = sklearn.datasets.fetch_olivetti_faces() # Load the olivetti faces dataset X = faces.data y = faces.target plt.matshow(X[0].reshape((64,64))) plt.colorbar() plt.title("face1") plt.show() from sklearn.ensemble import RandomForestClassifier clf = RandomForestClassifier(n_est...
Wk10-feature_selection_dimension_reduction_clustering/Wk10-dimensionality-reduction-clustering.ipynb
beyondvalence/biof509_wtl
mit
Clustering In clustering we attempt to group observations in such a way that observations assigned to the same cluster are more similar to each other than to observations in other clusters. Although labels may be known, clustering is usually performed on unlabeled data as a step in exploratory data analysis. Previously...
import matplotlib import matplotlib.pyplot as plt from skimage.data import camera from skimage.filters import threshold_otsu matplotlib.rcParams['font.size'] = 9 image = camera() thresh = threshold_otsu(image) binary = image > thresh #fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(8, 2.5)) fig = plt.figure(fi...
Wk10-feature_selection_dimension_reduction_clustering/Wk10-dimensionality-reduction-clustering.ipynb
beyondvalence/biof509_wtl
mit
Different clustering algorithms Cluster comparison The following algorithms are provided by scikit-learn: K-means, Affinity propagation, Mean Shift, Spectral clustering, Ward Agglomerative Clustering, DBSCAN, and Birch. K-means clustering divides samples between clusters by attempting to minimize the within-cluster sum of squar...
from sklearn import cluster, datasets dataset, true_labels = datasets.make_blobs(n_samples=200, n_features=2, random_state=0, centers=3, cluster_std=0.1) fig, ax = plt.subplots(1,1) ax.scatter(dataset[:,0], dataset[:,1], c=true_labels) plt.show() # Clustering algorithm can ...
Wk10-feature_selection_dimension_reduction_clustering/Wk10-dimensionality-reduction-clustering.ipynb
beyondvalence/biof509_wtl
mit
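As a minimal end-to-end sketch of the K-means step described above (synthetic blobs, similar to but not the notebook's exact parameters):

```python
from sklearn import cluster, datasets

X, true_labels = datasets.make_blobs(n_samples=200, n_features=2, centers=3,
                                     cluster_std=0.1, random_state=0)
kmeans = cluster.KMeans(n_clusters=3, n_init=10, random_state=0)
pred = kmeans.fit_predict(X)
print(sorted(set(pred)))  # [0, 1, 2] — every cluster label is used
print(round(kmeans.inertia_, 2))  # within-cluster sum of squares (small for tight blobs)
```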
Model evaluation Several approaches have been developed for evaluating clustering models, but most are limited by requiring the true cluster assignments to be known. In the typical use case for clustering this is not known, the goal being exploratory analysis. Ultimately, a model is just a tool to better understand the structure ...
from sklearn import cluster, datasets, metrics dataset, true_labels = datasets.make_blobs(n_samples=200, n_features=2, random_state=0, centers=[[x,y] for x in range(3) for y in range(3)], cluster_std=0.1) fig, ax = plt.subplots(1,1) ax.scatter(dataset[:,0], dataset[:,1], c...
Wk10-feature_selection_dimension_reduction_clustering/Wk10-dimensionality-reduction-clustering.ipynb
beyondvalence/biof509_wtl
mit
This is an ideal case with clusters that can be clearly distinguished - convex clusters with similar distributions and large gaps between the clusters. Most real-world datasets will not be as easy to work with, and determining the correct number of clusters will be more challenging. As an example, compare the performanc...
# Exercise 1 dataset, true_labels = datasets.make_blobs(n_samples=200, n_features=2, random_state=0, centers=[[x,y] for x in range(0,20,2) for y in range(2)], cluster_std=0.2) inertia = [] predictions = [] for i in range(1,25): means = cluster.KMeans(n_clusters=i) p...
Wk10-feature_selection_dimension_reduction_clustering/Wk10-dimensionality-reduction-clustering.ipynb
beyondvalence/biof509_wtl
mit
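The elbow heuristic on inertia used in the exercise above can be complemented with the silhouette score, which needs no true labels and tends to peak at the natural number of clusters. A sketch on four well-separated synthetic blobs (explicit centers chosen here only to make the example deterministic):

```python
from sklearn import cluster, datasets, metrics

X, _ = datasets.make_blobs(n_samples=200, n_features=2, random_state=0,
                           centers=[[0, 0], [0, 5], [5, 0], [5, 5]],
                           cluster_std=0.2)
scores = {}
for k in range(2, 7):
    labels = cluster.KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = metrics.silhouette_score(X, labels)
    print(k, round(scores[k], 3))
# The score should peak at the true number of blobs
print("best k:", max(scores, key=scores.get))
```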
Orthographic features alone contribute to relatively high precision but very low recall. This implies that orthographic features alone are not enough to carve out the decision boundary for all the positive instances, hence the low recall. However, the decision boundary that is created is very selective, as indicated by the high ...
import subprocess """ Creates models for each fold and runs evaluation with results """ featureset = "om" entity_name = "adversereaction" for fold in range(1,1): #training has already been done training_data = "../ARFF_Files/%s_ARFF/_%s/_train/%s_train-%i.arff" % (entity_name, featureset, entity_name, fold) o...
NERD/DecisionTreeRandomForestEnsemble/Random Forest Ensemble NER Model Results.ipynb
bmcinnes/VCU-VIP-Nanoinformatics
gpl-3.0
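The high-precision/low-recall pattern described above can be illustrated with a toy example (hypothetical token labels, not the actual NER output): a classifier that predicts positives only when very confident gets every prediction right but misses most entities.

```python
from sklearn.metrics import precision_score, recall_score

# Toy labels: 1 = entity token, 0 = other
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 0, 0, 0, 0, 0, 0, 0]
print(precision_score(y_true, y_pred))  # 1.0: every predicted positive is correct
print(recall_score(y_true, y_pred))     # 0.25: most true positives are missed
```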
It appears that adding the morphological features greatly increased classifier performance.<br> Below is the underlying decision tree structure representing the classifier. Orthographic + Morphological + Lexical Features
import subprocess """ Creates models for each fold and runs evaluation with results """ featureset = "omt" entity_name = "adversereaction" for fold in range(1,1): #training has already been done training_data = "../ARFF_Files/%s_ARFF/_%s/_train/%s_train-%i.arff" % (entity_name, featureset, entity_name, fold) ...
NERD/DecisionTreeRandomForestEnsemble/Random Forest Ensemble NER Model Results.ipynb
bmcinnes/VCU-VIP-Nanoinformatics
gpl-3.0
Vertex SDK: Train and deploy an XGBoost model with pre-built containers (formerly hosted runtimes) Installation Install the latest (preview) version of the Vertex SDK.
! pip3 install -U google-cloud-aiplatform --user
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Clients The Vertex SDK uses a client/server model. On your side (the Python script) you create a client that sends requests to and receives responses from the server (Vertex). You will use several clients in this tutorial, so set them all up up front. Model Service for managed models. Endpoint Service for deploym...
# client options same for all services client_options = {"api_endpoint": API_ENDPOINT} def create_model_client(): client = aip.ModelServiceClient(client_options=client_options) return client def create_endpoint_client(): client = aip.EndpointServiceClient(client_options=client_options) return client...
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Train a model projects.locations.customJobs.create Request
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/xgboost-cpu.1-1:latest" JOB_NAME = "custom_job_XGB" + TIMESTAMP WORKER_POOL_SPEC = [ { "replica_count": 1, "machine_spec": {"machine_type": "n1-standard-4"}, "python_package_spec": { "executor_image_uri": TRAIN_IMAGE, ...
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "parent": "projects/migration-ucaip-training/locations/us-central1", "customJob": { "displayName": "custom_job_XGB20210323142337", "jobSpec": { "workerPoolSpecs": [ { "machineSpec": { "machineType": "n1-standard-4" }, "replicaCount": ...
request = clients["job"].create_custom_job(parent=PARENT, custom_job=training_job)
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "name": "projects/116273516712/locations/us-central1/customJobs/7371064379959148544", "displayName": "custom_job_XGB20210323142337", "jobSpec": { "workerPoolSpecs": [ { "machineSpec": { "machineType": "n1-standard-4" }, "replicaCount": "1", "di...
# The full unique ID for the custom training job custom_training_id = request.name # The short numeric ID for the custom training job custom_training_short_id = custom_training_id.split("/")[-1] print(custom_training_id)
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "name": "projects/116273516712/locations/us-central1/customJobs/7371064379959148544", "displayName": "custom_job_XGB20210323142337", "jobSpec": { "workerPoolSpecs": [ { "machineSpec": { "machineType": "n1-standard-4" }, "replicaCount": "1", "di...
while True: response = clients["job"].get_custom_job(name=custom_training_id) if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED: print("Training job has not completed:", response.state) if response.state == aip.PipelineState.PIPELINE_STATE_FAILED: break else: ...
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Deploy the model projects.locations.models.upload Request
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest" model = { "display_name": "custom_job_XGB" + TIMESTAMP, "artifact_uri": model_artifact_dir, "container_spec": {"image_uri": DEPLOY_IMAGE, "ports": [{"container_port": 8080}]}, } print(MessageToJson(aip.UploadModelRequest(parent=PAR...
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "parent": "projects/migration-ucaip-training/locations/us-central1", "model": { "displayName": "custom_job_XGB20210323142337", "containerSpec": { "imageUri": "gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest", "ports": [ { "containerPort": 8080 ...
request = clients["model"].upload_model(parent=PARENT, model=model)
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "model": "projects/116273516712/locations/us-central1/models/2093698837704081408" }
# The full unique ID for the model version model_id = result.model
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Make batch predictions Make a batch prediction file
import json import tensorflow as tf INSTANCES = [[1.4, 1.3, 5.1, 2.8], [1.5, 1.2, 4.7, 2.4]] gcs_input_uri = "gs://" + BUCKET_NAME + "/" + "test.jsonl" with tf.io.gfile.GFile(gcs_input_uri, "w") as f: for i in INSTANCES: f.write(str(i) + "\n") ! gsutil cat $gcs_input_uri
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: [1.4, 1.3, 5.1, 2.8] [1.5, 1.2, 4.7, 2.4] projects.locations.batchPredictionJobs.create Request
model_parameters = Value( struct_value=Struct( fields={ "confidence_threshold": Value(number_value=0.5), "max_predictions": Value(number_value=10000.0), } ) ) batch_prediction_job = { "display_name": "custom_job_XGB" + TIMESTAMP, "model": model_id, "input_con...
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "parent": "projects/migration-ucaip-training/locations/us-central1", "batchPredictionJob": { "displayName": "custom_job_XGB20210323142337", "model": "projects/116273516712/locations/us-central1/models/2093698837704081408", "inputConfig": { "instancesFormat": "jsonl", "gcsSo...
request = clients["job"].create_batch_prediction_job( parent=PARENT, batch_prediction_job=batch_prediction_job )
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "name": "projects/116273516712/locations/us-central1/batchPredictionJobs/1415053872761667584", "displayName": "custom_job_XGB20210323142337", "model": "projects/116273516712/locations/us-central1/models/2093698837704081408", "inputConfig": { "instancesFormat": "jsonl", "gcsSource": { ...
# The fully qualified ID for the batch job batch_job_id = request.name # The short numeric ID for the batch job batch_job_short_id = batch_job_id.split("/")[-1] print(batch_job_id)
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "name": "projects/116273516712/locations/us-central1/batchPredictionJobs/1415053872761667584", "displayName": "custom_job_XGB20210323142337", "model": "projects/116273516712/locations/us-central1/models/2093698837704081408", "inputConfig": { "instancesFormat": "jsonl", "gcsSource": { ...
def get_latest_predictions(gcs_out_dir): """ Get the latest prediction subfolder using the timestamp in the subfolder name""" folders = !gsutil ls $gcs_out_dir latest = "" for folder in folders: subfolder = folder.split("/")[-2] if subfolder.startswith("prediction-"): if subf...
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: ``` ==> gs://migration-ucaip-trainingaip-20210323142337/batch_output/prediction-custom_job_XGB20210323142337-2021_03_23T07_25_10_544Z/prediction.errors_stats-00000-of-00001 <== ==> gs://migration-ucaip-trainingaip-20210323142337/batch_output/prediction-custom_job_XGB20210323142337-2021_03_23T07_25_10_54...
endpoint = {"display_name": "custom_job_XGB" + TIMESTAMP} print( MessageToJson( aip.CreateEndpointRequest(parent=PARENT, endpoint=endpoint).__dict__["_pb"] ) )
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "parent": "projects/migration-ucaip-training/locations/us-central1", "endpoint": { "displayName": "custom_job_XGB20210323142337" } } Call
request = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "name": "projects/116273516712/locations/us-central1/endpoints/1733903448723685376" }
# The full unique ID for the endpoint endpoint_id = result.name # The short numeric ID for the endpoint endpoint_short_id = endpoint_id.split("/")[-1] print(endpoint_id)
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
projects.locations.endpoints.deployModel Request
deployed_model = { "model": model_id, "display_name": "custom_job_XGB" + TIMESTAMP, "dedicated_resources": { "min_replica_count": 1, "max_replica_count": 1, "machine_spec": {"machine_type": "n1-standard-4", "accelerator_count": 0}, }, } print( MessageToJson( aip.Depl...
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "endpoint": "projects/116273516712/locations/us-central1/endpoints/1733903448723685376", "deployedModel": { "model": "projects/116273516712/locations/us-central1/models/2093698837704081408", "displayName": "custom_job_XGB20210323142337", "dedicatedResources": { "machineSpec": { ...
request = clients["endpoint"].deploy_model( endpoint=endpoint_id, deployed_model=deployed_model, traffic_split={"0": 100} )
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0