As there are $3$ different values and $6$ different variables, there are $3^6 = 729$ different ways to assign values to the variables.
print(3 ** 6)
Python/2 Constraint Solver/Map-Coloring.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
Encrypted Media Extension Diversity Analysis

Encrypted Media Extensions (EME) is the controversial draft standard at the W3C which aims to prevent copyright infringement in digital video but opens the door to many issues regarding security, accessibility, privacy, and interoperability. This notebook tries to analyze whether the interests of the important stakeholders were well represented in the debate that happened on the public-html mailing list of the W3C.

Methodology

Any email with EME, Encrypted Media, or Digital Rights Management in the subject line is considered to be about EME. Each participant is then categorized by the region of the world they belong to and by their employer's interest in the debate. Notes about the participants can be found here.

Region methodology:
- Look up their personal website and social media accounts (Twitter, LinkedIn, GitHub) and see if they mention the country they live in (this works in most cases).
- If the person's email uses a country-specific top-level domain, assume that country.
- If a GitHub profile is available, look up the timezone of the last 5 commits.
- For people who have moved from their home country, consider the country where they live now.

Work methodology:
- Look up their personal website and social media accounts (Twitter, LinkedIn, GitHub) to find the employer, and categorize accordingly.
- People who work on Accessibility, Privacy, or Security but also fit into one of the first three categories are placed in one of the first three categories. For example, someone who works on privacy at Google is placed in "DRM platform provider" instead of "Privacy".
- If no other category can be assigned, assign "None of the above".

Other notes

Google's position is very interesting: it is a DRM provider as a browser manufacturer but also a content provider through YouTube, and a fair number of Google employees are against EME due to other concerns.
I've categorized Christian as Content provider because he works on Youtube, and I've placed everyone else as DRM provider.
def filter_messages(df, column, keywords):
    filters = []
    for keyword in keywords:
        filters.append(df[column].str.contains(keyword, case=False))
    return df[reduce(lambda p, q: p | q, filters)]

# Get the archives
pd.options.display.mpl_style = 'default'  # pandas has a set of preferred graph formatting options
mlist = mailman.open_list_archives("https://lists.w3.org/Archives/Public/public-html/",
                                   archive_dir="./archives")

# The spaces around EME are **very** important, otherwise it can catch things like "emerging", "implement" etc.
eme_messages = filter_messages(mlist, 'Subject', [' EME ', 'Encrypted Media', 'Digital Rights Management'])
eme_activites = Archive.get_activity(Archive(eme_messages))
eme_activites.sum(0).sum()

# XXX: Bugzilla might also contain discussions
eme_activites.drop("bugzilla@jessica.w3.org", axis=1, inplace=True)

# Remove duplicate senders
levdf = process.sorted_matrix(eme_activites)
consolidates = []
# gather pairs of names which have a distance of less than 10
for col in levdf.columns:
    for index, value in levdf.loc[levdf[col] < 10, col].iteritems():
        if index != col:  # the name shouldn't be a pair for itself
            consolidates.append((col, index))

# Handpick special cases which aren't covered with string matching
consolidates.extend([(u'Kornel Lesi\u0144ski <kornel@geekhood.net>',
                      u'wrong string <kornel@geekhood.net>'),
                     (u'Charles McCathie Nevile <chaals@yandex-team.ru>',
                      u'Charles McCathieNevile <chaals@opera.com>')])
eme_activites = process.consolidate_senders_activity(eme_activites, consolidates)

sender_categories = pd.read_csv('people_tag.csv', delimiter=',', encoding="utf-8-sig")
# match sender using email only
sender_categories['email'] = [CommonRegex(x).emails[0].lower() for x in sender_categories['name_email']]
sender_categories.index = sender_categories['email']

cat_dicts = {
    "region": {1: "Asia", 2: "Australia and New Zealand", 3: "Europe",
               4: "Africa", 5: "North America", 6: "South America"},
    "work": {1: "Foss Browser Developer", 2: "Content Provider",
             3: "DRM platform provider", 4: "Accessibility",
             5: "Security Researcher", 6: "Other W3C Employee",
             7: "Privacy", 8: "None of the above"},
    "gender": {1: "Female", 2: "Male"},
}

def get_cat_val_func(cat):
    """Given a category type, return a function which gives the category value for a sender."""
    def _get_cat_val(sender):
        try:
            sender_email = CommonRegex(sender).emails[0].lower()
            return cat_dicts[cat][sender_categories.loc[sender_email][cat]]
        except KeyError:
            return "Unknown"
    return _get_cat_val

grouped = eme_activites.groupby(get_cat_val_func("region"), axis=1)
print("Emails sent per region\n")
print(grouped.sum().sum())
print("Total emails: %s" % grouped.sum().sum().sum())
print("Participants per region")
for group in grouped.groups:
    print("%s: %s" % (group, len(grouped.get_group(group).sum())))
print("Total participants: %s" % len(eme_activites.columns))
EME Diversity Analysis.ipynb
hargup/eme_diversity_analysis
gpl-3.0
Notice that there is absolutely no one from Asia, Africa or South America. This is important because the DRM laws, attitude towards IP vary considerably across the world.
grouped = eme_activites.groupby(get_cat_val_func("work"), axis=1)
print("Emails sent per work category")
print(grouped.sum().sum())
print("Participants per work category")
for group in grouped.groups:
    print("%s: %s" % (group, len(grouped.get_group(group).sum())))

grouped = eme_activites.groupby(get_cat_val_func("gender"), axis=1)
print("Emails sent per gender")
print(grouped.sum().sum())
print("Participants per gender")
for group in grouped.groups:
    print("%s: %s" % (group, len(grouped.get_group(group).sum())))
EME Diversity Analysis.ipynb
hargup/eme_diversity_analysis
gpl-3.0
With the cell magic %%writefile <file name>, the content of the code cell below that line will be written into the file <file name> in the current directory. If the file already exists, it will be overwritten.
%%writefile ipython_ncl.ncl
begin
  f = addfile("$HOME/NCL/NUG/Version_1.0/data/rectilinear_grid_2D.nc","r")
  printVarSummary(f)
  t = f->tsurf
  printVarSummary(t)

  wks_type = "png"
  wks_type@wkWidth = 800
  wks_type@wkHeight = 800
  wks = gsn_open_wks(wks_type,"plot_contour")

  res = True
  res@gsnMaximize = True
  res@cnFillOn = True
  res@cnLevelSpacingF = 5
  res@tiMainString = "Title string"

  plot = gsn_csm_contour_map(wks,t(1,:,:),res)
end
Visualization/NCL notebooks/Call_NCL_script_from_python_notebook.ipynb
KMFleischer/PyEarthScience
mit
Run the NCL script and save the messages from stdout into the file log. Then cut off the white space around the plot and display the plot inline.
!ncl ipython_ncl.ncl > log
!convert -trim +repage plot_contour.png plot_contour_small.png
Image('plot_contour_small.png')
Visualization/NCL notebooks/Call_NCL_script_from_python_notebook.ipynb
KMFleischer/PyEarthScience
mit
We will use the AirQualityUCI.csv file as our dataset. It is a ';'-separated file, so we'll pass that as a parameter to the read_csv function. We'll also use the parse_dates parameter so that pandas recognizes the 'Date' and 'Time' columns and formats them accordingly.
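The loading call itself is not shown in this excerpt; a minimal sketch of what it presumably looks like, using a tiny inline sample in place of the real AirQualityUCI.csv file:

```python
import io
import pandas as pd

# A small inline sample standing in for AirQualityUCI.csv: ';'-separated,
# with separate Date and Time columns and decimal commas in the values.
sample = io.StringIO(
    "Date;Time;CO(GT)\n"
    "10/03/2004;18.00.00;2,6\n"
    "10/03/2004;19.00.00;2,0\n"
)

# sep=';' handles the separator; parse_dates turns 'Date' into datetimes.
df = pd.read_csv(sample, sep=';', parse_dates=['Date'])
print(df.dtypes)
```

The 'Time' column uses dots between hour, minute, and second, so it is left as a string here; only 'Date' is parsed.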
df.head()
df.dropna(how="all", axis=1, inplace=True)
df.dropna(how="all", axis=0, inplace=True)
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
The data contains null values. So we drop those rows and columns containing nulls.
df.shape
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
The last few rows (specifically 9357 to 9471) of the dataset are empty and of no use, so we'll ignore them too:
df = df[:9357]
df.tail()
cols = list(df.columns[2:])
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
As you might have noticed, the values in our data don't use decimal points but have commas in their place. For example, 9.4 is written as 9,4. We'll correct this with the following piece of code:
for col in cols:
    if df[col].dtype != 'float64':
        # replace the decimal comma with a point and convert to float
        df[col] = df[col].str.replace(',', '.').astype('float64')
df.head()
features = list(df.columns)
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
We will define our features and drop those that are unlikely to help our prediction. For example, the date is not a very useful feature for predicting future values.
features.remove('Date')
features.remove('Time')
features.remove('PT08.S4(NO2)')
X = df[features]
y = df['C6H6(GT)']
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Here we will try to predict the C6H6(GT) values, hence we set it as our target variable. We split the dataset into a 60% training set and a 40% testing set.
# split dataset into 60% training and 40% testing
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.4, random_state=0)
print(X_train.shape, y_train.shape)
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Regression

Please see the previous examples for fuller explanations. We have already used Decision Tree Regression and Random Forest Regression to predict the Electrical Energy Output.

Decision tree regression
from sklearn.tree import DecisionTreeRegressor

tree = DecisionTreeRegressor(max_depth=3)
tree.fit(X_train, y_train)
y_train_pred = tree.predict(X_train)
y_test_pred = tree.predict(X_test)
print('MSE train: %.3f, test: %.3f' % (
    mean_squared_error(y_train, y_train_pred),
    mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
    r2_score(y_train, y_train_pred),
    r2_score(y_test, y_test_pred)))
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Random forest regression
from sklearn.ensemble import RandomForestRegressor

forest = RandomForestRegressor(n_estimators=1000, criterion='mse', random_state=1, n_jobs=-1)
forest.fit(X_train, y_train)
y_train_pred = forest.predict(X_train)
y_test_pred = forest.predict(X_test)
print('MSE train: %.3f, test: %.3f' % (
    mean_squared_error(y_train, y_train_pred),
    mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
    r2_score(y_train, y_train_pred),
    r2_score(y_test, y_test_pred)))
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Linear Regression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
y_predictions = regressor.predict(X_test)
print('R-squared:', regressor.score(X_test, y_test))
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
An R-squared score of 1 would indicate that the model explains 100 percent of the variance in the test set. The performance can change if a different partition of the data is used as the training set, so cross-validation can be used to produce a better estimate of the estimator's performance. Each cross-validation round trains and tests on different partitions of the data to reduce variability.
scores = cross_val_score(regressor, X, y, cv=5)
print("Average of scores: ", scores.mean())
print("Cross validation scores: ", scores)
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Let's inspect some of the model's predictions and plot the true values against the predicted values.

Fitting models with gradient descent

Gradient descent is an optimization algorithm that can be used to estimate the local minimum of a function. We can use gradient descent to find the values of the model's parameters that minimize the value of the cost function. Gradient descent iteratively updates the values of the model's parameters by calculating the partial derivative of the cost function at each step. Although the calculus behind the cost function is not strictly required to use it with scikit-learn, having an intuition for gradient descent will always help you use it effectively.

There are two varieties of gradient descent, distinguished by the number of training instances used to update the model parameters in each training iteration. Batch gradient descent, which is sometimes called simply gradient descent, uses all of the training instances to update the model parameters in each iteration. Stochastic Gradient Descent (SGD), in contrast, updates the parameters using only a single training instance in each iteration; the training instance is usually selected randomly, hence the name stochastic. Stochastic gradient descent is often preferred for optimizing cost functions when there is a large number of training instances, as it converges more quickly than batch gradient descent. Batch gradient descent is a deterministic algorithm and will produce exactly the same parameter values given the same training set. As a randomized algorithm, SGD can produce different parameter estimates each time it is run, and it may not minimize the cost function quite as well, because it uses only single training instances to update the weights.

SGDRegressor

Here we use Stochastic Gradient Descent to estimate the parameters of a model with scikit-learn.
SGDRegressor is an implementation of SGD that can be used even for regression problems with a large number of features. It can be used to optimize different cost functions to fit different linear models; by default, it optimizes the residual sum of squares.
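Before handing things to scikit-learn, the two update rules described above can be sketched in a few lines of plain NumPy. This is a toy illustration (not the notebook's code), fitting y = w*x by minimizing squared error:

```python
import numpy as np

# Toy data: y = 3*x plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 100)
y = 3.0 * x + rng.normal(0, 0.1, 100)

# Batch gradient descent: every step uses ALL training instances.
w = 0.0
for _ in range(200):
    grad = -2 * np.mean((y - w * x) * x)   # gradient of the mean squared error
    w -= 0.1 * grad

# Stochastic gradient descent: each step uses ONE randomly chosen instance.
w_sgd = 0.0
for _ in range(2000):
    i = rng.integers(len(x))
    grad = -2 * (y[i] - w_sgd * x[i]) * x[i]
    w_sgd -= 0.01 * grad

print(w, w_sgd)  # both land near the true slope of 3
```

Note that the batch run is fully deterministic, while the SGD estimate varies slightly with the random seed, exactly as described above.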
# Scaling the features using StandardScaler:
X_scaler = StandardScaler()
y_scaler = StandardScaler()
X_train = X_scaler.fit_transform(X_train)
y_train = y_scaler.fit_transform(y_train)
X_test = X_scaler.transform(X_test)
y_test = y_scaler.transform(y_test)

regressor = SGDRegressor(loss='squared_loss')
scores = cross_val_score(regressor, X_train, y_train, cv=5)
print('Cross validation r-squared scores:', scores)
print('Average cross validation r-squared score:', np.mean(scores))
regressor.fit(X_train, y_train)  # SGDRegressor has no fit_transform; fit is enough
print('Test set r-squared score', regressor.score(X_test, y_test))
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Selecting the best features
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=33)
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Sometimes there are a lot of features in the dataset, so before learning we should try to see which features are more relevant for our learning task, i.e. which of them are better predictors. We will use the SelectKBest method from the feature_selection package and plot the results.
df.columns
feature_names = list(df.columns[2:])
feature_names.remove('PT08.S4(NO2)')

import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.feature_selection import *

fs = SelectKBest(score_func=f_regression, k=5)
X_new = fs.fit_transform(X_train, y_train)
print((fs.get_support(), feature_names))

x_min, x_max = X_new[:, 0].min(), X_new[:, 0].max()
y_min, y_max = y_train.min(), y_train.max()

fig = plt.figure()
#fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# Five subplots, unpack the axes array immediately
fig, axes = plt.subplots(1, 5)
fig.set_size_inches(12, 12)
for i in range(5):
    axes[i].set_aspect('equal')
    axes[i].set_title('Feature ' + str(i))
    axes[i].set_xlabel('Feature')
    axes[i].set_ylabel('Target')
    axes[i].set_xlim(x_min, x_max)
    axes[i].set_ylim(y_min, y_max)
    plt.sca(axes[i])
    plt.scatter(X_new[:, i], y_train)
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
A Linear Model

Let's try a linear model, SGDRegressor, that tries to find the hyperplane that minimizes a certain loss function (typically, the sum of squared distances from each instance to the hyperplane). It uses Stochastic Gradient Descent to find the minimum. Regression poses an additional problem: how should we evaluate our results? Accuracy is not a good idea, since we are predicting real values and it is almost impossible to predict the exact final value. There are several measures that can be used. The most common is the R2 score, or coefficient of determination, which measures the proportion of the variation in the outcomes explained by the model, and is the default score function for regression methods in scikit-learn. This score reaches its maximum value of 1 when the model perfectly predicts all the test target values.
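As a quick illustration of the R2 definition (not part of the notebook's code), it is one minus the ratio of the residual sum of squares to the total sum of squares:

```python
import numpy as np

# Toy targets and predictions to illustrate the coefficient of determination.
y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.8, 5.1, 7.2, 8.9])

ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
r2 = 1 - ss_res / ss_tot
print(r2)  # close to 1 because the predictions are close to the targets
```

A model that always predicted the mean of y_true would score 0, and a model with larger errors than that can even score negative.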
from sklearn.cross_validation import *

def train_and_evaluate(clf, X_train, y_train):
    clf.fit(X_train, y_train)
    print("Coefficient of determination on training set:", clf.score(X_train, y_train))
    # create a k-fold cross validation iterator of k=5 folds
    cv = KFold(X_train.shape[0], 5, shuffle=True, random_state=33)
    scores = cross_val_score(clf, X_train, y_train, cv=cv)
    print("Average coefficient of determination using 5-fold cross-validation:", np.mean(scores))

from sklearn import linear_model
clf_sgd = linear_model.SGDRegressor(loss='squared_loss', penalty=None, random_state=42)
train_and_evaluate(clf_sgd, X_train, y_train)
print(clf_sgd.coef_)
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
You probably noted the penalty=None parameter when we called the method. The penalization parameter for linear regression methods is introduced to avoid overfitting. It does this by penalizing hyperplanes whose coefficients are too large, seeking hyperplanes where each feature contributes more or less equally to the predicted value. This penalty is generally the L2 norm (the squared sum of the coefficients) or the L1 norm (the sum of the absolute values of the coefficients). Let's see how our model works if we introduce an L2 or L1 penalty.
clf_sgd1 = linear_model.SGDRegressor(loss='squared_loss', penalty='l2', random_state=42)
train_and_evaluate(clf_sgd1, X_train, y_train)
clf_sgd2 = linear_model.SGDRegressor(loss='squared_loss', penalty='l1', random_state=42)
train_and_evaluate(clf_sgd2, X_train, y_train)
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Random Forests for Regression Analysis

Finally, let's try Random Forests again, this time in their Extra Trees regression version.
from sklearn import ensemble
clf_et = ensemble.ExtraTreesRegressor(n_estimators=10, random_state=42)
train_and_evaluate(clf_et, X_train, y_train)
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
An interesting side effect of random forest regression is that you can measure how 'important' each feature is when predicting the final result.
# rank the features by their importance (sorting pairs keeps rank and name together)
imp_features = sorted(zip(clf_et.feature_importances_, features), reverse=True)
for rank, f in imp_features:
    print("{0:.3f} <-> {1}".format(float(rank), f))
Regression/Air-Quality-Prediction.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Note that the equals sign (i.e., =) must be surrounded by single spaces, i.e.: lhs = rhs. If the variable name is also desired, this can be triggered by ##:
ydot1 = y1.diff(t) ##:
ydot2 = y2.diff(t) ##:
ydot1_obj = y1.diff(t, evaluate=False) ##:
example1_python3.ipynb
cknoll/displaytools
mit
Printing can be combined with LaTeX rendering:
sp.interactive.printing.init_printing(1)
ydot1 = y1.diff(t) ##:
ydot2 = y2.diff(t) ##:
ydot1_obj = y1.diff(t, evaluate=False) ##:
example1_python3.ipynb
cknoll/displaytools
mit
If there is no assignment taking place, ## nevertheless causes the display of the respective result.
y1.diff(t, t) ##
y2.diff(t, t) ##
example1_python3.ipynb
cknoll/displaytools
mit
Transposition

Sometimes it can save much space if a return value is displayed in transposed form (while still being assigned untransposed). Compare these examples:
xx = sp.Matrix(sp.symbols('x1:11')) ##
yy = sp.Matrix(sp.symbols('y1:11')) ##:T
xx.shape, yy.shape ##

# combination with other comments
a = 3  # comment ##:

# Multiline statements and indented lines are not yet supported:
a = [1, 2] ##:
if 1:
    b = [10, 20] ##:
c = [100, 200] ##:
example1_python3.ipynb
cknoll/displaytools
mit
Plotting a 2-dimensional function This is a visualization of the function $f(x, y) = \text{cos}(x^2+y^2)$
# x, y and color (the grid and the values of cos(x^2 + y^2)) are assumed to
# be defined in an earlier cell of the notebook.
fig = plt.figure(
    title="Cosine",
    layout=Layout(width="650px", height="650px"),
    min_aspect_ratio=1,
    max_aspect_ratio=1,
    padding_y=0,
)
heatmap = plt.heatmap(color, x=x, y=y)
fig
examples/Marks/Pyplot/HeatMap.ipynb
bloomberg/bqplot
apache-2.0
Displaying an image The HeatMap can be used as-is to display a 2D grayscale image, by feeding the matrix of pixel intensities to the color attribute.
from scipy.misc import ascent

Z = ascent()
Z = Z[::-1, :]
aspect_ratio = Z.shape[1] / Z.shape[0]

img = plt.figure(
    title="Ascent",
    layout=Layout(width="650px", height="650px"),
    min_aspect_ratio=aspect_ratio,
    max_aspect_ratio=aspect_ratio,
    padding_y=0,
)
plt.scales(scales={"color": ColorScale(scheme="Greys", reverse=True)})
axes_options = {
    "x": {"visible": False},
    "y": {"visible": False},
    "color": {"visible": False},
}
ascent = plt.heatmap(Z, axes_options=axes_options)
img
examples/Marks/Pyplot/HeatMap.ipynb
bloomberg/bqplot
apache-2.0
Microsoft Emotion API Data

Images were submitted to the API by hand since there were so few. This step was automated using the API for the Baseline data.
def read_jsons(f, candidate):
    tmp_dict = {}
    with open(f) as json_file:
        data = json.load(json_file)
    for i in data[0]['scores']:
        if data[0]['scores'][i] > 0.55:  # confidence score threshold
            tmp_dict[i] = data[0]['scores'][i]
        else:
            tmp_dict[i] = np.nan
    tmp_dict['image_file'] = f.split('/')[-1]
    return tmp_dict

basefilepath = './MicrosoftEmotionAPI/'

def get_json(path, candidate):
    for f in glob.glob(path + '*.json'):
        #print(f)
        if candidate in f:
            row_list.append(read_jsons(f, candidate))

row_list = []
get_json(basefilepath, 'hillary_clinton')
HCDF = pd.DataFrame(row_list)
HCDF.head(11)
IMAGE_BOX/ImageAPI_analysis.ipynb
comp-journalism/Baseline_Problem_for_Algorithm_Audits
mit
Plotting sentiment for each image.
HCDF.plot(kind='bar', ylim=(0, 1))
plt.legend(bbox_to_anchor=(1.1, 1))

row_list = []
get_json(basefilepath, 'donald_trump')
DTDF = pd.DataFrame(row_list)
DTDF.head(12)
IMAGE_BOX/ImageAPI_analysis.ipynb
comp-journalism/Baseline_Problem_for_Algorithm_Audits
mit
Plotting sentiment for each image
DTDF.plot(kind='bar',ylim=(0,1)) plt.legend(bbox_to_anchor=(1.12, 1))
IMAGE_BOX/ImageAPI_analysis.ipynb
comp-journalism/Baseline_Problem_for_Algorithm_Audits
mit
Load data
HOME_DIR = 'd:/larc_projects/job_analytics/'
DATA_DIR = HOME_DIR + 'data/clean/'
RES_DIR = HOME_DIR + 'results/'
skill_df = pd.read_csv(DATA_DIR + 'skill_index.csv')
extract_feat.ipynb
musketeer191/job_analytics
gpl-3.0
Build feature matrix The matrix is a JD-Skill matrix where each entry $e(d, s)$ is the number of times skill $s$ occurs in job description $d$.
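The source of buildDocSkillMat is not shown in this excerpt; a minimal, dependency-free sketch of the counting it presumably performs, on toy data standing in for the real JDs and skill list:

```python
import re

# Toy stand-ins: jd_docs for the job descriptions, skills for skill_df's
# skill column (both hypothetical here).
jd_docs = ["python and sql developer with sql experience",
           "java developer, some python"]
skills = ["python", "sql", "java"]

def doc_skill_matrix(docs, skills):
    """Return a docs x skills matrix of occurrence counts, e(d, s)."""
    mat = []
    for doc in docs:
        tokens = re.findall(r"\w+", doc.lower())
        mat.append([tokens.count(s) for s in skills])
    return mat

mat = doc_skill_matrix(jd_docs, skills)
print(mat)  # [[1, 2, 0], [1, 0, 1]]
```

The real helper likely returns a sparse matrix (the notebook saves it with mmwrite) and handles multi-word skills, which this sketch does not.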
doc_skill = buildDocSkillMat(jd_docs, skill_df, folder=DATA_DIR)
with open(DATA_DIR + 'doc_skill.mtx', 'w') as f:
    mmwrite(f, doc_skill)
extract_feat.ipynb
musketeer191/job_analytics
gpl-3.0
Get skills in each JD Using the matrix, we can retrieve skills in each JD.
extracted_skill_df = getSkills4Docs(docs=doc_index['doc'], doc_term=doc_skill, skills=skills)
df = pd.merge(doc_index, extracted_skill_df, left_index=True, right_index=True)
print(df.shape)
df.head()
df.to_csv(DATA_DIR + 'doc_index.csv')  # later no need to extract skills again
extract_feat.ipynb
musketeer191/job_analytics
gpl-3.0
Extract features of new documents
reload(ja_helpers)
from ja_helpers import *

# load frameworks of SF as docs
pst_docs = pd.read_csv(DATA_DIR + 'SF/pst.csv')
pst_docs

pst_skill = buildDocSkillMat(pst_docs, skill_df, folder=None)
with open(DATA_DIR + 'pst_skill.mtx', 'w') as f:
    mmwrite(f, pst_skill)
extract_feat.ipynb
musketeer191/job_analytics
gpl-3.0
Loading data
from iuvs import io
%autocall 1
files = !ls ~/data/iuvs/level1b/*.gz
files
l1b = io.L1BReader(files[1])
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
The darks_interpolated data-cube consists of the interpolated darks that have been subtracted from the raw image cube for this observation. They are originally named background_dark but I find that confusing with detector_dark.
l1b.darks_interpolated.shape
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
Defining dark1 and dark2. (They could be the 2nd and 3rd of a set of 3 darks, with 2 taken before the light images, or just 1 before and 1 after.)
dark0 = l1b.detector_dark[0]
dark1 = l1b.detector_dark[1]
dark2 = l1b.detector_dark[2]
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
some dark stats
io.image_stats(dark0)
io.image_stats(dark1)
io.image_stats(dark2)

def compare_darks(dark1, dark2):
    fig, ax = subplots(nrows=2)
    ax[0].imshow(dark1, vmin=0, vmax=1000, cmap='gray')
    ax[1].imshow(dark2, vmin=0, vmax=1000, cmap='gray')

l1b.detector_raw.shape
l1b.detector_dark.shape
rcParams['figure.figsize']
compare_darks(dark1, dark2)
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
Dark histograms, first showing how the 3 darks differ in their histograms:
# _, axes = subplots(2)
for i, dark in enumerate([dark0, dark1, dark2]):
    hist(dark.ravel(), 100, range=(0, 5000), log=True, label='dark' + str(i), alpha=0.5)
legend()
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
As one can see, the first 2 darks are very similar, while the last dark, taken 47 minutes later, has a quite different histogram. Let's see how it looks if we just push up the histogram by the difference of the mean values of the 2 darks:
delta_mean = abs(dark1.mean() - dark2.mean())
delta_mean

def myhist(data, **kwargs):
    hist(data.ravel(), 100, range=(0, 5000), log=True, alpha=0.5, **kwargs)

fig, axes = subplots(nrows=2)
axes = axes.ravel()
for i, dark in enumerate([dark1, dark2]):
    axes[0].hist(dark.ravel(), 100, range=(0, 5000), log=True, label='dark' + str(i + 1), alpha=0.5)
axes[0].legend()
axes[0].set_title('Original dark1 and dark2 histograms')
for txt, dark in zip(['dark1+delta_mean', 'dark2'], [dark1 + delta_mean, dark2]):
    axes[1].hist(dark.ravel(), 100, range=(0, 5000), log=True, label=txt, alpha=0.5)
axes[1].legend()
axes[1].set_title('Shifted dark1 histogram by the difference of their means')
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
The remaining difference in the shape of the histograms makes me believe that a pure additive fix can never make one dark subtractable from the other.

Line profiles

The failure of additive correction can also be shown with a line profile at an arbitrary spatial pixel. Below I plot a line profile at row spatial for the 2nd dark (dark1) and the last dark (dark2). In preparation for the future focus on only the sides of the spectral range, I zoom into the plot on both the left and right 25 pixels of the total range in the data. Adding a range of offsets while trying to match the profile of dark1 to dark2 shows that either the minima or the maxima can be fit, but never both at the same time. A possible reason though: could it be eval pixels that prevent this from working? Or are there by far not enough of them?
spatial = 30
fig, axes = subplots(ncols=2)
axes = axes.ravel()

def do_plot(ax):
    ax.plot(dark2[spatial], '--', label='dark2')
    ax.plot(dark1[spatial], '--', label='dark1')
    for delta in range(180, 230, 10):
        ax.plot(dark1[spatial] + delta, label=delta)
    ax.legend(loc='best', ncol=2)
    ax.set_ylim(0, 1000)

do_plot(axes[0])
axes[0].set_xlim(0, 25)
axes[0].set_title('Left 25 spectral bins')

do_plot(axes[1])
length = dark2.shape[1]
axes[1].set_xlim(length - 25, length)
axes[1].set_title('Right 25 spectral bins')
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
What do interpolated darks do

I'm having doubts about the premise that a dark should be subtractable from another dark in all cases. I'm thinking that if the main premise is that our data should tell us what to do, then the corollary is that some kind of interpolated dark between the 2 darks is the truth that needs to be subtracted from the raw image. The question is what changed over time. As temperature is one of the main things we know changed from dark1 to dark2, I find it paramount to take temperature into account as a parameter for the interpolation. But let's see what the currently interpolated darks do.
# missing code for interpolation check.
raw0 = l1b.detector_raw[0]
spatial = l1b.detector_raw.shape[1] // 2
plot(raw0[spatial] - dark1[spatial], 'g', label='first light minus 2nd dark')
plot(raw0[spatial] - dark2[spatial], 'b', label='first light minus last dark')
title("Show the importance of taking the right dark")
legend()

imshow(l1b.detector_raw[-1])
title("Last light image in cube")

fig, axes = subplots(nrows=3)
raw = l1b.detector_raw
axes[0].imshow(raw[0])
axes[1].imshow(raw[1])
axes[2].imshow(raw[2])
fig.suptitle("First 3 lights in cube")
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
How do the darks ratio with each other? Dark 0 / Dark 1
spatial = 20
data = dark0[spatial] / dark1[spatial]
plot(data, label=spatial)
legend(loc='best')
title("one row, first dark / second dark. Mean: {:.2f}".format(data.mean()))
grid()
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
Dark 0 / Dark 2 (last one)
for spatial in range(20, 60, 10):
    plot(dark0[spatial] / dark2[spatial], label=spatial)
legend(loc='best')

raw45 = raw[45]
fig, axes = subplots(nrows=2)
im = axes[0].imshow(dark1, vmax=600, vmin=0)
colorbar(im, ax=axes[0])
im = axes[1].imshow(dark2, vmax=600, vmin=0)
colorbar(im, ax=axes[1])
fig.suptitle("Comparing 2nd dark (before) and last dark (after set)")
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
Showing again how it is important to subtract the right dark:
spatial = 30
raw45 = l1b.detector_raw[-1]
plot(raw45[spatial] - dark2[spatial], label='last light - dark2')
plot(raw45[spatial] - dark1[spatial], label='last light - dark1')
legend(loc='best')
title("Important to subtract the right dark")

spatial = 30
plot(raw0[spatial] - dark2[spatial], label='raw0 - last dark')
plot(raw0[spatial] - dark1[spatial], label='raw0 - 2nd dark')
plot(raw0[spatial] - dark0[spatial], label='raw0 - 1st dark')
legend(loc='best')
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
Multiplicative comparison of darks
myhist(dark2, label='dark2')
for a in linspace(1.4, 1.6, 3):
    myhist(dark1 * a, label=str(a))
legend()

dettemp = l1b.DarkEngineering.T['DET_TEMP']
casetemp = l1b.DarkEngineering.T['CASE_TEMP']
print(dettemp[0] / dettemp[1])
print(casetemp[1] / dettemp[0])

for a in [1.5, 1.52, 1.54]:
    plot(a * dark1.mean(axis=0), label=str(a))
plot(dark2.mean(axis=0), label='dark2')
legend(loc='best')
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
Animation a la Nick's analysis
fig, ax = plt.subplots()
# lines = []
# for i in range(0, 11):
#     frac = 0.8 + i*0.03
#     diff = rawa - frac*dark
#     lines.append(plt.plot(diff[:, j] + i*1000))
diff = rawa - 0.8 * dark[..., 0]
line, = ax.plot(diff[:, 0])
# ax.set_ylim(-1000, 11000)

def animate(j):
    # for i in range(0, 11):
    #     frac = 0.8 + i*0.03
    #     diff = rawa - frac*dark
    line.set_ydata(diff[:, j] + 0 * 1000)

def init():
    line.set_ydata(np.ma.array(np.arange(341), mask=True))
    return line,

ani = animation.FuncAnimation(fig, animate, np.arange(1, 61), init_func=init,
                              interval=25, blit=True)
plt.show()

for j in range(61):
    plt.clf()
    for i in range(0, 11):
        frac = 0.8 + i * 0.03
        diff = rawa - frac * dark[..., 0]
        plt.plot(diff[:, j] + i * 1000)
    plt.ylim(-1000, 11000)
    plt.waitforbuttonpress(0.1)

fig, axes = plt.subplots(nrows=3)
for ax, img in zip(axes, l1b.detector_dark):
    im = ax.imshow(img, vmax=2000, cmap='hot')
    ax.set_title("Mean: {:.1f}".format(img.mean()))
    plt.colorbar(im, ax=ax)

ratio1 = dark0 / dark1
ratio2 = dark0 / dark2
ratio3 = dark1 / dark2
fig, axes = plt.subplots(nrows=3)
for ax, img in zip(axes, [ratio1, ratio2, ratio3]):
    im = ax.imshow(img, vmax=1.5, cmap='hot')
    plt.colorbar(im, ax=ax)
    ax.set_title("Mean: {:.2f}".format(img.mean()))
notebooks/dark_analysis.ipynb
michaelaye/iuvs
isc
Some Data Sets Surface Temperature Data - http://data.giss.nasa.gov/gistemp/ Solar Spot Number Data Global Surface Temperature Global surface temperature data from http://data.giss.nasa.gov/gistemp/.
data=pandas.read_csv('temperatures.txt',sep=r'\s+') # cleaned version data plot(data['Year'],data['J-D'],'-o') xlabel('Year') ylabel('Temperature Deviation')
examples/Working with Data.ipynb
bblais/Python-for-Science
mit
Actually, the deviation is 1/100 of this, so let's adjust...
x=data['Year'] y=data['J-D']/100.0 plot(x,y,'-o') xlabel('Year') ylabel('Temperature Deviation')
examples/Working with Data.ipynb
bblais/Python-for-Science
mit
Or if you like Excel
xls = pandas.ExcelFile('temperatures.xls') print(xls.sheet_names) data=xls.parse('Sheet 1') data
examples/Working with Data.ipynb
bblais/Python-for-Science
mit
Station Data This data is from http://data.giss.nasa.gov/gistemp/station_data/
data=pandas.read_csv('station.txt',sep=r'\s+') data
examples/Working with Data.ipynb
bblais/Python-for-Science
mit
This plot will look weird, because of the 999's.
x,y=data['YEAR'],data['metANN'] plot(x,y,'-o') xlabel('Year') ylabel('Temperature Deviation')
examples/Working with Data.ipynb
bblais/Python-for-Science
mit
replace the 999's with Not-a-Number (NaN) which is ignored in plots.
y[y>400]=NaN plot(x,y,'-o') xlabel('Year') ylabel('Temperature Deviation')
examples/Working with Data.ipynb
bblais/Python-for-Science
mit
Fitting First, ordinary least squares (ols)
model=pandas.ols(x=x,y=y) print(model.summary) print("Beta",model.beta) m,b=model.beta['x'],model.beta['intercept'] plot(x,y,'-o') x1=linspace(1890,2000,100) y1=x1*m+b plot(x1,y1,'-') xlabel('Year') ylabel('Temperature Deviation') data
examples/Working with Data.ipynb
bblais/Python-for-Science
mit
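A note of caution: `pandas.ols` was removed from pandas (in 0.20), so on recent versions the cell above will fail. A minimal sketch of the same straight-line least-squares fit using `numpy.polyfit`, on synthetic temperature-like data (the values below are hypothetical, since the real file is not bundled):

```python
import numpy as np

# Synthetic stand-in for the Year / J-D columns (hypothetical data).
x = np.arange(1890, 2000, dtype=float)
y = 0.005 * (x - 1890) - 0.3 + 0.01 * np.sin(x)

# Ordinary least squares fit of a straight line: returns slope and intercept.
m, b = np.polyfit(x, y, 1)
trend = m * x + b
print(m, b)
```

For the full summary table that `pandas.ols` used to print, `statsmodels.api.OLS` is the usual replacement.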
Next, try fitting a polynomial
result=fit(x,y,'power',2) xfit = linspace(1850,2000,100) yfit = fitval(result,xfit) plot(x,y,'-o') plot(xfit,yfit,'-') xlabel('Year') ylabel('Temperature Deviation')
examples/Working with Data.ipynb
bblais/Python-for-Science
mit
printing out the results of the fit.
result
examples/Working with Data.ipynb
bblais/Python-for-Science
mit
This should do the same thing.
result=fit(x,y,'quadratic') result
examples/Working with Data.ipynb
bblais/Python-for-Science
mit
Do a super-crazy high polynomial
result=fit(x,y,'power',4) xfit = linspace(1890,1980,100) yfit = fitval(result,xfit) plot(x,y,'-o') plot(xfit,yfit,'-') xlabel('Year') ylabel('Temperature Deviation') result
examples/Working with Data.ipynb
bblais/Python-for-Science
mit
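`fit` and `fitval` above come from the course's helper library; the same idea can be sketched with NumPy's `polyfit`/`polyval` (degree-4 polynomial on synthetic data — the coefficients below are illustrative, not the real station values):

```python
import numpy as np

# Synthetic, roughly quadratic "temperature" data (hypothetical values).
x = np.linspace(1890, 1980, 90)
y = 1e-4 * (x - 1930) ** 2 - 0.2

# Centering x before fitting keeps the Vandermonde matrix well conditioned.
xc = x - x.mean()
coeffs = np.polyfit(xc, y, 4)

xfit = np.linspace(1890, 1980, 100)
yfit = np.polyval(coeffs, xfit - x.mean())
```

With noisy real data, a high degree like this tends to overfit — which is exactly what the "super-crazy high polynomial" cell is demonstrating.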
Constants We can build constant ops using constant; its API is quite simple: constant(value, dtype=None, shape=None, name='Const') We must pass it a value, which can be any kind of tensor (a scalar, a vector, a matrix, etc.), and then we can optionally pass the data type, the shape and a name.
# Creación de Constantes # El valor que retorna el constructor es el valor de la constante. # creamos constantes a=2 y b=3 a = tf.constant(2) b = tf.constant(3) # creamos matrices de 3x3 matriz1 = tf.constant([[1, 3, 2], [1, 0, 0], [1, 2, 2]]) matriz2 = tf.constant([[1, 0, 5], [7, 5, 0], [2, 1, 1]]) # Realizamos algunos cálculos con estas constantes suma = tf.add(a, b) mult = tf.mul(a, b) cubo_a = a**3 # suma de matrices suma_mat = tf.add(matriz1, matriz2) # producto de matrices mult_mat = tf.matmul(matriz1, matriz2)
content/notebooks/IntroTensorFlow.ipynb
relopezbriega/mi-python-blog
gpl-2.0
Sessions Now that we have defined some constant ops and some computations with them, we must launch the graph inside a Session. To do this we use the Session object. This object encapsulates the environment in which the operations we defined in the graph will be executed and the tensors evaluated.
# Todo en TensorFlow ocurre dentro de una Sesión # creamos la sesion y realizamos algunas operaciones con las constantes # y lanzamos la sesión with tf.Session() as sess: print("Suma de las constantes: {}".format(sess.run(suma))) print("Multiplicación de las constantes: {}".format(sess.run(mult))) print("Constante elevada al cubo: {}".format(sess.run(cubo_a))) print("Suma de matrices: \n{}".format(sess.run(suma_mat))) print("Producto de matrices: \n{}".format(sess.run(mult_mat)))
content/notebooks/IntroTensorFlow.ipynb
relopezbriega/mi-python-blog
gpl-2.0
Sessions must be closed to release their resources, so it is good practice to put the Session inside a "with" block, which closes it automatically when the block finishes executing. To run the operations and evaluate the tensors we use Session.run(). Persistent variables Variables maintain state across executions of the graph. They are in-memory buffers that hold tensors. They must be explicitly initialized, and they can be saved to disk and later restored if needed. They are created using the Variable object.
# Creamos una variable y la inicializamos con 0 estado = tf.Variable(0, name="contador") # Creamos la op que le va a sumar uno a la Variable `estado`. uno = tf.constant(1) nuevo_valor = tf.add(estado, uno) actualizar = tf.assign(estado, nuevo_valor) # Las Variables deben ser inicializadas por la operación `init` luego de # lanzar el grafo. Debemos agregar la op `init` a nuestro grafo. init = tf.initialize_all_variables() # Lanzamos la sesion y ejecutamos las operaciones with tf.Session() as sess: # Ejecutamos la op `init` sess.run(init) # imprimir el valor de la Variable estado. print(sess.run(estado)) # ejecutamos la op que va a actualizar a `estado`. for _ in range(3): sess.run(actualizar) print(sess.run(estado))
content/notebooks/IntroTensorFlow.ipynb
relopezbriega/mi-python-blog
gpl-2.0
Symbolic variables (placeholders) Symbolic variables, or placeholders, allow us to feed the operations with data during the execution of the graph. These placeholders must be fed before being evaluated in the session; otherwise we will get an error.
# Ejemplo variables simbólicas en los grafos # El valor que devuelve el constructor representa la salida de la # variable (la entrada de la variable se define en la sesion) # Creamos un contenedor del tipo float. Un tensor de 4x4. x = tf.placeholder(tf.float32, shape=(4, 4)) y = tf.matmul(x, x) with tf.Session() as sess: # print(sess.run(y)) # ERROR: va a fallar porque no alimentamos a x. rand_array = np.random.rand(4, 4) print(sess.run(y, feed_dict={x: rand_array})) # ahora esta correcto.
content/notebooks/IntroTensorFlow.ipynb
relopezbriega/mi-python-blog
gpl-2.0
Now we know in broad strokes the mechanics behind how TensorFlow works and how we should proceed to create the operations inside the graphs. Let's see if we can implement simple neuron models with the help of this library. Simple neuron examples A simple neuron will have a form similar to the following diagram: <img src="https://relopezbriega.github.io/images/neurona.png"> Its components are: $x_1, x_2, \dots, x_n$: the input data to the neuron, which may also be the output of another neuron in the network. $x_0$: the bias unit; a constant value added to the input of the neuron's activation function. It generally has the value 1. This value lets the activation function shift to the right or left, giving the neuron more flexibility to learn. $w_0, w_1, w_2, \dots, w_n$: the relative weights of each input. Note that even the bias unit has a weight. a: the output of the neuron, computed as follows: $$a = f\left(\sum_{i=0}^n w_i \cdot x_i \right)$$ Here $f$ is the neuron's activation function. This function is what gives neural networks so much flexibility and lets them estimate complex nonlinear relationships in the data. It can be a linear function, a logistic function, a hyperbolic function, etc. Now that we know how a neuron is built, let's try to implement the logic functions AND, OR and XNOR with this model. We can think of these functions as a classification problem in which the output will be 0 or 1, depending on the combination of the different inputs.
We can model them linearly with the following activation function: $$f(x) = \begin{cases} 0 & \text{if } x < 0 \\ 1 & \text{if } x \ge 0 \end{cases}$$ AND neuron The AND neuron can be modeled with the following diagram: <img src="https://relopezbriega.github.io/images/NN_AND.png"> The output of this neuron will then be: $$a = f(-1.5 + x_1 + x_2)$$ Let's see how we can implement it in TensorFlow.
# Neurona con TensorFlow # Defino las entradas entradas = tf.placeholder("float", name='Entradas') datos = np.array([[0, 0] ,[1, 0] ,[0, 1] ,[1, 1]]) # Defino las salidas uno = lambda: tf.constant(1.0) cero = lambda: tf.constant(0.0) with tf.name_scope('Pesos'): # Definiendo pesos y sesgo pesos = tf.placeholder("float", name='Pesos') sesgo = tf.placeholder("float", name='Sesgo') with tf.name_scope('Activacion'): # Función de activación activacion = tf.reduce_sum(tf.add(tf.matmul(entradas, pesos), sesgo)) with tf.name_scope('Neurona'): # Defino la neurona def neurona(): return tf.case([(tf.less(activacion, 0.0), cero)], default=uno) # Salida a = neurona() # path de logs logs_path = '/tmp/tensorflow_logs/neurona' # Lanzar la Sesion with tf.Session() as sess: # para armar el grafo summary_writer = tf.train.SummaryWriter(logs_path, graph=sess.graph) # para armar tabla de verdad x_1 = [] x_2 = [] out = [] act = [] for i in range(len(datos)): t = datos[i].reshape(1, 2) salida, activ = sess.run([a, activacion], feed_dict={entradas: t, pesos:np.array([[1.],[1.]]), sesgo: -1.5}) # armar tabla de verdad en DataFrame x_1.append(t[0][0]) x_2.append(t[0][1]) out.append(salida) act.append(activ) tabla_info = np.array([x_1, x_2, act, out]).transpose() tabla = pd.DataFrame(tabla_info, columns=['x1', 'x2', 'f(x)', 'x1 AND x2']) tabla
content/notebooks/IntroTensorFlow.ipynb
relopezbriega/mi-python-blog
gpl-2.0
Here we can see the input data $x_1$ and $x_2$, the result of the activation function, and the final decision the neuron makes based on that result. As we can see in the truth table, the neuron tells us that $x_1$ AND $x_2$ is only true when both are true, which is correct. OR neuron The OR neuron can be modeled with the following diagram: <img src="https://relopezbriega.github.io/images/NN_OR.png"> The output of this neuron will then be: $$a = f(-0.5 + x_1 + x_2)$$ As can be seen at a glance, this neuron's model is similar to the AND neuron's, with the only change being the value of the bias, so we only need to change that value in our previous model to create this new neuron.
# Neurona OR, solo cambiamos el valor del sesgo with tf.Session() as sess: # para armar el grafo summary_writer = tf.train.SummaryWriter(logs_path, graph=sess.graph) # para armar tabla de verdad x_1 = [] x_2 = [] out = [] act = [] for i in range(len(datos)): t = datos[i].reshape(1, 2) salida, activ = sess.run([a, activacion], feed_dict={entradas: t, pesos:np.array([[1.],[1.]]), sesgo: -0.5}) # sesgo ahora -0.5 # armar tabla de verdad en DataFrame x_1.append(t[0][0]) x_2.append(t[0][1]) out.append(salida) act.append(activ) tabla_info = np.array([x_1, x_2, act, out]).transpose() tabla = pd.DataFrame(tabla_info, columns=['x1', 'x2', 'f(x)', 'x1 OR x2']) tabla
content/notebooks/IntroTensorFlow.ipynb
relopezbriega/mi-python-blog
gpl-2.0
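The same two neurons can be sketched without TensorFlow: in plain NumPy the step activation and the weighted sum are a couple of lines, which makes the role of the bias easy to see (an illustrative sketch, not part of the original notebook):

```python
import numpy as np

def step(x):
    # activation: 0 if x < 0, 1 otherwise
    return np.where(x < 0, 0, 1)

def neuron(inputs, weights, bias):
    # a = f(b + w . x)
    return step(inputs @ weights + bias)

X = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
w = np.array([1.0, 1.0])

and_out = neuron(X, w, -1.5)   # AND: only [1, 1] fires
or_out = neuron(X, w, -0.5)    # OR: everything but [0, 0] fires
print(and_out, or_out)
```

Only the bias changes between the two neurons, exactly as in the TensorFlow cells above.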
As we can see, by simply changing the bias weight we turned our AND neuron into an OR neuron. As the truth table shows, the only case in which $x_1$ OR $x_2$ is false is when both are false. XNOR neural network The case of the XNOR function is more complicated and cannot be modeled using a single neuron as we did in the previous examples. $x_1$ XNOR $x_2$ will be true when both are true or both are false; to implement this logic function we must create a network with two layers. The first layer will have two neurons whose outputs will serve as the input of a new neuron that gives us the final result. We can model this network according to the following diagram: <img src="https://relopezbriega.github.io/images/NN_XNOR.png"> Let's see then whether we can implement this model in TensorFlow.
# Red Neuronal XNOR con TensorFlow # Defino las entradas entradas = tf.placeholder("float", name='Entradas') datos = np.array([[0, 0] ,[1, 0] ,[0, 1] ,[1, 1]]) # Defino las salidas uno = lambda: tf.constant(1.0) cero = lambda: tf.constant(0.0) with tf.name_scope('Pesos'): # Definiendo pesos y sesgo pesos = { 'a1': tf.constant([[-1.0], [-1.0]], name='peso_a1'), 'a2': tf.constant([[1.0], [1.0]], name='peso_a2'), 'a3': tf.constant([[1.0], [1.0]], name='peso_a3') } sesgo = { 'a1': tf.constant(0.5, name='sesgo_a1'), 'a2': tf.constant(-1.5, name='sesgo_a2'), 'a3': tf.constant(-0.5, name='sesgo_a3') } with tf.name_scope('Red_neuronal'): # Defino las capas def capa1(entradas, pesos, sesgo): # activacion a1 a1 = tf.reduce_sum(tf.add(tf.matmul(entradas, pesos['a1']), sesgo['a1'])) a1 = tf.case([(tf.less(a1, 0.0), cero)], default=uno) # activacion a2 a2 = tf.reduce_sum(tf.add(tf.matmul(entradas, pesos['a2']), sesgo['a2'])) a2 = tf.case([(tf.less(a2, 0.0), cero)], default=uno) return a1, a2 def capa2(entradas, pesos, sesgo): # activacion a3 a3 = tf.reduce_sum(tf.add(tf.matmul(entradas, pesos['a3']), sesgo['a3'])) a3 = tf.case([(tf.less(a3, 0.0), cero)], default=uno) return a3 # path de logs logs_path = '/tmp/tensorflow_logs/redXNOR' # Sesion red neuronal XNOR with tf.Session() as sess: # para armar el grafo summary_writer = tf.train.SummaryWriter(logs_path, graph=sess.graph) # para armar tabla de verdad x_1 = [] x_2 = [] out = [] for i in range(len(datos)): t = datos[i].reshape(1, 2) # obtenos resultados 1ra capa a1, a2 = sess.run(capa1(entradas, pesos, sesgo), feed_dict={entradas: t}) # pasamos resultados a la 2da capa ent_a3 = np.array([[a1, a2]]) salida = sess.run(capa2(ent_a3, pesos, sesgo)) # armar tabla de verdad en DataFrame x_1.append(t[0][0]) x_2.append(t[0][1]) out.append(salida) tabla_info = np.array([x_1, x_2, out]).transpose() tabla = pd.DataFrame(tabla_info, columns=['x1', 'x2', 'x1 XNOR x2']) tabla
content/notebooks/IntroTensorFlow.ipynb
relopezbriega/mi-python-blog
gpl-2.0
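A compact NumPy sketch of the same two-layer XNOR network, with the weights and biases from the diagram (illustrative, outside the original notebook):

```python
import numpy as np

def step(x):
    # 0 if x < 0, 1 otherwise
    return np.where(x < 0, 0, 1)

X = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])

# layer 1: a1 = NOT(x1) AND NOT(x2),  a2 = x1 AND x2
a1 = step(X @ np.array([-1.0, -1.0]) + 0.5)
a2 = step(X @ np.array([1.0, 1.0]) - 1.5)

# layer 2: a3 = a1 OR a2  ->  x1 XNOR x2
a3 = step(np.stack([a1, a2], axis=1) @ np.array([1.0, 1.0]) - 0.5)
print(a3)
```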
As we can see, the neural network gives us the correct result for the XNOR logic function: it is only true if both values are true or both are false. So far we have implemented simple neurons and set the values of their weights and bias by hand; this is easy for the examples, but in real life, if we want to use neural networks, we need to implement a process that updates the weights as the network learns during training. This process is known as backpropagation. Backpropagation Backpropagation is an algorithm that works by determining the loss (or error) at the output and then propagating it back through the network. In this way the weights are updated so as to minimize the resulting error of each neuron. This algorithm is what allows neural networks to learn. Let's look at an example of how we can implement a neural network that can learn on its own with the help of TensorFlow. Example: a multilayer perceptron to recognize handwritten digits In this example we will build a multilayer perceptron to classify handwritten digits. Before moving on to building the model, let's explore a bit the dataset we will work with for classification. MNIST dataset MNIST is a simple dataset for computer image recognition. It consists of images of handwritten digits like the following: <img src="https://www.tensorflow.org/versions/r0.8/images/MNIST.png"> For more information about the dataset you can visit the following link, where a detailed analysis of it is done.
# importando el dataset from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
content/notebooks/IntroTensorFlow.ipynb
relopezbriega/mi-python-blog
gpl-2.0
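Before the full MNIST model, the backpropagation idea can be sketched on a toy problem: a single logistic neuron learning OR by gradient descent. This is plain NumPy and illustrative only — the update rule below is the hand-derived gradient of the cross-entropy loss, not the TensorFlow optimizer used later.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
y = np.array([0., 1., 1., 1.])               # OR truth table

w = rng.normal(size=2)
b = 0.0
lr = 1.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # forward pass (sigmoid)
    grad = p - y                             # dLoss/dz for cross-entropy
    w -= lr * (X.T @ grad) / len(y)          # propagate the error back to w
    b -= lr * grad.mean()                    # ...and to the bias

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(pred)
```

The same forward-pass / error / weight-update loop is what Adam automates, at scale, in the MNIST model below.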
Exploring the MNIST dataset
# forma del dataset 55000 imagenes mnist.train.images.shape # cada imagen es un array de 28x28 con cada pixel # definido como escala de grises. digito1 = mnist.train.images[0].reshape((28, 28)) # visualizando el primer digito plt.imshow(digito1, cmap = cm.Greys) plt.show() # valor correcto mnist.train.labels[0].nonzero()[0][0] # visualizando imagenes de 5 en 5 def visualizar_imagenes(dataset, cant_img): img_linea = 5 lineas = int(cant_img / img_linea) imagenes = [] for i in range(lineas): datos = [] for img in dataset[img_linea* i:img_linea* (i+1)]: datos.append(img.reshape((28,28))) imgs = np.hstack(datos) imagenes.append(imgs) data = np.vstack(imagenes) plt.imshow(data, cmap = cm.Greys ) plt.show() # visualizando los primeros 30 dígitos plt.figure(figsize=(8, 8)) visualizar_imagenes(mnist.train.images, 30)
content/notebooks/IntroTensorFlow.ipynb
relopezbriega/mi-python-blog
gpl-2.0
Building the multilayer perceptron Now that we know the data we are going to work with, we are ready to build the model. We will build a multilayer perceptron, which is one of the simplest neural networks. The model will have two hidden layers, activated with the <a href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)">ReLU</a> activation function, and we will optimize the weights by reducing the cross-entropy using the Adam algorithm, a method for stochastic optimization.
# Parametros tasa_aprendizaje = 0.001 epocas = 15 lote = 100 display_step = 1 logs_path = "/tmp/tensorflow_logs/perceptron" # Parametros de la red n_oculta_1 = 256 # 1ra capa de atributos n_oculta_2 = 256 # 2ra capa de atributos n_entradas = 784 # datos de MNIST(forma img: 28*28) n_clases = 10 # Total de clases a clasificar (0-9 digitos) # input para los grafos x = tf.placeholder("float", [None, n_entradas], name='DatosEntrada') y = tf.placeholder("float", [None, n_clases], name='Clases') # Creamos el modelo def perceptron_multicapa(x, pesos, sesgo): # Función de activación de la capa escondida capa_1 = tf.add(tf.matmul(x, pesos['h1']), sesgo['b1']) # activacion relu capa_1 = tf.nn.relu(capa_1) # Función de activación de la capa escondida capa_2 = tf.add(tf.matmul(capa_1, pesos['h2']), sesgo['b2']) # activación relu capa_2 = tf.nn.relu(capa_2) # Salida con activación lineal salida = tf.matmul(capa_2, pesos['out']) + sesgo['out'] return salida # Definimos los pesos y sesgo de cada capa. pesos = { 'h1': tf.Variable(tf.random_normal([n_entradas, n_oculta_1])), 'h2': tf.Variable(tf.random_normal([n_oculta_1, n_oculta_2])), 'out': tf.Variable(tf.random_normal([n_oculta_2, n_clases])) } sesgo = { 'b1': tf.Variable(tf.random_normal([n_oculta_1])), 'b2': tf.Variable(tf.random_normal([n_oculta_2])), 'out': tf.Variable(tf.random_normal([n_clases])) } with tf.name_scope('Modelo'): # Construimos el modelo pred = perceptron_multicapa(x, pesos, sesgo) with tf.name_scope('Costo'): # Definimos la funcion de costo costo = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y)) with tf.name_scope('optimizador'): # Algoritmo de optimización optimizar = tf.train.AdamOptimizer( learning_rate=tasa_aprendizaje).minimize(costo) with tf.name_scope('Precision'): # Evaluar el modelo pred_correcta = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1)) # Calcular la precisión Precision = tf.reduce_mean(tf.cast(pred_correcta, "float")) # Inicializamos todas las variables init = tf.initialize_all_variables() # Crear sumarización para controlar el costo tf.scalar_summary("Costo", costo) # Crear sumarización para controlar la precisión tf.scalar_summary("Precision", Precision) # Juntar los resumenes en una sola operación merged_summary_op = tf.merge_all_summaries() # Lanzamos la sesión with tf.Session() as sess: sess.run(init) # op to write logs to Tensorboard summary_writer = tf.train.SummaryWriter( logs_path, graph=tf.get_default_graph()) # Entrenamiento for epoca in range(epocas): avg_cost = 0. lote_total = int(mnist.train.num_examples/lote) for i in range(lote_total): lote_x, lote_y = mnist.train.next_batch(lote) # Optimización por backprop y funcion de costo _, c, summary = sess.run([optimizar, costo, merged_summary_op], feed_dict={x: lote_x, y: lote_y}) # escribir logs en cada iteracion summary_writer.add_summary(summary, epoca * lote_total + i) # perdida promedio avg_cost += c / lote_total # imprimir información de entrenamiento if epoca % display_step == 0: print("Iteración: {0: 04d} costo = {1:.9f}".format(epoca+1, avg_cost)) print("Optimización Terminada!\n") print("Precisión: {0:.2f}".format(Precision.eval({x: mnist.test.images, y: mnist.test.labels}))) print("Ejecutar el comando:\n", "--> tensorboard --logdir=/tmp/tensorflow_logs ", "\nLuego abrir https://0.0.0.0:6006/ en el navegador")
content/notebooks/IntroTensorFlow.ipynb
relopezbriega/mi-python-blog
gpl-2.0
Dimensionality reduction Many types of data can contain a massive number of features. Whether this is individual pixels in images, transcripts or proteins in -omics data, or word occurrences in text data, this abundance of features brings several challenges. Visualizing more than 4 dimensions directly is difficult, complicating our data analysis and exploration. In machine learning models we run the risk of overfitting to the data and having a model that does not generalize to new observations. There are two main approaches to handling this situation: Identify important features and discard less important features Transform the data into a lower dimensional space Identify important features Feature selection can be used to choose the most informative features. This can improve the performance of subsequent models, reduce overfitting and have practical advantages when the model is ready to be utilized. For example, RT-qPCR on a small number of transcripts will be faster and cheaper than RNAseq, and similarly targeted mass spectrometry such as MRM on a limited number of proteins will be cheaper, faster and more accurate than data-independent acquisition mass spectrometry. There are a variety of approaches for feature selection: Remove uninformative features (same value for all, or nearly all, samples) Remove features that perform poorly at the task when used alone Iteratively remove the weakest features from a model until the desired number is reached
from sklearn.feature_selection import VarianceThreshold X = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 1, 1], [0, 1, 0], [0, 1, 1]]) print(X) sel = VarianceThreshold(threshold=(.8 * (1 - .8))) X_selected = sel.fit_transform(X) print(X_selected) from sklearn.datasets import load_iris from sklearn.feature_selection import SelectKBest from sklearn.feature_selection import chi2 iris = load_iris() X, y = iris.data, iris.target print(X.shape) X_new = SelectKBest(chi2, k=2).fit_transform(X, y) print(X_new.shape)
Wk10-feature_selection_dimension_reduction_clustering/Wk10-dimensionality-reduction-clustering.ipynb
beyondvalence/biof509_wtl
mit
When iteratively removing weak features the choice of model is important. We will discuss the different models available for regression and classification next week, but there are a few points relevant to feature selection we will cover here. A linear model is a useful and easily interpreted model, and when used for feature selection L1 regularization should be applied. L1 regularization penalizes large coefficients based on their absolute values. This favors a sparse model, with weak features having coefficients close to zero. In contrast, L2 regularization penalizes large coefficients based on their squared value, and this has a tendency to favor many small coefficients rather than a smaller set of larger coefficients.
from sklearn import linear_model from sklearn.datasets import load_digits from sklearn.feature_selection import RFE # Load the digits dataset digits = load_digits() X = digits.images.reshape((len(digits.images), -1)) y = digits.target # Create the RFE object and rank each pixel clf = linear_model.LogisticRegression(C=1.0, penalty='l1', tol=1e-6) rfe = RFE(estimator=clf, n_features_to_select=16, step=1) #rfe, recursive feature elimination rfe.fit(X, y) ranking = rfe.ranking_.reshape(digits.images[0].shape) # Plot pixel ranking plt.matshow(ranking) plt.colorbar() plt.title("Ranking of pixels with RFE") plt.show() rfe.ranking_
Wk10-feature_selection_dimension_reduction_clustering/Wk10-dimensionality-reduction-clustering.ipynb
beyondvalence/biof509_wtl
mit
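The correlated-features behaviour described above is easy to demonstrate on synthetic data: with two identical columns, an L1 penalty (Lasso) zeroes one of them out while an L2 penalty (Ridge) splits the weight between them. A small sketch, not part of the original notebook:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
X = np.hstack([x, x])            # two perfectly correlated features
y = 3.0 * x[:, 0]

l1 = Lasso(alpha=0.1).fit(X, y)  # L1: sparse, keeps (at most) one column
l2 = Ridge(alpha=1.0).fit(X, y)  # L2: spreads the weight evenly
print(l1.coef_, l2.coef_)
```

This is why L1-based selection can arbitrarily pick one feature from a correlated group, the limitation the next cell addresses with randomized (stability) selection.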
The disadvantage with L1 regularization is that if multiple features are correlated only one of them will have a high coefficient.
from sklearn.linear_model import RandomizedLogisticRegression randomized_logistic = RandomizedLogisticRegression() # Load the digits dataset digits = load_digits() X = digits.images.reshape((len(digits.images), -1)) y = digits.target randomized_logistic.fit(X, y) ranking = randomized_logistic.scores_.reshape(digits.images[0].shape) # Plot pixel ranking plt.matshow(ranking) plt.colorbar() plt.title("Ranking of pixels") plt.show()
Wk10-feature_selection_dimension_reduction_clustering/Wk10-dimensionality-reduction-clustering.ipynb
beyondvalence/biof509_wtl
mit
Also important is to normalize the means and variances of the features before comparing the coefficients. The approaches we covered last week are crucial for feature selection from a linear model. A limitation of linear models is that any interactions must be hand coded. A feature that is poorly predictive overall may actually be very powerful but only in a limited subgroup. This might be missed in a linear model when we would prefer to keep the feature. Any model exposing a coef_ or feature_importances_ attribute can be used with the SelectFromModel class for feature selection. Forests of randomized decision trees handle interactions well and unlike some of the other models do not require careful tuning of parameters to achieve reasonable performance.
from sklearn.ensemble import RandomForestClassifier clf = RandomForestClassifier(n_estimators=100) # Load the digits dataset digits = load_digits() X = digits.images.reshape((len(digits.images), -1)) y = digits.target clf.fit(X, y) ranking = clf.feature_importances_.reshape(digits.images[0].shape) # Plot pixel ranking plt.matshow(ranking) plt.colorbar() plt.title("Ranking of pixels") plt.show()
Wk10-feature_selection_dimension_reduction_clustering/Wk10-dimensionality-reduction-clustering.ipynb
beyondvalence/biof509_wtl
mit
Transformation into lower dimensional space An alternative approach is to transform the data in such a way that the variance observed in the features is maintained while only using a smaller number of dimensions. This approach includes all the features so is not a simpler model when considering the entire process from data acquisition onwards. It can however improve the performance of subsequent algorithms and is a very popular approach for visualization and the initial data analysis phase of a project. The classical method is Principal Components Analysis although other algorithms are also available. Given a set of features usually some will be at least weakly correlated. By performing an orthogonal transformation a reduced number of features that are uncorrelated can be chosen that maintains as much of the variation in the original data as possible. An orthogonal transformation simply means that the data is rotated and reflected about the axes. Scikit-learn has several different implementations of PCA available together with other techniques performing a similar function. Most of these techniques are unsupervised. Linear Discriminant Analysis (LDA) is one algorithm that can include labels and will attempt to create features that account for the greatest amount of variance between classes.
# http://scikit-learn.org/stable/auto_examples/decomposition/plot_pca_vs_lda.html#example-decomposition-plot-pca-vs-lda-py from sklearn import datasets from sklearn.decomposition import PCA from sklearn.discriminant_analysis import LinearDiscriminantAnalysis iris = datasets.load_iris() X = iris.data y = iris.target target_names = iris.target_names pca = PCA(n_components=2) X_r = pca.fit(X).transform(X) lda = LinearDiscriminantAnalysis(n_components=2) X_r2 = lda.fit(X, y).transform(X) # Percentage of variance explained for each components print('explained variance ratio (first two components): %s' % str(pca.explained_variance_ratio_)) plt.figure() for c, i, target_name in zip("rgb", [0, 1, 2], target_names): plt.scatter(X_r[y == i, 0], X_r[y == i, 1], c=c, label=target_name) plt.legend(scatterpoints=1) plt.title('PCA of IRIS dataset') plt.figure() for c, i, target_name in zip("rgb", [0, 1, 2], target_names): plt.scatter(X_r2[y == i, 0], X_r2[y == i, 1], c=c, label=target_name) plt.legend(scatterpoints=1) plt.title('LDA of IRIS dataset') plt.show()
Wk10-feature_selection_dimension_reduction_clustering/Wk10-dimensionality-reduction-clustering.ipynb
beyondvalence/biof509_wtl
mit
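What PCA does under the hood can be sketched in a few lines: center the data, take the SVD, and the squared singular values give the explained-variance ratios reported above (synthetic two-feature data here, for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.normal(size=300)
# Two features that are almost perfectly correlated: one direction
# carries nearly all of the variance.
X = np.column_stack([t, 0.5 * t + 0.05 * rng.normal(size=300)])

Xc = X - X.mean(axis=0)                  # PCA requires centered data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)          # explained variance ratio
scores = Xc @ Vt.T                       # data rotated onto the principal axes
print(explained)
```

The rows of `Vt` correspond to sklearn's `PCA.components_`, and `explained` to `explained_variance_ratio_`.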
Exercises Apply feature selection to the Olivetti faces dataset, identifying the most important 25% of features. Apply PCA and LDA to the digits dataset used above
# Exercise 1 import sklearn.datasets faces = sklearn.datasets.fetch_olivetti_faces() # Load the olivetti faces dataset X = faces.data y = faces.target plt.matshow(X[0].reshape((64,64))) plt.colorbar() plt.title("face1") plt.show() from sklearn.ensemble import RandomForestClassifier clf = RandomForestClassifier(n_estimators=100) clf.fit(X, y) ranking = clf.feature_importances_.reshape(faces.images[0].shape) # Plot pixel ranking plt.matshow(ranking.reshape(64,64)) plt.colorbar() plt.title("ranking") plt.show() p75 = np.percentile(ranking, 75) mask = ranking > p75 # Plot pixel ranking plt.matshow(mask.reshape(64,64)) plt.colorbar() plt.title("mask") plt.show() # apply pca and lda to digits digits = datasets.load_digits() X = digits.data y = digits.target target_names = digits.target_names pca = PCA(n_components=2) X_r = pca.fit(X).transform(X) lda = LinearDiscriminantAnalysis(n_components=2) X_r2 = lda.fit(X, y).transform(X) # Percentage of variance explained for each components print('explained variance ratio (first two components): %s' % str(pca.explained_variance_ratio_)) plt.figure() for c, i, target_name in zip("rgb", [0, 1, 2], target_names): plt.scatter(X_r[y == i, 0], X_r[y == i, 1], c=c, label=target_name) plt.legend(scatterpoints=1) plt.title('PCA of IRIS dataset') plt.figure() for c, i, target_name in zip("rgb", [0, 1, 2], target_names): plt.scatter(X_r2[y == i, 0], X_r2[y == i, 1], c=c, label=target_name) plt.legend(scatterpoints=1) plt.title('LDA of IRIS dataset') plt.show()
Wk10-feature_selection_dimension_reduction_clustering/Wk10-dimensionality-reduction-clustering.ipynb
beyondvalence/biof509_wtl
mit
Clustering In clustering we attempt to group observations in such a way that observations assigned to the same cluster are more similar to each other than to observations in other clusters. Although labels may be known, clustering is usually performed on unlabeled data as a step in exploratory data analysis. Previously we looked at the Otsu thresholding method as a basic example of clustering. This is very closely related to k-means clustering. A variety of other methods are available with different characteristics. The best method to use will vary depending on the particular problem.
import matplotlib import matplotlib.pyplot as plt from skimage.data import camera from skimage.filters import threshold_otsu matplotlib.rcParams['font.size'] = 9 image = camera() thresh = threshold_otsu(image) binary = image > thresh #fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(8, 2.5)) fig = plt.figure(figsize=(8, 2.5)) ax1 = plt.subplot(1, 3, 1, adjustable='box-forced') ax2 = plt.subplot(1, 3, 2) ax3 = plt.subplot(1, 3, 3, sharex=ax1, sharey=ax1, adjustable='box-forced') ax1.imshow(image, cmap=plt.cm.gray) ax1.set_title('Original') ax1.axis('off') ax2.hist(image.ravel(), bins=256) ax2.set_title('Histogram') ax2.axvline(thresh, color='r') ax3.imshow(binary, cmap=plt.cm.gray) ax3.set_title('Thresholded') ax3.axis('off') plt.show()
Wk10-feature_selection_dimension_reduction_clustering/Wk10-dimensionality-reduction-clustering.ipynb
beyondvalence/biof509_wtl
mit
Different clustering algorithms

Cluster comparison

The following algorithms are provided by scikit-learn:

- K-means
- Affinity propagation
- Mean Shift
- Spectral clustering
- Ward
- Agglomerative Clustering
- DBSCAN
- Birch

K-means clustering divides samples between clusters by attempting to minimize the within-cluster sum of squares. It is an iterative algorithm, repeatedly updating the position of the centroids (cluster centers), re-assigning samples to the best cluster and repeating until an optimal solution is reached. The clusters will depend on the starting position of the centroids, so k-means is often run multiple times with random initialization and then the best solution chosen.

Affinity Propagation operates by passing messages between the samples, updating a record of the exemplar samples. These are samples that best represent other samples. The algorithm functions on an affinity matrix that can be either user supplied or computed by the algorithm. Two matrices are maintained. One matrix records how well each sample represents other samples in the dataset. When the algorithm finishes, the highest scoring samples are chosen to represent the clusters. The second matrix records which other samples best represent each sample, so that the entire dataset can be assigned to a cluster when the algorithm terminates.

Mean Shift iteratively updates candidate centroids to represent the clusters. The algorithm attempts to find areas of higher density.

Spectral clustering operates on an affinity matrix that can be user supplied or computed by the model. The algorithm functions by minimizing the value of the links cut in a graph created from the affinity matrix. By focusing on the relationships between samples, this algorithm performs well for non-convex clusters.

Ward is a type of agglomerative clustering, using minimization of the within-cluster sum of squares to join clusters together until the specified number of clusters remain.
Agglomerative clustering starts with each sample in its own cluster and then progressively joins clusters together, minimizing some performance measure. In addition to minimizing the variance, as seen with Ward, other options are: 1) minimizing the average distance between samples in each cluster, and 2) minimizing the maximum distance between observations in each cluster.

DBSCAN is another algorithm that attempts to find regions of high density and then expands the clusters from there.

Birch is a tree-based clustering algorithm assigning samples to nodes on a tree.
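As an illustrative sketch (not part of the original notebook), the agglomerative linkage criteria described above can be compared directly in scikit-learn; the dataset and parameters here are arbitrary choices:

```python
from sklearn import cluster, datasets

# Two well-separated blobs; any linkage criterion should recover them
X, _ = datasets.make_blobs(n_samples=100, centers=2, cluster_std=0.3,
                           random_state=0)

# 'ward' minimizes within-cluster variance; 'average' and 'complete'
# minimize the average / maximum pairwise distance within each cluster
for linkage in ("ward", "average", "complete"):
    model = cluster.AgglomerativeClustering(n_clusters=2, linkage=linkage)
    labels = model.fit_predict(X)
    print(linkage, "->", len(set(labels)), "clusters")
```

On data this clean the three criteria agree; their behavior diverges on elongated or unevenly sized clusters.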
from sklearn import cluster, datasets dataset, true_labels = datasets.make_blobs(n_samples=200, n_features=2, random_state=0, centers=3, cluster_std=0.1) fig, ax = plt.subplots(1,1) ax.scatter(dataset[:,0], dataset[:,1], c=true_labels) plt.show() # Clustering algorithm can be used as a class means = cluster.KMeans(n_clusters=2) prediction = means.fit_predict(dataset) fig, ax = plt.subplots(1,1) ax.scatter(dataset[:,0], dataset[:,1], c=prediction) plt.show()
Wk10-feature_selection_dimension_reduction_clustering/Wk10-dimensionality-reduction-clustering.ipynb
beyondvalence/biof509_wtl
mit
Model evaluation

Several approaches have been developed for evaluating clustering models, but most are limited by requiring the true clusters to be known. In the general use case for clustering this is not known, the goal being exploratory analysis. Ultimately, a model is just a tool to better understand the structure of our data. If we are able to gain insight from using a clustering algorithm then it has served its purpose.

The metrics available are Adjusted Rand Index, Mutual Information based scores, Homogeneity, completeness, v-measure, and silhouette coefficient. Of these, only the silhouette coefficient does not require the true clusters to be known. Although the silhouette coefficient can be useful, it takes a very similar approach to k-means, favoring convex clusters over more complex, equally valid, clusters.

How to determine number of clusters

One important use for the model evaluation algorithms is in choosing the number of clusters. The clustering algorithms take as parameters either the number of clusters to partition a dataset into or other scaling factors that ultimately determine the number of clusters. It is left to the user to determine the correct value for these parameters.

As the number of clusters increases the fit to the data will always improve, until each point is in a cluster by itself. As such, classical optimization algorithms searching for a minimum or maximum score will not work. Often, the goal is to find an inflection point. If the cluster parameter is too low, adding an additional cluster will have a large impact on the evaluation score: the gradient will be high at numbers of clusters less than the true value. If the cluster parameter is too high, adding an additional cluster will have a small impact on the evaluation score: the gradient will be low at numbers of clusters higher than the true value. At the correct number of clusters the gradient suddenly changes; this is the inflection point.
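Since the silhouette coefficient is the only metric listed that works without true labels, a minimal sketch of using it (with arbitrary blob data, not from the notebook) might look like:

```python
from sklearn import cluster, datasets, metrics

# Three well-separated blobs; the silhouette coefficient (range -1 to 1,
# higher is better) needs only the data and the predicted labels
X, _ = datasets.make_blobs(n_samples=200, centers=3, cluster_std=0.3,
                           random_state=0)

for k in range(2, 6):
    labels = cluster.KMeans(n_clusters=k, n_init=10).fit_predict(X)
    score = metrics.silhouette_score(X, labels)
    print(k, "clusters -> silhouette", round(score, 3))
```

The k with the highest score is a reasonable default choice, keeping in mind the convexity bias noted above.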
from sklearn import cluster, datasets, metrics dataset, true_labels = datasets.make_blobs(n_samples=200, n_features=2, random_state=0, centers=[[x,y] for x in range(3) for y in range(3)], cluster_std=0.1) fig, ax = plt.subplots(1,1) ax.scatter(dataset[:,0], dataset[:,1], c=true_labels) plt.show() inertia = [] predictions = [] for i in range(1,20): means = cluster.KMeans(n_clusters=i) prediction = means.fit_predict(dataset) inertia.append(means.inertia_) predictions.append(prediction) plt.plot(range(1,20), inertia) plt.show() plt.scatter(dataset[:,0], dataset[:,1], c=predictions[8]) plt.show()
Wk10-feature_selection_dimension_reduction_clustering/Wk10-dimensionality-reduction-clustering.ipynb
beyondvalence/biof509_wtl
mit
This is an ideal case with clusters that can be clearly distinguished - convex clusters with similar distributions and large gaps between the clusters. Most real world datasets will not be as easy to work with and determining the correct number of clusters will be more challenging. As an example, compare the performance between challenges 1 and 2 (unknown number of clusters) and challenge 3 (known number of clusters) in table 2 of this report on automated FACS. There is a pull request on the scikit-learn repository to add several automated algorithms to identify the correct number of clusters but it has not been integrated. Exercises Using the grid of blobs sample dataset investigate how different cluster shapes and distances alter the plot of inertia with number of clusters. Is the plot still interpretable if the distances are 1/2, 1/4, 1/8, etc? If the variance in the first and second dimensions is unequal is the plot still interpretable? Does using a different algorithm improve performance?
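The gradient argument can also be checked numerically. This sketch (our own construction, with hand-picked blob centers, not from the notebook) compares the inertia drop gained by the cluster that reaches the true count against the drop gained by the one after it:

```python
import numpy as np
from sklearn import cluster, datasets

# Four tight blobs at known centers, so the true cluster count is 4
X, _ = datasets.make_blobs(n_samples=300,
                           centers=[[0, 0], [5, 0], [0, 5], [5, 5]],
                           cluster_std=0.2, random_state=1)

inertia = np.array([cluster.KMeans(n_clusters=k, n_init=10).fit(X).inertia_
                    for k in range(1, 9)])
drops = -np.diff(inertia)  # improvement gained by each additional cluster

# The drop gained by reaching the true count dwarfs the drop beyond it
print("drop going 3 -> 4 clusters:", round(drops[2], 1))
print("drop going 4 -> 5 clusters:", round(drops[3], 1))
```

The sharp fall-off in `drops` after the true count is exactly the inflection point described above; on messier data the contrast is far less pronounced.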
# Exercise 1 dataset, true_labels = datasets.make_blobs(n_samples=200, n_features=2, random_state=0, centers=[[x,y] for x in range(0,20,2) for y in range(2)], cluster_std=0.2) inertia = [] predictions = [] for i in range(1,25): means = cluster.KMeans(n_clusters=i) prediction = means.fit_predict(dataset) inertia.append(means.inertia_) predictions.append(prediction) fig = plt.figure(figsize=(10,10)) ax = plt.subplot(2,1,1) ax.scatter(dataset[:,0], dataset[:,1], c=true_labels) ax.scatter(means.cluster_centers_[:20,0], means.cluster_centers_[:20,1], s=160, alpha=0.5) ax.set_title('Observations and cluster centers') ax1 = plt.subplot(2,1,2) ax1.plot(range(1,25), inertia) ax1.set_title('Inertia for varying cluster sizes') plt.show() means.cluster_centers_[:,0]
Wk10-feature_selection_dimension_reduction_clustering/Wk10-dimensionality-reduction-clustering.ipynb
beyondvalence/biof509_wtl
mit
Orthographic features alone contribute to a relatively high precision but very low recall. This implies that orthographic features alone are not enough to carve out the decision boundary for all the positive instances, hence the low recall. However, the decision boundary created is very selective, as indicated by the high precision.

Orthographic + Morphological Features
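To make the precision/recall trade-off concrete, here is a toy illustration with made-up labels (not the actual NER predictions): a very selective classifier that flags only one sure positive gets perfect precision but poor recall.

```python
from sklearn.metrics import precision_score, recall_score

# Hypothetical gold labels and predictions for a selective classifier:
# everything it flags is correct, but it misses 3 of the 4 positives
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]

print("precision:", precision_score(y_true, y_pred))  # 1.0
print("recall:   ", recall_score(y_true, y_pred))     # 0.25
```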
import os
import subprocess

"""
Creates models for each fold and runs evaluation with results
"""

featureset = "om"
entity_name = "adversereaction"

for fold in range(1, 1):  # training has already been done
    training_data = "../ARFF_Files/%s_ARFF/_%s/_train/%s_train-%i.arff" % (entity_name, featureset, entity_name, fold)
    os.system("python3 decisiontree.py -tr %s" % (training_data))

for fold in range(1, 11):
    testing_data = "../ARFF_Files/%s_ARFF/_%s/_test/%s_test-%i.arff" % (entity_name, featureset, entity_name, fold)
    output = subprocess.check_output("python3 evaluate_randomforest.py -te %s" % (testing_data), shell=True)
    print(output.decode('utf-8'))
NERD/DecisionTreeRandomForestEnsemble/Random Forest Ensemble NER Model Results.ipynb
bmcinnes/VCU-VIP-Nanoinformatics
gpl-3.0
It appears adding in the morphological features greatly increased classifier performance.<br> Below, find the underlying decision tree structure representing the classifier. Orthographic + Morphological + Lexical Features
import os
import subprocess

"""
Creates models for each fold and runs evaluation with results
"""

featureset = "omt"
entity_name = "adversereaction"

for fold in range(1, 1):  # training has already been done
    training_data = "../ARFF_Files/%s_ARFF/_%s/_train/%s_train-%i.arff" % (entity_name, featureset, entity_name, fold)
    os.system("python3 decisiontree.py -tr %s" % (training_data))

for fold in range(1, 11):
    testing_data = "../ARFF_Files/%s_ARFF/_%s/_test/%s_test-%i.arff" % (entity_name, featureset, entity_name, fold)
    output = subprocess.check_output("python3 evaluate_randomforest.py -te %s" % (testing_data), shell=True)
    print(output.decode('utf-8'))
NERD/DecisionTreeRandomForestEnsemble/Random Forest Ensemble NER Model Results.ipynb
bmcinnes/VCU-VIP-Nanoinformatics
gpl-3.0
Vertex SDK: Train and deploy an XGBoost model with pre-built containers (formerly hosted runtimes) Installation Install the latest (preview) version of Vertex SDK.
! pip3 install -U google-cloud-aiplatform --user
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Clients The Vertex SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (Vertex). You will use several clients in this tutorial, so set them all up upfront. Model Service for managed models. Endpoint Service for deployment. Job Service for batch jobs and custom training. Prediction Service for serving. Note: Prediction has a different service endpoint.
# client options same for all services client_options = {"api_endpoint": API_ENDPOINT} def create_model_client(): client = aip.ModelServiceClient(client_options=client_options) return client def create_endpoint_client(): client = aip.EndpointServiceClient(client_options=client_options) return client def create_prediction_client(): client = aip.PredictionServiceClient(client_options=client_options) return client def create_job_client(): client = aip.JobServiceClient(client_options=client_options) return client clients = {} clients["model"] = create_model_client() clients["endpoint"] = create_endpoint_client() clients["prediction"] = create_prediction_client() clients["job"] = create_job_client() for client in clients.items(): print(client)
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Train a model projects.locations.customJobs.create Request
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/xgboost-cpu.1-1:latest" JOB_NAME = "custom_job_XGB" + TIMESTAMP WORKER_POOL_SPEC = [ { "replica_count": 1, "machine_spec": {"machine_type": "n1-standard-4"}, "python_package_spec": { "executor_image_uri": TRAIN_IMAGE, "package_uris": ["gs://" + BUCKET_NAME + "/iris.tar.gz"], "python_module": "trainer.task", "args": ["--model-dir=" + "gs://{}/{}".format(BUCKET_NAME, JOB_NAME)], }, } ] training_job = aip.CustomJob( display_name=JOB_NAME, job_spec={"worker_pool_specs": WORKER_POOL_SPEC} ) print( MessageToJson( aip.CreateCustomJobRequest(parent=PARENT, custom_job=training_job).__dict__[ "_pb" ] ) )
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "parent": "projects/migration-ucaip-training/locations/us-central1", "customJob": { "displayName": "custom_job_XGB20210323142337", "jobSpec": { "workerPoolSpecs": [ { "machineSpec": { "machineType": "n1-standard-4" }, "replicaCount": "1", "pythonPackageSpec": { "executorImageUri": "gcr.io/cloud-aiplatform/training/xgboost-cpu.1-1:latest", "packageUris": [ "gs://migration-ucaip-trainingaip-20210323142337/iris.tar.gz" ], "pythonModule": "trainer.task", "args": [ "--model-dir=gs://migration-ucaip-trainingaip-20210323142337/custom_job_XGB20210323142337" ] } } ] } } } Call
request = clients["job"].create_custom_job(parent=PARENT, custom_job=training_job)
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "name": "projects/116273516712/locations/us-central1/customJobs/7371064379959148544", "displayName": "custom_job_XGB20210323142337", "jobSpec": { "workerPoolSpecs": [ { "machineSpec": { "machineType": "n1-standard-4" }, "replicaCount": "1", "diskSpec": { "bootDiskType": "pd-ssd", "bootDiskSizeGb": 100 }, "pythonPackageSpec": { "executorImageUri": "gcr.io/cloud-aiplatform/training/xgboost-cpu.1-1:latest", "packageUris": [ "gs://migration-ucaip-trainingaip-20210323142337/iris.tar.gz" ], "pythonModule": "trainer.task", "args": [ "--model-dir=gs://migration-ucaip-trainingaip-20210323142337/custom_job_XGB20210323142337" ] } } ] }, "state": "JOB_STATE_PENDING", "createTime": "2021-03-23T14:23:45.067026Z", "updateTime": "2021-03-23T14:23:45.067026Z" }
# The full unique ID for the custom training job custom_training_id = request.name # The short numeric ID for the custom training job custom_training_short_id = custom_training_id.split("/")[-1] print(custom_training_id)
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: projects/116273516712/locations/us-central1/customJobs/7371064379959148544
while True:
    response = clients["job"].get_custom_job(name=custom_training_id)
    # Custom jobs report their status as a JobState
    if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
        print("Training job has not completed:", response.state)
        if response.state == aip.JobState.JOB_STATE_FAILED:
            break
    else:
        print("Training Time:", response.end_time - response.start_time)
        break
    time.sleep(60)

# model artifact output directory on Google Cloud Storage
model_artifact_dir = (
    response.job_spec.worker_pool_specs[0].python_package_spec.args[0].split("=")[-1]
)
print("artifact location " + model_artifact_dir)
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Deploy the model projects.locations.models.upload Request
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest" model = { "display_name": "custom_job_XGB" + TIMESTAMP, "artifact_uri": model_artifact_dir, "container_spec": {"image_uri": DEPLOY_IMAGE, "ports": [{"container_port": 8080}]}, } print(MessageToJson(aip.UploadModelRequest(parent=PARENT, model=model).__dict__["_pb"]))
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "parent": "projects/migration-ucaip-training/locations/us-central1", "model": { "displayName": "custom_job_XGB20210323142337", "containerSpec": { "imageUri": "gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest", "ports": [ { "containerPort": 8080 } ] }, "artifactUri": "gs://migration-ucaip-trainingaip-20210323142337/custom_job_XGB20210323142337" } } Call
request = clients["model"].upload_model(parent=PARENT, model=model)
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "model": "projects/116273516712/locations/us-central1/models/2093698837704081408" }
# upload_model returns a long-running operation; wait for it to complete
result = request.result()

# The full unique ID for the model
model_id = result.model
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Make batch predictions Make a batch prediction file
import json import tensorflow as tf INSTANCES = [[1.4, 1.3, 5.1, 2.8], [1.5, 1.2, 4.7, 2.4]] gcs_input_uri = "gs://" + BUCKET_NAME + "/" + "test.jsonl" with tf.io.gfile.GFile(gcs_input_uri, "w") as f: for i in INSTANCES: f.write(str(i) + "\n") ! gsutil cat $gcs_input_uri
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: [1.4, 1.3, 5.1, 2.8] [1.5, 1.2, 4.7, 2.4] projects.locations.batchPredictionJobs.create Request
model_parameters = Value( struct_value=Struct( fields={ "confidence_threshold": Value(number_value=0.5), "max_predictions": Value(number_value=10000.0), } ) ) batch_prediction_job = { "display_name": "custom_job_XGB" + TIMESTAMP, "model": model_id, "input_config": { "instances_format": "jsonl", "gcs_source": {"uris": [gcs_input_uri]}, }, "model_parameters": model_parameters, "output_config": { "predictions_format": "jsonl", "gcs_destination": { "output_uri_prefix": "gs://" + f"{BUCKET_NAME}/batch_output/" }, }, "dedicated_resources": { "machine_spec": {"machine_type": "n1-standard-2"}, "starting_replica_count": 1, "max_replica_count": 1, }, } print( MessageToJson( aip.CreateBatchPredictionJobRequest( parent=PARENT, batch_prediction_job=batch_prediction_job ).__dict__["_pb"] ) )
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "parent": "projects/migration-ucaip-training/locations/us-central1", "batchPredictionJob": { "displayName": "custom_job_XGB20210323142337", "model": "projects/116273516712/locations/us-central1/models/2093698837704081408", "inputConfig": { "instancesFormat": "jsonl", "gcsSource": { "uris": [ "gs://migration-ucaip-trainingaip-20210323142337/test.jsonl" ] } }, "modelParameters": { "max_predictions": 10000.0, "confidence_threshold": 0.5 }, "outputConfig": { "predictionsFormat": "jsonl", "gcsDestination": { "outputUriPrefix": "gs://migration-ucaip-trainingaip-20210323142337/batch_output/" } }, "dedicatedResources": { "machineSpec": { "machineType": "n1-standard-2" }, "startingReplicaCount": 1, "maxReplicaCount": 1 } } } Call
request = clients["job"].create_batch_prediction_job( parent=PARENT, batch_prediction_job=batch_prediction_job )
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "name": "projects/116273516712/locations/us-central1/batchPredictionJobs/1415053872761667584", "displayName": "custom_job_XGB20210323142337", "model": "projects/116273516712/locations/us-central1/models/2093698837704081408", "inputConfig": { "instancesFormat": "jsonl", "gcsSource": { "uris": [ "gs://migration-ucaip-trainingaip-20210323142337/test.jsonl" ] } }, "modelParameters": { "confidence_threshold": 0.5, "max_predictions": 10000.0 }, "outputConfig": { "predictionsFormat": "jsonl", "gcsDestination": { "outputUriPrefix": "gs://migration-ucaip-trainingaip-20210323142337/batch_output/" } }, "dedicatedResources": { "machineSpec": { "machineType": "n1-standard-2" }, "startingReplicaCount": 1, "maxReplicaCount": 1 }, "manualBatchTuningParameters": {}, "state": "JOB_STATE_PENDING", "createTime": "2021-03-23T14:25:10.582704Z", "updateTime": "2021-03-23T14:25:10.582704Z" }
# The fully qualified ID for the batch job batch_job_id = request.name # The short numeric ID for the batch job batch_job_short_id = batch_job_id.split("/")[-1] print(batch_job_id)
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "name": "projects/116273516712/locations/us-central1/batchPredictionJobs/1415053872761667584", "displayName": "custom_job_XGB20210323142337", "model": "projects/116273516712/locations/us-central1/models/2093698837704081408", "inputConfig": { "instancesFormat": "jsonl", "gcsSource": { "uris": [ "gs://migration-ucaip-trainingaip-20210323142337/test.jsonl" ] } }, "modelParameters": { "max_predictions": 10000.0, "confidence_threshold": 0.5 }, "outputConfig": { "predictionsFormat": "jsonl", "gcsDestination": { "outputUriPrefix": "gs://migration-ucaip-trainingaip-20210323142337/batch_output/" } }, "dedicatedResources": { "machineSpec": { "machineType": "n1-standard-2" }, "startingReplicaCount": 1, "maxReplicaCount": 1 }, "manualBatchTuningParameters": {}, "state": "JOB_STATE_PENDING", "createTime": "2021-03-23T14:25:10.582704Z", "updateTime": "2021-03-23T14:25:10.582704Z" }
def get_latest_predictions(gcs_out_dir): """ Get the latest prediction subfolder using the timestamp in the subfolder name""" folders = !gsutil ls $gcs_out_dir latest = "" for folder in folders: subfolder = folder.split("/")[-2] if subfolder.startswith("prediction-"): if subfolder > latest: latest = folder[:-1] return latest while True: response = clients["job"].get_batch_prediction_job(name=batch_job_id) if response.state != aip.JobState.JOB_STATE_SUCCEEDED: print("The job has not completed:", response.state) if response.state == aip.JobState.JOB_STATE_FAILED: break else: folder = get_latest_predictions( response.output_config.gcs_destination.output_uri_prefix ) ! gsutil ls $folder/prediction* ! gsutil cat -h $folder/prediction* break time.sleep(60)
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: ``` ==> gs://migration-ucaip-trainingaip-20210323142337/batch_output/prediction-custom_job_XGB20210323142337-2021_03_23T07_25_10_544Z/prediction.errors_stats-00000-of-00001 <== ==> gs://migration-ucaip-trainingaip-20210323142337/batch_output/prediction-custom_job_XGB20210323142337-2021_03_23T07_25_10_544Z/prediction.results-00000-of-00001 <== {"instance": [1.4, 1.3, 5.1, 2.8], "prediction": 2.0451931953430176} {"instance": [1.5, 1.2, 4.7, 2.4], "prediction": 1.9618644714355469} ``` Make online predictions projects.locations.endpoints.create Request
endpoint = {"display_name": "custom_job_XGB" + TIMESTAMP} print( MessageToJson( aip.CreateEndpointRequest(parent=PARENT, endpoint=endpoint).__dict__["_pb"] ) )
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "parent": "projects/migration-ucaip-training/locations/us-central1", "endpoint": { "displayName": "custom_job_XGB20210323142337" } } Call
request = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "name": "projects/116273516712/locations/us-central1/endpoints/1733903448723685376" }
# create_endpoint returns a long-running operation; wait for it to complete
result = request.result()

# The full unique ID for the endpoint
endpoint_id = result.name

# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]

print(endpoint_id)
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
projects.locations.endpoints.deployModel Request
deployed_model = { "model": model_id, "display_name": "custom_job_XGB" + TIMESTAMP, "dedicated_resources": { "min_replica_count": 1, "max_replica_count": 1, "machine_spec": {"machine_type": "n1-standard-4", "accelerator_count": 0}, }, } print( MessageToJson( aip.DeployModelRequest( endpoint=endpoint_id, deployed_model=deployed_model, traffic_split={"0": 100}, ).__dict__["_pb"] ) )
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "endpoint": "projects/116273516712/locations/us-central1/endpoints/1733903448723685376", "deployedModel": { "model": "projects/116273516712/locations/us-central1/models/2093698837704081408", "displayName": "custom_job_XGB20210323142337", "dedicatedResources": { "machineSpec": { "machineType": "n1-standard-4" }, "minReplicaCount": 1, "maxReplicaCount": 1 } }, "trafficSplit": { "0": 100 } } Call
request = clients["endpoint"].deploy_model( endpoint=endpoint_id, deployed_model=deployed_model, traffic_split={"0": 100} )
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0