Spatially visualize mean annual ground temperature:
fig2 = plt.figure(figsize=(8, 4.5))
ax2 = fig2.add_axes([0.05, 0.05, 0.9, 0.85])
m2 = Basemap(llcrnrlon=-145.5, llcrnrlat=1., urcrnrlon=-2.566, urcrnrlat=46.352,
             rsphere=(6378137.00, 6356752.3142),
             resolution='l', area_thresh=1000., projection='lcc',
             lat_1=50., lon_0=-107., ax=ax2)
X, Y = m2(LONS, LATS)
m2.drawcoastlines(linewidth=1.25)
# m2.fillcontinents(color='0.8')
m2.drawparallels(np.arange(-80, 81, 20), labels=[1, 1, 0, 0])
m2.drawmeridians(np.arange(0, 360, 60), labels=[0, 0, 0, 1])
clev = np.linspace(start=-10, stop=0, num=11)
cs2 = m2.contourf(X, Y, TTOP, clev, cmap=plt.cm.seismic, extend='both')
cbar2 = m2.colorbar(cs2)
cbar2.set_label('Ground Temperature ($^\circ$C)')
plt.show()
# print(x._values["ALT"][:])
TTOP2 = np.reshape(TTOP, np.size(TTOP))
TTOP2 = TTOP2[np.where(~np.isnan(TTOP2))]
# Histogram of the valid ground temperatures:
plt.hist(TTOP2)
mask = x._model.mask
print(np.shape(mask))
plt.imshow(mask)
print(np.nanmin(x._model.tot_percent))
notebooks/Ku_2D.ipynb
permamodel/permamodel
mit
Dummy Regressor http://scikit-learn.org/stable/modules/generated/sklearn.dummy.DummyRegressor.html
from sklearn.dummy import DummyRegressor
from sklearn.metrics import mean_squared_error

dummy_regr = DummyRegressor(strategy="median")
dummy_regr.fit(X, Y)
mean_squared_error(Y, dummy_regr.predict(X))
Session-2-Hands-Experience-for-ML/DataScience_Presentation2-LR3.ipynb
msampathkumar/data_science_sessions
mit
Linear Regression http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html
from sklearn import linear_model

regr = linear_model.LinearRegression()
regr.fit(X, Y)
mean_squared_error(Y, regr.predict(X))
Session-2-Hands-Experience-for-ML/DataScience_Presentation2-LR3.ipynb
msampathkumar/data_science_sessions
mit
Linear SVR http://scikit-learn.org/stable/auto_examples/svm/plot_svm_regression.html
from sklearn.svm import LinearSVR
# Step 1: create an instance as `regr`
# Step 2: fit the data with the instance
# Score:
mean_squared_error(Y, regr.predict(X))
Session-2-Hands-Experience-for-ML/DataScience_Presentation2-LR3.ipynb
msampathkumar/data_science_sessions
mit
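The two steps left as comments above can be filled in like this; the synthetic `X` and `Y` below are stand-ins for the notebook's data, used only so the sketch runs on its own:

```python
import numpy as np
from sklearn.svm import LinearSVR
from sklearn.metrics import mean_squared_error

# Toy data standing in for the notebook's X and Y (assumption, not the real dataset).
rng = np.random.RandomState(0)
X = rng.rand(100, 3)
Y = X @ np.array([1.0, 2.0, 3.0]) + 0.01 * rng.randn(100)

# Step 1: create an instance as `regr`.
regr = LinearSVR(random_state=0, max_iter=10000)
# Step 2: fit the data with the instance.
regr.fit(X, Y)
# Score.
mse = mean_squared_error(Y, regr.predict(X))
```

On near-linear data like this, the training MSE should come out small.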
Random Forest Regressor http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html
from sklearn.ensemble import RandomForestRegressor
# Set random_state to zero
# Score:
Session-2-Hands-Experience-for-ML/DataScience_Presentation2-LR3.ipynb
msampathkumar/data_science_sessions
mit
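One way to complete the exercise above, again with synthetic stand-ins for `X` and `Y` so the sketch is self-contained:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

# Toy data standing in for the notebook's X and Y (assumption, not the real dataset).
rng = np.random.RandomState(0)
X = rng.rand(200, 3)
Y = X[:, 0] + 2 * X[:, 1]

# Set random_state to zero, as the exercise asks.
regr = RandomForestRegressor(n_estimators=100, random_state=0)
regr.fit(X, Y)
# Score (training MSE here; a held-out split would give a fairer number).
mse = mean_squared_error(Y, regr.predict(X))
```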
This Wikipedia article has a nice description of how to calculate the current phase of the moon. In code, that looks like this:
def approximate_moon_visibility(current_date):
    days_per_synodic_month = 29.530588853  # change this if the moon gets towed away
    days_since_known_new_moon = (current_date - dt.date(2015, 7, 16)).days
    phase_fraction = (days_since_known_new_moon % days_per_synodic_month) / days_per_synodic_month
    return (1 - phase_fraction if phase_fraction > 0.5 else phase_fraction) * 2

def date_string_to_date(date_string):
    return dt.datetime.strptime(date_string, "%Y%m%d").date()
Moon Phase Correlation Analysis.ipynb
Uberi/zen-and-the-art-of-telemetry
mit
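A quick sanity check of `approximate_moon_visibility` (the function is repeated here so the cell stands alone): on the reference new-moon date it should return 0, and about half a synodic month later it should be close to 1.

```python
import datetime as dt

def approximate_moon_visibility(current_date):
    # Same calculation as above: fraction through the synodic month,
    # folded so that 0 = new moon and 1 = full moon.
    days_per_synodic_month = 29.530588853
    days_since_known_new_moon = (current_date - dt.date(2015, 7, 16)).days
    phase_fraction = (days_since_known_new_moon % days_per_synodic_month) / days_per_synodic_month
    return (1 - phase_fraction if phase_fraction > 0.5 else phase_fraction) * 2

# On the reference new moon the visibility is 0...
new_moon = approximate_moon_visibility(dt.date(2015, 7, 16))
# ...and 15 days later (roughly half a synodic month) it is close to 1.
near_full = approximate_moon_visibility(dt.date(2015, 7, 31))
```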
Let's randomly sample 10% of pings for nightly submissions made from 2015-07-05 to 2015-08-05:
pings = get_pings(sc, app="Firefox", channel="nightly", submission_date=("20150705", "20150805"), fraction=0.1, schema="v4")
Moon Phase Correlation Analysis.ipynb
Uberi/zen-and-the-art-of-telemetry
mit
Extract the startup time metrics with their submission date and make sure we only consider one submission per user:
subset = get_pings_properties(pings, ["clientId",
                                      "meta/submissionDate",
                                      "payload/simpleMeasurements/firstPaint"])
subset = get_one_ping_per_client(subset)
cached = subset.cache()
Moon Phase Correlation Analysis.ipynb
Uberi/zen-and-the-art-of-telemetry
mit
Obtain an array of pairs, each containing the moon visibility and the startup time:
pairs = cached.map(lambda p: (approximate_moon_visibility(date_string_to_date(p["meta/submissionDate"])),
                              p["payload/simpleMeasurements/firstPaint"]))
pairs = np.asarray(pairs.filter(lambda p: p[1] is not None and p[1] < 100000000).collect())
Moon Phase Correlation Analysis.ipynb
Uberi/zen-and-the-art-of-telemetry
mit
Let's see what this data looks like:
plt.figure(figsize=(15, 7))
plt.scatter(pairs.T[0], pairs.T[1])
plt.xlabel("Moon visibility ratio")
plt.ylabel("Startup time (ms)")
plt.show()
Moon Phase Correlation Analysis.ipynb
Uberi/zen-and-the-art-of-telemetry
mit
The correlation coefficient is now easy to calculate:
np.corrcoef(pairs.T)[0, 1]
Moon Phase Correlation Analysis.ipynb
Uberi/zen-and-the-art-of-telemetry
mit
Unit Tests

Overview and Principles

Testing is the process by which you exercise your code to determine if it performs as expected. The code you are testing is referred to as the code under test. There are two parts to writing tests:
1. invoking the code under test so that it is exercised in a particular way;
2. evaluating the results of executing the code under test to determine if it behaved as expected.

The collection of tests performed is referred to as the test cases. The fraction of the code under test that is executed as a result of running the test cases is referred to as test coverage.

For a dynamic language such as Python, it is extremely important to have high test coverage; in fact, you should try to get 100% coverage. This is because little checking is done when the source code is read by the Python interpreter. For example, the code under test might contain a line that calls an undefined function, and this would not be detected until that line of code is executed.

Test cases can be of several types. Below are some common classifications of test cases:
- Smoke test. An invocation of the code under test to see if there is an unexpected exception. It is useful as a starting point, but it doesn't tell you anything about the correctness of the results of a computation.
- One-shot test. You call the code under test with arguments for which you know the expected result.
- Edge test. The code under test is invoked with arguments that should cause an exception, and you evaluate whether the expected exception occurs.
- Pattern test. Based on your knowledge of the calculation (not the implementation) of the code under test, you construct a suite of test cases for which the results are known, or for which there are known patterns in the results that can be used to evaluate them.

Another principle of testing is to limit what is done in a single test case. Generally, a test case should focus on one use of one function.

Sometimes this is a challenge, since the function being tested may call other functions that you are also testing. This means that bugs in the called functions may cause failures in the tests of the calling functions. Often you sort this out by knowing the structure of the code and focusing first on failures in lower-level tests. In other situations, you may use a more advanced technique called mocking; a discussion of mocking is beyond the scope of this course.

A best practice is to develop your tests while you are developing your code. Indeed, one school of thought in software engineering, called test-driven development, advocates writing the tests before implementing the code under test, so that the test cases become a kind of specification for what the code under test should do.

Examples of Test Cases

This section presents examples of test cases. The code under test is the calculation of entropy. The entropy of a set of probabilities is $$ H = -\sum_i p_i \log(p_i) $$ where $\sum_i p_i = 1$.
import numpy as np

# Code under test
def entropy(ps):
    if any([(p < 0.0) or (p > 1.0) for p in ps]):
        raise ValueError("Bad input.")
    if sum(ps) > 1:
        raise ValueError("Bad input.")
    items = ps * np.log(ps)
    new_items = []
    for item in items:
        if np.isnan(item):
            new_items.append(0)
        else:
            new_items.append(item)
    return np.abs(-np.sum(new_items))

# Smoke test
def smoke_test(ps):
    try:
        entropy(ps)
        return True
    except:
        return False

smoke_test([0.5, 0.5])

# One-shot test
0.0 == entropy([1, 0, 0, 0])

# Edge test
def edge_test(ps):
    try:
        entropy(ps)
    except ValueError:
        return True
    return False

edge_test([-1, 2])
week_4/unit-tests.ipynb
UWSEDS/LectureNotes
bsd-2-clause
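The pattern-test category described above is not exercised by the cell; one natural pattern for entropy is that a uniform distribution over n outcomes has entropy log(n), whatever n is. A sketch, restating the code under test so this cell stands alone:

```python
import numpy as np

def entropy(ps):
    # Same calculation as the code under test above.
    if any((p < 0.0) or (p > 1.0) for p in ps):
        raise ValueError("Bad input.")
    if sum(ps) > 1:
        raise ValueError("Bad input.")
    items = ps * np.log(ps)
    new_items = [0 if np.isnan(item) else item for item in items]
    return np.abs(-np.sum(new_items))

# Pattern test: a uniform distribution over n outcomes should have
# entropy log(n) -- a property of the calculation, not the implementation.
for n in [2, 4, 8, 16]:
    uniform = [1.0 / n] * n
    assert np.isclose(entropy(uniform), np.log(n))
```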
Question: What is an example of another one-shot test? (Hint: you need to know the expected result.) One edge test of interest is to provide an input that is not a distribution, in that the probabilities don't sum to 1.
# Edge test. This is something that should cause an exception.
# entropy([-0.5])
week_4/unit-tests.ipynb
UWSEDS/LectureNotes
bsd-2-clause
You see that there are many, many cases to test. So far, we've been writing special code for each test case. We can do better.

Unittest Infrastructure

There are several reasons to use a test infrastructure:
- If you have many test cases (which you should!), the test infrastructure saves you from writing a lot of code.
- The infrastructure provides a uniform way to report test results and to handle test failures.
- A test infrastructure can tell you about coverage, so you know what tests to add.

We'll be using the unittest framework, which ships with Python's standard library. Using this infrastructure requires the following:
1. import the unittest module;
2. define a class that inherits from unittest.TestCase;
3. write methods that run the code to be tested and check the outcomes.

The last item has two subparts. First, we must identify which methods in the class inheriting from unittest.TestCase are tests. You indicate that a method is to be run as a test by having the method name begin with "test". Second, the test methods should communicate the results of evaluating the output of the code under test to the infrastructure. This is done using assert statements. For example, self.assertEqual takes two arguments; if these are objects for which == returns True, the test passes. Otherwise, the test fails.
import unittest

# Define a class in which the tests will run
class UnitTests(unittest.TestCase):

    # Each method in the class executes one test
    def test_success(self):
        self.assertEqual(1, 2)

    def test_success1(self):
        self.assertTrue(1 == 1)

    def test_failure(self):
        self.assertLess(1, 2)

suite = unittest.TestLoader().loadTestsFromTestCase(UnitTests)
_ = unittest.TextTestRunner().run(suite)

# Function that handles test loading
# def test_setup(argument ?):
week_4/unit-tests.ipynb
UWSEDS/LectureNotes
bsd-2-clause
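The try/except edge tests written earlier map onto unittest via assertRaises. A minimal sketch, with a simplified stand-in for the entropy validation so the cell is self-contained:

```python
import math
import unittest

def entropy(ps):
    # Simplified stand-in for the code under test (assumption: only the
    # negative-probability check matters for this edge test).
    if any(p < 0.0 or p > 1.0 for p in ps):
        raise ValueError("Bad input.")
    return -sum(p * math.log(p) for p in ps if p > 0)

class EdgeTests(unittest.TestCase):

    def test_negative_probability_raises(self):
        # assertRaises passes only if the expected exception is raised.
        with self.assertRaises(ValueError):
            entropy([-1, 2])

suite = unittest.TestLoader().loadTestsFromTestCase(EdgeTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```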
If you already have an H2O cluster running that you'd like to connect to (for example, in a multi-node Hadoop environment), then you can specify the IP and port of that cluster as follows:
# This will not actually do anything since it's a fake IP address
# h2o.init(ip="123.45.67.89", port=54321)
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
Download Data

The following code downloads a copy of the Wisconsin Diagnostic Breast Cancer dataset. We can import the data directly into H2O using the Python API.
csv_url = "https://h2o-public-test-data.s3.amazonaws.com/smalldata/wisc/wisc-diag-breast-cancer-shuffled.csv"
data = h2o.import_file(csv_url)
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
Explore Data

Once we have loaded the data, let's take a quick look. First the dimension of the frame:
data.shape
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
Now let's take a look at the top of the frame:
data.head()
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
The first two columns contain an ID and the response. The "diagnosis" column is the response. Let's take a look at the column names. The data contains features derived from the medical images of the tumors.
data.columns
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
To select a subset of the columns to look at, typical Pandas indexing applies:
columns = ["id", "diagnosis", "area_mean"]
data[columns].head()
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
Now let's select a single column, for example -- the response column, and look at the data more closely:
data['diagnosis']
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
It looks like a binary response, but let's validate that assumption:
data['diagnosis'].unique()
data['diagnosis'].nlevels()
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
We can query the categorical "levels" as well ('B' and 'M' stand for "Benign" and "Malignant" diagnosis):
data['diagnosis'].levels()
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
Since the "diagnosis" column is the response we would like to predict, we may want to check if there are any missing values, so let's look for NAs. To figure out which, if any, values are missing, we can use the isna method on the diagnosis column. The columns of an H2O Frame are also H2O Frames themselves, so all the methods that apply to a Frame also apply to a single column.
data.isna()
data['diagnosis'].isna()
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
The isna method doesn't directly answer the question, "Does the diagnosis column contain any NAs?". Rather, it returns a 0 if a cell is not missing (Is NA? FALSE == 0) and a 1 if it is missing (Is NA? TRUE == 1). So if there are no missing values, summing over the whole column should produce 0.0. Let's take a look:
data['diagnosis'].isna().sum()
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
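The 0/1-sum logic works the same way on an ordinary pandas column, which makes it easy to check with toy data (the column below is hypothetical, not from the H2O frame):

```python
import pandas as pd

# Hypothetical toy column with exactly one missing value.
col = pd.Series(['B', 'M', None, 'B'])

# isna() yields True (1) where a cell is missing and False (0) elsewhere,
# so the sum counts the missing entries.
n_missing = col.isna().sum()
```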
Great, no missing labels. Out of curiosity, let's see if there is any missing data in this frame:
data.isna().sum()
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
The next thing I may wonder about in a binary classification problem is the distribution of the response in the training data. Is one of the two outcomes under-represented in the training set? Many real datasets have what's called an "imbalance" problem, where one of the classes has far fewer training examples than the other. Let's take a look at the distribution, both visually and numerically.
# TO DO: Insert a bar chart or something showing the proportion of M to B in the response.
data['diagnosis'].table()
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
Ok, the data is not exactly evenly distributed between the two classes -- there are almost twice as many Benign samples as Malignant samples. However, this level of imbalance shouldn't be much of an issue for the machine learning algorithms. (We will revisit this later in the modeling section below.)
n = data.shape[0]  # Total number of training samples
data['diagnosis'].table()['Count'] / n
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
Machine Learning in H2O

We will do a quick demo of the H2O software, trying to predict malignant tumors using various machine learning algorithms.

Specify the predictor set and response

The response, y, is the 'diagnosis' column, and the predictors, x, are all the columns aside from the first two columns ('id' and 'diagnosis').
y = 'diagnosis'
x = data.columns
del x[0:2]  # drop 'id' and 'diagnosis'
x
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
Split H2O Frame into a train and test set
train, test = data.split_frame(ratios=[0.75], seed=1)
train.shape
test.shape
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
Train and Test a GBM model
# Import the H2O GBM estimator:
from h2o.estimators.gbm import H2OGradientBoostingEstimator
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
We first create a model object of class "H2OGradientBoostingEstimator". This does not actually do any training; it just sets the model up for training by specifying model parameters.
model = H2OGradientBoostingEstimator(distribution='bernoulli',
                                     ntrees=100,
                                     max_depth=4,
                                     learn_rate=0.1)
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
The model object, like all H2O estimator objects, has a train method, which will actually perform model training. At this step we specify the training and (optionally) a validation set, along with the response and predictor variables.
model.train(x=x, y=y, training_frame=train, validation_frame=test)
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
Inspect Model

The type of results shown when you print a model is determined by the following:
- the model class of the estimator (e.g. GBM, RF, GLM, DL);
- the type of machine learning problem (e.g. binary classification, multiclass classification, regression);
- the data you specify (e.g. training_frame only; training_frame and validation_frame; or training_frame and nfolds).

Below we see a GBM model summary, as well as training and validation metrics, since we supplied a validation_frame. Since this is a binary classification task, we are shown the relevant performance metrics, which include: MSE, R^2, LogLoss, AUC and Gini. We are also shown a confusion matrix, where the threshold for classification is chosen automatically (by H2O) as the threshold which maximizes the F1 score. The scoring history is also printed, which shows the performance metrics over some increment, such as the number of trees in the case of GBM and RF. Lastly, for tree-based methods (GBM and RF), variable importance is also printed.
print(model)
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
Model Performance on a Test Set

Once a model has been trained, you can also use it to make predictions on a test set. In the case above, we passed the test set as the validation_frame in training, so we have technically already created test set predictions and performance metrics. However, when performing model selection over a variety of model parameters, it is common to break the dataset into three pieces: training, validation and test. After training a variety of models using different parameters (and evaluating them on a validation set), the user may choose a single model and then evaluate its performance on a separate test set. This is when the model_performance method, shown below, is most useful.
perf = model.model_performance(test)
perf.auc()
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
Cross-validated Performance

To perform k-fold cross-validation, you use the same code as above, but specify nfolds as an integer greater than 1, or add a "fold_column" to your H2O Frame which indicates a fold ID for each row. Unless you have a specific reason to assign observations to folds manually, you will find it easiest to simply use the nfolds argument. When performing cross-validation, you can still pass a validation_frame, but you can also choose to use the original dataset that contains all the rows. We will cross-validate a model below using the original H2O Frame, which we call data.
cvmodel = H2OGradientBoostingEstimator(distribution='bernoulli',
                                       ntrees=100,
                                       max_depth=4,
                                       learn_rate=0.1,
                                       nfolds=5)
cvmodel.train(x=x, y=y, training_frame=data)
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
Grid Search

One way of evaluating models with different parameters is to perform a grid search over a set of parameter values. For example, in GBM, here are three model parameters that may be useful to search over:
- ntrees: number of trees
- max_depth: maximum depth of a tree
- learn_rate: learning rate of the GBM

We will define a grid as follows:
ntrees_opt = [5, 50, 100]
max_depth_opt = [2, 3, 5]
learn_rate_opt = [0.1, 0.2]
hyper_params = {'ntrees': ntrees_opt,
                'max_depth': max_depth_opt,
                'learn_rate': learn_rate_opt}
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
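The grid above cannot run without an H2O cluster, but the same dictionary-of-lists idea drives scikit-learn's GridSearchCV. Here is a hedged sketch with GradientBoostingClassifier standing in for the H2O GBM, a reduced grid, and synthetic data (all assumptions, not the notebook's setup):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the breast cancer frame.
X, y = make_classification(n_samples=200, n_features=5, random_state=1)

# A reduced version of the grid above, in scikit-learn's parameter names
# (ntrees -> n_estimators, learn_rate -> learning_rate).
hyper_params = {
    'n_estimators': [5, 50],
    'max_depth': [2, 3],
    'learning_rate': [0.1, 0.2],
}

gs = GridSearchCV(GradientBoostingClassifier(random_state=1),
                  hyper_params, cv=3, scoring='roc_auc')
gs.fit(X, y)

# One model per point of the 2 x 2 x 2 grid.
n_models = len(gs.cv_results_['params'])
```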
Define an "H2OGridSearch" object by specifying the algorithm (GBM) and the hyper parameters:
from h2o.grid.grid_search import H2OGridSearch

gs = H2OGridSearch(H2OGradientBoostingEstimator, hyper_params=hyper_params)
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
An "H2OGridSearch" object also has a train method, which is used to train all the models in the grid.
gs.train(x=x, y=y, training_frame=train, validation_frame=test)
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
Compare Models
print(gs)

# Print the AUC for all of the models
for g in gs:
    print(g.model_id + " auc: " + str(g.auc()))

# TO DO: Compare grid search models
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
mathemage/h2o-3
apache-2.0
Identifier for storing these features on disk and referring to them later.
feature_list_id = 'wm_intersect'
notebooks/feature-wm-intersect.ipynb
YuriyGuts/kaggle-quora-question-pairs
mit
Load Data

Original question datasets.
df_train = pd.read_csv(project.data_dir + 'train.csv').fillna('none')
df_test = pd.read_csv(project.data_dir + 'test.csv').fillna('none')
notebooks/feature-wm-intersect.ipynb
YuriyGuts/kaggle-quora-question-pairs
mit
Build features
df_all_pairs = pd.concat([
    df_train[['question1', 'question2']],
    df_test[['question1', 'question2']]
], axis=0).reset_index(drop='index')

stops = set(nltk.corpus.stopwords.words('english'))

def word_match_share(pair):
    q1 = str(pair[0]).lower().split()
    q2 = str(pair[1]).lower().split()
    q1words = {}
    q2words = {}
    for word in q1:
        if word not in stops:
            q1words[word] = 1
    for word in q2:
        if word not in stops:
            q2words[word] = 1
    if len(q1words) == 0 or len(q2words) == 0:
        # The computer-generated chaff includes a few questions that are nothing but stopwords
        return 0
    shared_words_in_q1 = [w for w in q1words.keys() if w in q2words]
    shared_words_in_q2 = [w for w in q2words.keys() if w in q1words]
    R = (len(shared_words_in_q1) + len(shared_words_in_q2)) / (len(q1words) + len(q2words))
    return R

wms = kg.jobs.map_batch_parallel(
    df_all_pairs[['question1', 'question2']].as_matrix(),
    item_mapper=word_match_share,
    batch_size=1000,
)

q_dict = defaultdict(dict)
for i in progressbar(range(len(wms))):
    q_dict[df_all_pairs.question1[i]][df_all_pairs.question2[i]] = wms[i]
    q_dict[df_all_pairs.question2[i]][df_all_pairs.question1[i]] = wms[i]

def q1_q2_intersect(row):
    return len(set(q_dict[row['question1']]).intersection(set(q_dict[row['question2']])))

def q1_q2_wm_ratio(row):
    q1 = q_dict[row['question1']]
    q2 = q_dict[row['question2']]
    inter_keys = set(q1.keys()).intersection(set(q2.keys()))
    if len(inter_keys) == 0:
        return 0
    inter_wm = 0
    total_wm = 0
    for q, wm in q1.items():
        if q in inter_keys:
            inter_wm += wm
        total_wm += wm
    for q, wm in q2.items():
        if q in inter_keys:
            inter_wm += wm
        total_wm += wm
    if total_wm == 0:
        return 0
    return inter_wm / total_wm

df_train['q1_q2_wm_ratio'] = df_train.apply(q1_q2_wm_ratio, axis=1, raw=True)
df_test['q1_q2_wm_ratio'] = df_test.apply(q1_q2_wm_ratio, axis=1, raw=True)
df_train['q1_q2_intersect'] = df_train.apply(q1_q2_intersect, axis=1, raw=True)
df_test['q1_q2_intersect'] = df_test.apply(q1_q2_intersect, axis=1, raw=True)
notebooks/feature-wm-intersect.ipynb
YuriyGuts/kaggle-quora-question-pairs
mit
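A quick sanity check of `word_match_share` on toy questions (the function is restated with a small fixed stopword set, an assumption made so no NLTK download is needed):

```python
# Small fixed stopword set standing in for NLTK's English list.
stops = {'the', 'is', 'a', 'of'}

def word_match_share(pair):
    # Same logic as above: ratio of shared non-stopwords to all non-stopwords.
    q1 = str(pair[0]).lower().split()
    q2 = str(pair[1]).lower().split()
    q1words = {w: 1 for w in q1 if w not in stops}
    q2words = {w: 1 for w in q2 if w not in stops}
    if len(q1words) == 0 or len(q2words) == 0:
        return 0
    shared_q1 = [w for w in q1words if w in q2words]
    shared_q2 = [w for w in q2words if w in q1words]
    return (len(shared_q1) + len(shared_q2)) / (len(q1words) + len(q2words))

# Identical questions share every non-stopword...
full = word_match_share(("What is the capital of France",
                         "What is the capital of France"))
# ...and all-stopword questions fall through to 0.
empty = word_match_share(("the of a", "is the"))
```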
Visualize
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
intersect_counts = df_train.q1_q2_intersect.value_counts()
sns.barplot(intersect_counts.index[:20], intersect_counts.values[:20])
plt.subplot(1, 2, 2)
df_train['q1_q2_wm_ratio'].plot.hist()

plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
sns.violinplot(x='is_duplicate', y='q1_q2_wm_ratio', data=df_train)
plt.subplot(1, 2, 2)
sns.violinplot(x='is_duplicate', y='q1_q2_intersect', data=df_train)

df_train.plot.scatter(x='q1_q2_intersect', y='q1_q2_wm_ratio', figsize=(12, 6))
print(df_train[['q1_q2_intersect', 'q1_q2_wm_ratio']].corr())
notebooks/feature-wm-intersect.ipynb
YuriyGuts/kaggle-quora-question-pairs
mit
Build final features
columns_to_keep = [
    'q1_q2_intersect',
    'q1_q2_wm_ratio',
]
X_train = df_train[columns_to_keep].values
X_test = df_test[columns_to_keep].values
notebooks/feature-wm-intersect.ipynb
YuriyGuts/kaggle-quora-question-pairs
mit
Save features
feature_names = columns_to_keep
project.save_features(X_train, X_test, feature_names, feature_list_id)
notebooks/feature-wm-intersect.ipynb
YuriyGuts/kaggle-quora-question-pairs
mit
Loading data and deleting irrelevant features
kobe = pd.read_csv('data.csv', sep=',')
kobe = kobe[np.isfinite(kobe['shot_made_flag'])]
for col in ['lat', 'lon', 'game_id', 'team_id', 'team_name']:
    del kobe[col]

kobe_2 = pd.read_csv('data.csv', sep=',')
kobe_2 = kobe_2[np.isfinite(kobe_2['shot_made_flag'])]
for col in ['lat', 'lon', 'game_id', 'team_id', 'team_name']:
    del kobe_2[col]
Data Visualization/Project/predictive modal - xgboost/kobe_sim_xgboost.ipynb
vipmunot/Data-Science-Course
mit
encoding categorical features
# Label-encode each categorical column, keeping the fitted encoders around
# (the commented-out kobe_2 transforms from the original are dropped).
categorical_columns = ['matchup', 'opponent', 'game_date', 'action_type',
                       'combined_shot_type', 'season', 'shot_type',
                       'shot_zone_area', 'shot_zone_basic', 'shot_zone_range']
encoders = {}
for col in categorical_columns:
    encoders[col] = preprocessing.LabelEncoder()
    kobe[col] = encoders[col].fit_transform(kobe[col])
Data Visualization/Project/predictive modal - xgboost/kobe_sim_xgboost.ipynb
vipmunot/Data-Science-Course
mit
splitting data into test and train
# Generate the training set. Set random_state to be able to replicate results.
train = kobe.sample(frac=0.6, random_state=1)
train_2 = kobe_2.sample(frac=0.6, random_state=1)

# Select anything not in the training set and put it in the testing set.
test = kobe.loc[~kobe.index.isin(train.index)]
test_2 = kobe_2.loc[~kobe_2.index.isin(train_2.index)]
Data Visualization/Project/predictive modal - xgboost/kobe_sim_xgboost.ipynb
vipmunot/Data-Science-Course
mit
separating features and class in both test and train sets
columns = kobe.columns.tolist()
columns = [c for c in columns if c not in ["shot_made_flag", "team_id", "team_name"]]
kobe_train_x = train[columns]
kobe_test_x = test[columns]
kobe_train_y = train['shot_made_flag']
kobe_test_y = test['shot_made_flag']
print(kobe_train_x.shape)
print(kobe_test_x.shape)
print(kobe_train_y.shape)
print(kobe_test_y.shape)
Data Visualization/Project/predictive modal - xgboost/kobe_sim_xgboost.ipynb
vipmunot/Data-Science-Course
mit
Getting the best parameters. Do not run this section, as the best set of parameters has already been found.
def optimization(depth, n_est, l_r):
    maxacc = 0
    best_depth = 0
    best_n_est = 0
    best_l_r = 0
    best_key = ""
    for i in range(1, depth):
        for j in n_est:
            for k in l_r:
                gbm = xgb.XGBClassifier(max_depth=i, n_estimators=j,
                                        learning_rate=k).fit(kobe_train_x, kobe_train_y)
                predicted = gbm.predict(kobe_test_x)
                key = str(i) + "_" + str(j) + "_" + str(k)
                accu = accuracy_score(kobe_test_y, predicted)
                if accu > maxacc:
                    maxacc = accu
                    best_depth = i
                    best_n_est = j
                    best_l_r = k
                    best_key = key
    print(best_key + " " + str(maxacc))
    return (best_depth, best_n_est, best_l_r)

n_est = [5, 10, 20, 50, 100, 150, 200, 250, 300, 350, 400, 450, 500,
         550, 600, 650, 700, 750, 800, 850, 900, 950, 1000]
depth = 10
l_r = [0.0001, 0.001, 0.01, 0.05, 0.1, 0.2, 0.3]
best_depth, best_n_est, best_l_r = optimization(depth, n_est, l_r)
Data Visualization/Project/predictive modal - xgboost/kobe_sim_xgboost.ipynb
vipmunot/Data-Science-Course
mit
creating model with best parameter combination and reporting metrics
# Hard-coded best parameters from the search above
gbm = xgb.XGBClassifier(max_depth=4, n_estimators=600,
                        learning_rate=0.01).fit(kobe_train_x, kobe_train_y)
predicted = gbm.predict(kobe_test_x)

# Summarize the fit of the model
print(metrics.classification_report(kobe_test_y, predicted))
print("Confusion Matrix")
print(metrics.confusion_matrix(kobe_test_y, predicted))
accuracy = accuracy_score(kobe_test_y, predicted)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
Data Visualization/Project/predictive modal - xgboost/kobe_sim_xgboost.ipynb
vipmunot/Data-Science-Course
mit
creating a test file with predicted results to visualize
test_2['predicted'] = predicted
test_2.to_csv(path_or_buf='test_with_predictions.csv', sep=',')
test_2.head(10)
Data Visualization/Project/predictive modal - xgboost/kobe_sim_xgboost.ipynb
vipmunot/Data-Science-Course
mit
1) Explore the dataset

Numerical exploration
- Load the csv file into memory using Pandas
- Describe each attribute (is it discrete? is it continuous? is it a number?)
- Identify the target
- Check if any values are missing

Load the csv file into memory using Pandas
df = pd.read_csv('iris-2-classes.csv')
Iris Flowers Workshop.ipynb
Dataweekends/pyladies_intro_to_data_science
mit
What's the content of df ?
df.iloc[[0,1,98,99]]
Iris Flowers Workshop.ipynb
Dataweekends/pyladies_intro_to_data_science
mit
Describe each attribute (is it discrete? is it continuous? is it a number? is it text?)
df.info()
Iris Flowers Workshop.ipynb
Dataweekends/pyladies_intro_to_data_science
mit
Quick stats on the features
df.describe()
Iris Flowers Workshop.ipynb
Dataweekends/pyladies_intro_to_data_science
mit
Identify the target What are we trying to predict? ah, yes... the type of Iris flower!
df['iris_type'].value_counts()
Iris Flowers Workshop.ipynb
Dataweekends/pyladies_intro_to_data_science
mit
Check if any values are missing
df.info()
Iris Flowers Workshop.ipynb
Dataweekends/pyladies_intro_to_data_science
mit
Mental notes so far:
- Dataset contains 100 entries
- 1 target column (iris_type)
- 4 numerical features
- No missing values

Visual exploration

Distribution of Sepal Length, and its influence on the target:
df[df['iris_type'] == 'virginica']['sepal_length_cm'].plot(
    kind='hist', bins=10, range=(4, 7), alpha=0.3, color='b')
df[df['iris_type'] == 'versicolor']['sepal_length_cm'].plot(
    kind='hist', bins=10, range=(4, 7), alpha=0.3, color='g')
plt.title('Distribution of Sepal Length', size='20')
plt.xlabel('Sepal Length (cm)', size='20')
plt.ylabel('Number of flowers', size='20')
plt.legend(['Virginica', 'Versicolor'])
plt.show()
Iris Flowers Workshop.ipynb
Dataweekends/pyladies_intro_to_data_science
mit
Two features combined, scatter plot:
plt.scatter(df[df['iris_type'] == 'virginica']['petal_length_cm'].values,
            df[df['iris_type'] == 'virginica']['sepal_length_cm'].values,
            label='Virginica', c='b', s=40)
plt.scatter(df[df['iris_type'] == 'versicolor']['petal_length_cm'].values,
            df[df['iris_type'] == 'versicolor']['sepal_length_cm'].values,
            label='Versicolor', c='r', marker='s', s=40)
plt.legend(['virginica', 'versicolor'], loc=2)
plt.title('Iris Flowers', size='20')
plt.xlabel('Petal Length (cm)', size='20')
plt.ylabel('Sepal Length (cm)', size='20')
plt.show()
Iris Flowers Workshop.ipynb
Dataweekends/pyladies_intro_to_data_science
mit
Ok, so the flowers seem to have different characteristics. Let's build a simple model to test that. Define a new target column called target like this: - if iris_type = 'virginica' ===> target = 1 - otherwise target = 0
df['target'] = df['iris_type'].map({'virginica': 1, 'versicolor': 0}) print df[['iris_type', 'target']].head(2) print print df[['iris_type', 'target']].tail(2)
Iris Flowers Workshop.ipynb
Dataweekends/pyladies_intro_to_data_science
mit
Define simplest model as benchmark The simplest model is a model that predicts 0 for everybody, i.e. all versicolor. How good is it?
df['target'].value_counts()
Iris Flowers Workshop.ipynb
Dataweekends/pyladies_intro_to_data_science
mit
If I predict every flower is Versicolor, I'm correct 50% of the time. We need to do better than that. Define features (X) and target (y) variables
X = df[['sepal_length_cm', 'sepal_width_cm', 'petal_length_cm', 'petal_width_cm']] y = df['target']
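The 50% benchmark can be verified directly. A minimal sketch, assuming the balanced 50/50 labels this dataset has (y here is a stand-in for the real target column):

```python
import numpy as np

# Hypothetical labels: 50 versicolor (0) and 50 virginica (1),
# mirroring the balanced split in this dataset
y = np.array([0] * 50 + [1] * 50)

# The benchmark model predicts 0 ("versicolor") for every flower
y_pred = np.zeros_like(y)

# Accuracy of the all-zeros benchmark
baseline_accuracy = (y_pred == y).mean()
print(baseline_accuracy)  # 0.5
```

Any useful model has to beat this baseline on held-out data.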
Iris Flowers Workshop.ipynb
Dataweekends/pyladies_intro_to_data_science
mit
Initialize a Decision Tree model
from sklearn.tree import DecisionTreeClassifier model = DecisionTreeClassifier(random_state=0) model
Iris Flowers Workshop.ipynb
Dataweekends/pyladies_intro_to_data_science
mit
Split the features and the target into Train and Test subsets. The ratio should be 70/30
from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state=0)
Iris Flowers Workshop.ipynb
Dataweekends/pyladies_intro_to_data_science
mit
Train the model
model.fit(X_train, y_train)
Iris Flowers Workshop.ipynb
Dataweekends/pyladies_intro_to_data_science
mit
Calculate the model score
my_score = model.score(X_test, y_test) print "Classification Score: %0.2f" % my_score
Iris Flowers Workshop.ipynb
Dataweekends/pyladies_intro_to_data_science
mit
Print the confusion matrix
from sklearn.metrics import confusion_matrix y_pred = model.predict(X_test) print "\n=======confusion matrix==========" print confusion_matrix(y_test, y_pred)
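The counts in a confusion matrix can be turned into summary metrics by hand. A sketch using made-up counts, not the actual matrix printed above:

```python
import numpy as np

# Hypothetical 2x2 confusion matrix: rows = true class, cols = predicted class
cm = np.array([[13, 2],    # true class 0: 13 correct, 2 misclassified
               [1, 14]])   # true class 1: 1 misclassified, 14 correct

tn, fp, fn, tp = cm.ravel()
accuracy = (tp + tn) / cm.sum()   # fraction of all predictions that are right
precision = tp / (tp + fp)        # of predicted positives, how many are real
recall = tp / (tp + fn)           # of real positives, how many were found
print(accuracy)   # 0.9
print(precision)  # 0.875
```

This is the same layout scikit-learn's confusion_matrix uses for a binary problem.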
Iris Flowers Workshop.ipynb
Dataweekends/pyladies_intro_to_data_science
mit
Problem 2: Round 5.23222 to two decimal places
print round(5.23222,2)
P12Advanced Python Objects - Test.ipynb
jArumugam/python-notes
mit
Advanced Strings Problem 3: Check if every letter in the string s is lower case
s = 'hello how are you Mary, are you feeling okay?' retVal = 1 for word in s.split(): print word for item in word: if not item.islower(): print 'The string has Uppercase characters' retVal = 0 break print retVal s.islower()
P12Advanced Python Objects - Test.ipynb
jArumugam/python-notes
mit
Problem 4: How many times does the letter 'w' show up in the string below?
s = 'twywywtwywbwhsjhwuwshshwuwwwjdjdid' s.count('w')
P12Advanced Python Objects - Test.ipynb
jArumugam/python-notes
mit
Advanced Sets Problem 5: Find the elements in set1 that are not in set2:
set1 = {2,3,1,5,6,8} set2 = {3,1,7,5,6,8} set1.difference(set2)
P12Advanced Python Objects - Test.ipynb
jArumugam/python-notes
mit
Problem 6: Find all elements that are in either set:
set1.union(set2)
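For reference, a quick sketch of the common set operations on these two sets:

```python
set1 = {2, 3, 1, 5, 6, 8}
set2 = {3, 1, 7, 5, 6, 8}

print(set1 - set2)   # only in set1 (difference)
print(set1 & set2)   # in both (intersection)
print(set1 | set2)   # in either (union)
print(set1 ^ set2)   # in exactly one (symmetric difference)
```

The operators are shorthand for the difference, intersection, union and symmetric_difference methods.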
P12Advanced Python Objects - Test.ipynb
jArumugam/python-notes
mit
Advanced Dictionaries Problem 7: Create this dictionary: {0: 0, 1: 1, 2: 8, 3: 27, 4: 64} using dictionary comprehension.
{ val:val**3 for val in xrange(0,5)}
P12Advanced Python Objects - Test.ipynb
jArumugam/python-notes
mit
Advanced Lists Problem 8: Reverse the list below:
l = [1,2,3,4] l[::-1]
P12Advanced Python Objects - Test.ipynb
jArumugam/python-notes
mit
Problem 9: Sort the list below
l = [3,4,2,5,1] sorted(l)
P12Advanced Python Objects - Test.ipynb
jArumugam/python-notes
mit
Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a negative example of the principles that Tufte describes in The Visual Display of Quantitative Information. CNN Fox News Time Upload the image for the visualization to this directory and display the image inline in this notebook.
# Add your filename and uncomment the following line: Image(filename='TheoryAndPracticeEx02graph.png')
assignments/assignment04/TheoryAndPracticeEx02.ipynb
jegibbs/phys202-2015-work
mit
Loading the data The dataset we're going to use can be downloaded from Kaggle. It contains data about credit card transactions that occurred during a period of two days, with 492 frauds out of 284,807 transactions. All variables in the dataset are numerical. The data has been transformed using a PCA transformation for privacy reasons. The two features that haven't been changed are Time and Amount. Time contains the seconds elapsed between each transaction and the first transaction in the dataset.
df = pd.read_csv("creditcard.csv")
NEU/Sai_Raghuram_Kothapalli_DL/Creditcard Fraud detection using Autoencoders.ipynb
nikbearbrown/Deep_Learning
mit
Exploration
df.shape
NEU/Sai_Raghuram_Kothapalli_DL/Creditcard Fraud detection using Autoencoders.ipynb
nikbearbrown/Deep_Learning
mit
31 columns, 2 of which are Time and Amount. The rest are output from the PCA transformation. Let's check for missing values:
df.isnull().values.any() count_classes = pd.value_counts(df['Class'], sort = True) count_classes.plot(kind = 'bar', rot=0) plt.title("Transaction class distribution") plt.xticks(range(2), LABELS) plt.xlabel("Class") plt.ylabel("Frequency");
NEU/Sai_Raghuram_Kothapalli_DL/Creditcard Fraud detection using Autoencoders.ipynb
nikbearbrown/Deep_Learning
mit
We have a highly imbalanced dataset on our hands. Normal transactions overwhelm the fraudulent ones by a large margin. Let's look at the two types of transactions:
frauds = df[df.Class == 1] normal = df[df.Class == 0] frauds.shape plt.hist(normal.Amount, bins = 100) plt.xlim([0,20000]) plt.ylim([0,10000]) plt.tight_layout()
NEU/Sai_Raghuram_Kothapalli_DL/Creditcard Fraud detection using Autoencoders.ipynb
nikbearbrown/Deep_Learning
mit
Let's have a more graphical representation:
f, axes = plt.subplots(nrows = 2, ncols = 1, sharex = True) axes[0].hist(normal.Amount, bins = 100) axes[0].set_xlim([0,20000]) axes[0].set_ylim([0,10000]) axes[0].set_title('Normal') axes[1].hist(frauds.Amount, bins = 50) axes[1].set_xlim([0,10000]) axes[1].set_ylim([0,200]) axes[1].set_title('Frauds')
NEU/Sai_Raghuram_Kothapalli_DL/Creditcard Fraud detection using Autoencoders.ipynb
nikbearbrown/Deep_Learning
mit
Autoencoders Let's get started with autoencoders: we optimize the parameters of our Autoencoder model in such a way that a special kind of error, the reconstruction error, is minimized. In practice, the traditional squared error is often used: $$\textstyle L(x,x') = ||\, x - x'||^2$$ Preparing the data First, let's drop the Time column (we're not going to use it) and apply scikit-learn's StandardScaler to the Amount column. The scaler removes the mean and scales the values to unit variance:
from sklearn.preprocessing import StandardScaler data = df.drop(['Time'], axis=1) data['Amount'] = StandardScaler().fit_transform(data['Amount'].values.reshape(-1, 1))
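The squared reconstruction error L(x, x') defined above is easy to compute per sample. A minimal numpy sketch with toy vectors (not data from this dataset):

```python
import numpy as np

# Toy input and its (imperfect) reconstruction
x = np.array([1.0, 2.0, 3.0])
x_prime = np.array([1.1, 1.8, 3.2])

# L(x, x') = ||x - x'||^2, the squared Euclidean distance
reconstruction_error = np.sum((x - x_prime) ** 2)
print(reconstruction_error)  # approximately 0.09
```

A perfect reconstruction gives an error of 0; the autoencoder is trained to push this quantity down.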
NEU/Sai_Raghuram_Kothapalli_DL/Creditcard Fraud detection using Autoencoders.ipynb
nikbearbrown/Deep_Learning
mit
Training our Autoencoder is going to be a bit different from what we are used to. Let's say you have a dataset containing a lot of non-fraudulent transactions at hand, and you want to detect any anomaly in new transactions. We will create this situation by training our model on the normal transactions only. Reserving the correct class on the test set will give us a way to evaluate the performance of our model. We will reserve 20% of our data for testing:
X_train, X_test = train_test_split(data, test_size=0.2, random_state=RANDOM_SEED) X_train = X_train[X_train.Class == 0] X_train = X_train.drop(['Class'], axis=1) y_test = X_test['Class'] X_test = X_test.drop(['Class'], axis=1) X_train = X_train.values X_test = X_test.values X_train.shape
NEU/Sai_Raghuram_Kothapalli_DL/Creditcard Fraud detection using Autoencoders.ipynb
nikbearbrown/Deep_Learning
mit
Building the model Our Autoencoder uses 4 fully connected layers with 32, 16, 16 and 29 neurons respectively (matching encoding_dim = 32 and the 29 input features in the code below). The first two layers are used for our encoder, the last two go for the decoder. Additionally, L1 regularization will be used during training:
input_dim = X_train.shape[1] encoding_dim = 32 input_layer = Input(shape=(input_dim, )) encoder = Dense(encoding_dim, activation="relu", activity_regularizer=regularizers.l1(10e-5))(input_layer) encoder = Dense(int(encoding_dim / 2), activation="sigmoid")(encoder) decoder = Dense(int(encoding_dim / 2), activation='sigmoid')(encoder) decoder = Dense(input_dim, activation='relu')(decoder) autoencoder = Model(inputs=input_layer, outputs=decoder)
NEU/Sai_Raghuram_Kothapalli_DL/Creditcard Fraud detection using Autoencoders.ipynb
nikbearbrown/Deep_Learning
mit
Let's train our model for 100 epochs with a batch size of 32 samples and save the best performing model to a file. The ModelCheckpoint provided by Keras is really handy for such tasks. Additionally, the training progress will be exported in a format that TensorBoard understands.
import h5py as h5py nb_epoch = 100 batch_size = 32 autoencoder.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy']) checkpointer = ModelCheckpoint(filepath="model.h5", verbose=0, save_best_only=True) tensorboard = TensorBoard(log_dir='./logs', histogram_freq=0, write_graph=True, write_images=True) history = autoencoder.fit(X_train, X_train, epochs=nb_epoch, batch_size=batch_size, shuffle=True, validation_data=(X_test, X_test), verbose=1, callbacks=[checkpointer, tensorboard]).history autoencoder = load_model('model.h5')
NEU/Sai_Raghuram_Kothapalli_DL/Creditcard Fraud detection using Autoencoders.ipynb
nikbearbrown/Deep_Learning
mit
Evaluation
plt.plot(history['loss']) plt.plot(history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper right');
NEU/Sai_Raghuram_Kothapalli_DL/Creditcard Fraud detection using Autoencoders.ipynb
nikbearbrown/Deep_Learning
mit
The reconstruction error on our training and test data seems to converge nicely. Is it low enough? Let's have a closer look at the error distribution:
predictions = autoencoder.predict(X_test) mse = np.mean(np.power(X_test - predictions, 2), axis=1) error_df = pd.DataFrame({'reconstruction_error': mse, 'true_class': y_test}) error_df.describe()
NEU/Sai_Raghuram_Kothapalli_DL/Creditcard Fraud detection using Autoencoders.ipynb
nikbearbrown/Deep_Learning
mit
Reconstruction error without fraud
fig = plt.figure() ax = fig.add_subplot(111) normal_error_df = error_df[(error_df['true_class']== 0) & (error_df['reconstruction_error'] < 10)] _ = ax.hist(normal_error_df.reconstruction_error.values, bins=10)
NEU/Sai_Raghuram_Kothapalli_DL/Creditcard Fraud detection using Autoencoders.ipynb
nikbearbrown/Deep_Learning
mit
Reconstruction error with fraud
fig = plt.figure() ax = fig.add_subplot(111) fraud_error_df = error_df[error_df['true_class'] == 1] _ = ax.hist(fraud_error_df.reconstruction_error.values, bins=10)
NEU/Sai_Raghuram_Kothapalli_DL/Creditcard Fraud detection using Autoencoders.ipynb
nikbearbrown/Deep_Learning
mit
Prediction Our model is a bit different this time. It doesn't know how to predict new values. But we don't need that. In order to predict whether a new/unseen transaction is normal or fraudulent, we'll calculate the reconstruction error from the transaction data itself. If the error is larger than a predefined threshold, we'll mark it as a fraud (since our model should have a low error on normal transactions). Let's pick that value:
threshold = 2.9
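The decision rule described above (flag a transaction whenever its reconstruction error exceeds the threshold) can be sketched as:

```python
import numpy as np

threshold = 2.9

# Hypothetical per-transaction reconstruction errors
errors = np.array([0.4, 1.2, 5.7, 0.9, 3.1])

# 1 = flagged as fraud, 0 = considered normal
flags = (errors > threshold).astype(int)
print(flags)  # [0 0 1 0 1]
```

Raising the threshold trades fewer false alarms for more missed frauds, and vice versa.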
NEU/Sai_Raghuram_Kothapalli_DL/Creditcard Fraud detection using Autoencoders.ipynb
nikbearbrown/Deep_Learning
mit
And see how well we're dividing the two types of transactions:
groups = error_df.groupby('true_class') fig, ax = plt.subplots() for name, group in groups: ax.plot(group.index, group.reconstruction_error, marker='o', ms=3.5, linestyle='', label= "Fraud" if name == 1 else "Normal") ax.hlines(threshold, ax.get_xlim()[0], ax.get_xlim()[1], colors="r", zorder=100, label='Threshold') ax.legend() plt.title("Reconstruction error for different classes") plt.ylabel("Reconstruction error") plt.xlabel("Data point index") plt.show();
NEU/Sai_Raghuram_Kothapalli_DL/Creditcard Fraud detection using Autoencoders.ipynb
nikbearbrown/Deep_Learning
mit
That chart might be a bit deceiving. Let's have a look at the confusion matrix:
from sklearn.metrics import (confusion_matrix) y_pred = [1 if e > threshold else 0 for e in error_df.reconstruction_error.values] conf_matrix = confusion_matrix(error_df.true_class, y_pred) plt.figure(figsize=(12, 12)) sns.heatmap(conf_matrix, xticklabels=LABELS, yticklabels=LABELS, annot=True, fmt="d"); plt.title("Confusion matrix") plt.ylabel('True class') plt.xlabel('Predicted class') plt.show()
NEU/Sai_Raghuram_Kothapalli_DL/Creditcard Fraud detection using Autoencoders.ipynb
nikbearbrown/Deep_Learning
mit
What subsets of scientific questions tend to be answered correctly by the same subjects? Mining
from orangecontrib.associate.fpgrowth import * import pandas as pd from numpy import * questions = correctedScientific.columns correctedScientificText = [[] for _ in range(correctedScientific.shape[0])] for q in questions: for index in range(correctedScientific.shape[0]): r = correctedScientific.index[index] if correctedScientific.loc[r, q]: correctedScientificText[index].append(q) #correctedScientificText len(correctedScientificText) # Get frequent itemsets with support > 20% # run time < 1 min support = 0.20 itemsets = frequent_itemsets(correctedScientificText, math.floor(len(correctedScientificText) * support)) #dict(itemsets) # Generate rules according to confidence, confidence > 80% # run time < 5 min confidence = 0.80 rules = association_rules(dict(itemsets), confidence) #list(rules) # Transform rules generator into a Dataframe rulesDataframe = pd.DataFrame([(ant, cons, supp, conf) for ant, cons, supp, conf in rules]) rulesDataframe.rename(columns = {0:"antecedants", 1:"consequents", 2:"support", 3:"confidence"}, inplace=True) rulesDataframe.head() # Save the mined rules to file rulesDataframe.to_csv("results/associationRulesMiningSupport"+str(support)+"percentsConfidence"+str(confidence)+"percents.csv")
v1.52.2/Data mining/ruleAssociationMining.ipynb
CyberCRI/dataanalysis-herocoli-redmetrics
cc0-1.0
Search for interesting rules Interesting rules are more likely to be the ones with the highest confidence, the highest lift, or a bigger consequent set. Pairs can also be especially interesting.
# Sort rules by confidence confidenceSortedRules = rulesDataframe.sort_values(by = ["confidence", "support"], ascending=[False, False]) confidenceSortedRules.head(50) # Sort rules by size of consequent set rulesDataframe["consequentSize"] = rulesDataframe["consequents"].apply(lambda x: len(x)) consequentSortedRules = rulesDataframe.sort_values(by = ["consequentSize", "confidence", "support"], ascending=[False, False, False]) consequentSortedRules.head(50) # Select only pairs (rules with antecedent and consequent of size one) # Sort pairs according to confidence rulesDataframe["fusedRule"] = rulesDataframe[["antecedants", "consequents"]].apply(lambda x: frozenset().union(*x), axis=1) rulesDataframe["ruleSize"] = rulesDataframe["fusedRule"].apply(lambda x: len(x)) pairRules = rulesDataframe.sort_values(by=["ruleSize", "confidence", "support"], ascending=[True, False, False]) pairRules.head(30) correctedScientific.columns # Sort questions by number of appearances in consequents for q in scientificQuestions: rulesDataframe[q+"c"] = rulesDataframe["consequents"].apply(lambda x: 1 if q in x else 0) occurenceInConsequents = rulesDataframe.loc[:,scientificQuestions[0]+"c":scientificQuestions[-1]+"c"].sum(axis=0) occurenceInConsequents.sort_values(inplace=True, ascending=False) occurenceInConsequents # Sort questions by number of appearances in antecedants for q in scientificQuestions: rulesDataframe[q+"a"] = rulesDataframe["antecedants"].apply(lambda x: 1 if q in x else 0) occurenceInAntecedants = rulesDataframe.loc[:,scientificQuestions[0]+"a":scientificQuestions[-1]+"a"].sum(axis=0) occurenceInAntecedants.sort_values(inplace=True, ascending=False) occurenceInAntecedants sortedPrePostProgression = pd.read_csv("../../data/sortedPrePostProgression.csv") sortedPrePostProgression.index = sortedPrePostProgression.iloc[:,0] sortedPrePostProgression = sortedPrePostProgression.drop(sortedPrePostProgression.columns[0], axis = 1) del sortedPrePostProgression.index.name
sortedPrePostProgression.loc['occ_ant',:] = 0 sortedPrePostProgression.loc['occ_csq',:] = 0 sortedPrePostProgression for questionA, occsA in enumerate(occurenceInAntecedants): questionVariableName = occurenceInAntecedants.index[questionA][:-1] question = globals()[questionVariableName] questionC = questionVariableName + "c" sortedPrePostProgression.loc['occ_ant',question] = occsA occsC = occurenceInConsequents.loc[questionC] sortedPrePostProgression.loc['occ_csq',question] = occsC #print(questionVariableName+"='"+question+"'") #print("\t"+questionVariableName+"a="+str(occsA)+","+questionC+"="+str(occsC)) #print() sortedPrePostProgression.T
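Lift, mentioned above as a criterion for interesting rules but not computed by the mining code, relates a rule's confidence to the consequent's baseline support. A sketch with hypothetical support values:

```python
# Hypothetical rule statistics for A -> B, as fractions of all records
support_antecedent = 0.30   # P(A)
support_consequent = 0.40   # P(B)
support_rule = 0.25         # P(A and B)

confidence = support_rule / support_antecedent   # P(B | A)
lift = confidence / support_consequent           # P(B | A) / P(B)

print(round(confidence, 4))  # 0.8333
print(round(lift, 4))        # 2.0833
```

A lift above 1 means the antecedent makes the consequent more likely than its base rate, so higher-lift rules carry more information than confidence alone suggests.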
v1.52.2/Data mining/ruleAssociationMining.ipynb
CyberCRI/dataanalysis-herocoli-redmetrics
cc0-1.0
Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. <img src="assets/neural_network.png" width=300px> The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation. We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation. Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$. Below, you have these tasks: 1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function. 2. Implement the forward pass in the train method. 3. Implement the backpropagation algorithm in the train method, including calculating the output error. 4. Implement the forward pass in the run method.
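As a quick check of the hint above, here is a minimal sketch of the sigmoid activation, its derivative, and a numerical verification of that derivative; the derivative of the output activation f(x) = x is just the constant 1:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(x):
    # d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x))
    s = sigmoid(x)
    return s * (1 - s)

# Central-difference check of the analytic derivative at a sample point
x = 0.5
numeric = (sigmoid(x + 1e-6) - sigmoid(x - 1e-6)) / 2e-6
print(np.isclose(numeric, sigmoid_prime(x)))  # True
```

Because the derivative of f(x) = x is 1, the output error term in the backward pass needs no extra activation-derivative factor.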
class NeuralNetwork(object): def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Initialize weights self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, (self.input_nodes, self.hidden_nodes)) self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, (self.hidden_nodes, self.output_nodes)) self.lr = learning_rate #### TODO: Set self.activation_function to your implemented sigmoid function #### # # Note: in Python, you can define a function with a lambda expression, # as shown below. # self.activation_function = sigmoid # Replace 0 with your sigmoid calculation. ### If the lambda code above is not something you're familiar with, # You can uncomment out the following three lines and put your # implementation there instead. # def sigmoid(x): return 1/(1+np.exp(-x)) # Replace 0 with your sigmoid calculation here self.activation_function = sigmoid def train(self, features, targets): ''' Train the network on batch of features and targets. Arguments --------- features: 2D array, each row is one data record, each column is a feature targets: 1D array of target values ''' learnrate = self.lr n_records = features.shape[0] delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape) delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape) for X, y in zip(features, targets): #### Implement the forward pass here #### ### Forward pass ### # TODO: Hidden layer - Replace these values with your calculations. hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer # TODO: Output layer - Replace these values with your calculations. 
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer final_outputs = final_inputs # signals from final output layer, since activation function is f(x) = x #### Implement the backward pass here #### ### Backward pass ### # TODO: Output error - Replace this value with your calculations. error = y-final_outputs # Output layer error is the difference between desired target and actual output. # TODO: Backpropagated error terms - Replace these values with your calculations. output_error_term = error # since derivative for identity function is 1 # TODO: Calculate the hidden layer's contribution to the error hidden_error = np.dot( self.weights_hidden_to_output,output_error_term) hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs) # Weight step (input to hidden) delta_weights_i_h += hidden_error_term * X[:, None] # Weight step (hidden to output) delta_weights_h_o += output_error_term * hidden_outputs[: , None] # TODO: Update the weights - Replace these values with your calculations. self.weights_hidden_to_output += learnrate * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step self.weights_input_to_hidden += learnrate * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step def run(self, features): ''' Run a forward pass through the network with input features Arguments --------- features: 1D array of feature values ''' #### Implement the forward pass here #### # TODO: Hidden layer - replace these values with the appropriate calculations. hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer # TODO: Output layer - Replace these values with the appropriate calculations. 
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer final_outputs = final_inputs # signals from final output layer return final_outputs def MSE(y, Y): return np.mean((y-Y)**2)
first-neural-network/Your_first_neural_network.ipynb
raoyvn/deep-learning
mit
Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops. You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later. Choose the number of iterations This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase. Choose the learning rate This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge. Choose the number of hidden nodes The more hidden nodes you have, the more accurate predictions the model will make.
Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
import sys ### Set the hyperparameters here ### iterations = 10000 learning_rate = 0.3 hidden_nodes = 12 output_nodes = 1 N_i = train_features.shape[1] network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate) losses = {'train':[], 'validation':[]} for ii in range(iterations): # Go through a random batch of 128 records from the training data set batch = np.random.choice(train_features.index, size=128) X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt'] network.train(X, y) # Printing out the training progress train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values) val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values) sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \ + "% ... Training loss: " + str(train_loss)[:5] \ + " ... Validation loss: " + str(val_loss)[:5]) sys.stdout.flush() losses['train'].append(train_loss) losses['validation'].append(val_loss) plt.plot(losses['train'], label='Training loss') plt.plot(losses['validation'], label='Validation loss') plt.legend() _ = plt.ylim()
first-neural-network/Your_first_neural_network.ipynb
raoyvn/deep-learning
mit
SAPCAR Archive version 2.00 We first create a temporary file and compress it inside an archive file:
with open("some_file", "w") as fd: fd.write("Some string to compress") f0 = SAPCARArchive("archive_file.car", mode="wb", version=SAPCAR_VERSION_200) f0.add_file("some_file")
docs/fileformats/SAPCAR.ipynb
CoreSecurity/pysap
gpl-2.0
The file comprises the following main structures: SAPCAR Archive Header
f0._sapcar.canvas_dump()
docs/fileformats/SAPCAR.ipynb
CoreSecurity/pysap
gpl-2.0