Now let's write our training data table to BigQuery for train and eval so that we can export to CSV for TensorFlow.
job_config = bigquery.QueryJobConfig() csv_select_list = "med_sales_price_agg, labels_agg" for step in ["train", "eval"]: if step == "train": selquery = "SELECT {csv_select_list} FROM ({}) WHERE {} < 80".format( query_csv_sub_sequences, sampling_clause) else: selquery = "SELECT {csv_...
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Export BigQuery table to CSV in GCS.
dataset_ref = client.dataset( dataset_id=sink_dataset_name, project=client.project) for step in ["train", "eval"]: destination_uri = "gs://{}/{}".format( BUCKET, "forecasting/nyc_real_estate/data/{}*.csv".format(step)) table_name = "nyc_real_estate_{}".format(step) table_ref = dataset_ref.table...
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Train TensorFlow on Google Cloud AI Platform.
import os PROJECT = PROJECT # REPLACE WITH YOUR PROJECT ID BUCKET = BUCKET # REPLACE WITH A BUCKET NAME REGION = "us-central1" # REPLACE WITH YOUR REGION e.g. us-central1 # Import os environment variables os.environ["PROJECT"] = PROJECT os.environ["BUCKET"] = BUCKET os.environ["REGION"] = REGION os.environ["TF_VERSIO...
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section.) It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in an acceptable amount of time. In the interest of time, please refrain from running get_nump...
''' produces probablistic estimate for P(y_i = +1 | x_i, w). estimate ranges between 0 and 1. ''' def predict_probability(feature_matrix, coefficients): # Take dot product of feature_matrix and coefficients product = feature_matrix.dot(coefficients) # Compute P(y_i = +1 | x_i, w) using the link funct...
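The truncated cell above can be fleshed out as follows — a minimal sketch, assuming NumPy and the standard logistic link function:

```python
import numpy as np

def predict_probability(feature_matrix, coefficients):
    """Produce the probabilistic estimate P(y_i = +1 | x_i, w) for each row.

    The estimate ranges between 0 and 1 via the logistic link function.
    """
    # Take the dot product of feature_matrix and coefficients
    score = feature_matrix.dot(coefficients)
    # Compute P(y_i = +1 | x_i, w) using the link function 1 / (1 + exp(-score))
    return 1.0 / (1.0 + np.exp(-score))

# With all-zero coefficients every score is 0, so every probability is 0.5
probs = predict_probability(np.array([[1.0, 0.0], [0.0, 1.0]]),
                            np.array([0.0, 0.0]))
```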
Classification/Week 2/Assignment 2/module-4-linear-classifier-regularization-assignment-blank.ipynb
rashikaranpuria/Machine-Learning-Specialization
mit
Adding L2 penalty Let us now work on extending logistic regression with L2 regularization. As discussed in the lectures, the L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail. Recall from lecture and the previous assignment that for logi...
def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant): # Compute the dot product of errors and feature derivative = sum(feature * errors) # add L2 penalty term for any feature that isn't the intercept. if not feature_is_constant: derivative = deriv...
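A complete version of the truncated derivative cell might look like this sketch (assuming NumPy; the L2 term subtracts `2 * l2_penalty * coefficient` for every feature except the intercept, as described in the lecture):

```python
import numpy as np

def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty,
                               feature_is_constant):
    # Compute the dot product of errors and the feature column
    derivative = np.dot(feature, errors)
    # Subtract the L2 penalty term for any feature that isn't the intercept
    if not feature_is_constant:
        derivative -= 2.0 * l2_penalty * coefficient
    return derivative
```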
Classification/Week 2/Assignment 2/module-4-linear-classifier-regularization-assignment-blank.ipynb
rashikaranpuria/Machine-Learning-Specialization
mit
Quiz question: Does the term with L2 regularization increase or decrease $\ell\ell(\mathbf{w})$? The logistic regression function looks almost like the one in the last assignment, with a minor modification to account for the L2 penalty. Fill in the code below to complete this modification.
def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter): coefficients = np.array(initial_coefficients) # make sure it's a numpy array for itr in xrange(max_iter): # Predict P(y_i = +1|x_i,w) using your predict_probability() function predi...
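A runnable end-to-end sketch of the training loop above (written for Python 3 with `range` rather than the original `xrange`; the sigmoid helper and per-feature updates are assumptions filled in from the surrounding cells, not the author's exact code):

```python
import numpy as np

def predict_probability(feature_matrix, coefficients):
    # P(y_i = +1 | x_i, w) via the logistic link function
    return 1.0 / (1.0 + np.exp(-feature_matrix.dot(coefficients)))

def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients,
                                step_size, l2_penalty, max_iter):
    coefficients = np.array(initial_coefficients, dtype=float)
    for _ in range(max_iter):
        predictions = predict_probability(feature_matrix, coefficients)
        # indicator is 1 when the label is +1, 0 otherwise
        indicator = (sentiment == +1)
        errors = indicator - predictions
        for j in range(len(coefficients)):
            derivative = np.dot(errors, feature_matrix[:, j])
            if j != 0:  # do not regularize the intercept
                derivative -= 2.0 * l2_penalty * coefficients[j]
            coefficients[j] += step_size * derivative
    return coefficients
```

On a tiny separable dataset, a larger `l2_penalty` visibly shrinks the non-intercept coefficient, which is the behaviour the quiz question probes.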
Classification/Week 2/Assignment 2/module-4-linear-classifier-regularization-assignment-blank.ipynb
rashikaranpuria/Machine-Learning-Specialization
mit
Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words. Quiz Question. Which of the following is not listed in eithe...
positive_words = table.topk('coefficients [L2=0]', 5, reverse = False)['word'] negative_words = table.topk('coefficients [L2=0]', 5, reverse = True)['word'] print positive_words print negative_words
Classification/Week 2/Assignment 2/module-4-linear-classifier-regularization-assignment-blank.ipynb
rashikaranpuria/Machine-Learning-Specialization
mit
We have to convert our integer labels to categorical values in a format that Keras accepts.
y_train_k = to_categorical(y_train[:, np.newaxis]) y_test_k = to_categorical(y_test[:, np.newaxis]) y_critical_k = to_categorical(labels[critical][:, np.newaxis])
doc/Programs/IsingModel/neuralnetIsing.ipynb
CompPhysics/MachineLearning
cc0-1.0
We now fit our model for $10$ epochs and validate on the test data.
history = clf.fit( X_train, y_train_k, validation_data=(X_test, y_test_k), epochs=10, batch_size=200, verbose=True )
doc/Programs/IsingModel/neuralnetIsing.ipynb
CompPhysics/MachineLearning
cc0-1.0
We now evaluate the model.
train_accuracy = clf.evaluate(X_train, y_train_k, batch_size=200)[1] test_accuracy = clf.evaluate(X_test, y_test_k, batch_size=200)[1] critical_accuracy = clf.evaluate(data[critical], y_critical_k, batch_size=200)[1] print ("Accuracy on train data: {0}".format(train_accuracy)) print ("Accuracy on test data: {0}".forma...
doc/Programs/IsingModel/neuralnetIsing.ipynb
CompPhysics/MachineLearning
cc0-1.0
Then we plot the ROC curve for three datasets.
fig = plt.figure(figsize=(20, 14)) for (_X, _y), label in zip( [ (X_train, y_train_k), (X_test, y_test_k), (data[critical], y_critical_k) ], ["Train", "Test", "Critical"] ): proba = clf.predict(_X) fpr, tpr, _ = skm.roc_curve(_y[:, 1], proba[:, 1]) roc_auc = skm.auc(fpr,...
doc/Programs/IsingModel/neuralnetIsing.ipynb
CompPhysics/MachineLearning
cc0-1.0
Basic Math Operations
print(3 + 5) print(3 - 5) print(3 * 5) print(3 ** 5) # Observation: this code gives different results for python2 and python3 # because of the behaviour of the division operator print(3 / 5.0) print(3 / 5) # for compatibility, make sure to use the following statement from __future__ import division print(3 / 5.0) print...
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Data Structures
countries = ['Portugal','Spain','United Kingdom'] print(countries)
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Exercise 0.1 Use L[i:j] to return the countries in the Iberian Peninsula.
countries[0:2]
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Loops and Indentation
i = 2 while i < 10: print(i) i += 2 for i in range(2,10,2): print(i) a=1 while a <= 3: print(a) a += 1
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Exercise 0.2 Can you then predict the output of the following code?:
a=1 while a <= 3: print(a) a += 1
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Control Flow
hour = 16 if hour < 12: print('Good morning!') elif hour >= 12 and hour < 20: print('Good afternoon!') else: print('Good evening!')
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Functions
def greet(hour): if hour < 12: print('Good morning!') elif hour >= 12 and hour < 20: print('Good afternoon!') else: print('Good evening!')
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Exercise 0.3 Note that the previous code allows the hour to be less than 0 or more than 24. Change the code in order to indicate that the hour given as input is invalid. Your output should be something like: greet(50) Invalid hour: it should be between 0 and 24. greet(-5) Invalid hour: it should be between 0 and 24.
greet(50) greet(-5)
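One possible solution sketch for Exercise 0.3 — the only change to the original `greet` is a range check before the existing branches:

```python
def greet(hour):
    # Reject hours outside the valid range first
    if hour < 0 or hour > 24:
        print('Invalid hour: it should be between 0 and 24.')
    elif hour < 12:
        print('Good morning!')
    elif hour >= 12 and hour < 20:
        print('Good afternoon!')
    else:
        print('Good evening!')

greet(50)
greet(-5)
```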
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Profiling
%prun greet(22)
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Debugging in Python
def greet2(hour): if hour < 12: print('Good morning!') elif hour >= 12 and hour < 20: print('Good afternoon!') else: import pdb; pdb.set_trace() print('Good evening!') # try: greet2(22)
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Exceptions For a complete list of built-in exceptions, see http://docs.python.org/2/library/exceptions.html
raise ValueError("Invalid input value.") while True: try: x = int(input("Please enter a number: ")) break except ValueError: print("Oops! That was no valid number. Try again...")
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Extending basic Functionalities with Modules
import numpy as np np.var? np.random.normal?
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Organizing your Code with your own modules See details in the guide. Matplotlib – Plotting in Python
import numpy as np import matplotlib.pyplot as plt %matplotlib inline X = np.linspace(-4, 4, 1000) plt.plot(X, X**2*np.cos(X**2)) plt.savefig("simple.pdf")
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Exercise 0.5 Try running the following on Jupyter, which will introduce you to some of the basic numeric and plotting operations.
# This will import the numpy library # and give it the np abbreviation import numpy as np # This will import the plotting library import matplotlib.pyplot as plt # Linspace will return 1000 points, # evenly spaced between -4 and +4 X = np.linspace(-4, 4, 1000) # Y[i] = X[i]**2 Y = X**2 # Plot using a red line ('r') plt...
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Exercise 0.6 Run the following example and look up the ptp function/method (use the ? functionality in Jupyter)
A = np.arange(100) # These two lines do exactly the same thing print(np.mean(A)) print(A.mean()) np.ptp?
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Exercise 0.7 Consider the following approximation to compute an integral \begin{equation} \int_0^1 f(x) dx \approx \sum_{i=0}^{999} \frac{f(i/1000)}{1000} \end{equation} Use numpy to implement this for $f(x) = x^2$. You should not need to use any loops. Note that integer division in Python 2.x returns the floor ...
def f(x): return(x**2) sum([f(x*1./1000)/1000 for x in range(0,1000)])
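The solution above still uses a Python loop; the exercise asks for a loop-free version. A vectorized sketch with numpy:

```python
import numpy as np

def f(x):
    return x ** 2

# 1000 evenly spaced sample points i/1000 for i = 0..999
x = np.arange(1000) / 1000.0
# Riemann-sum approximation of the integral of x^2 on [0, 1]
approx = np.sum(f(x)) / 1000.0
```

The exact integral is 1/3, and the approximation lands within about 5e-4 of it.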
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Exercise 0.8 In the rest of the school we will represent both matrices and vectors as numpy arrays. You can create arrays in different ways, one possible way is to create an array of zeros.
import numpy as np m = 3 n = 2 a = np.zeros([m,n]) print(a)
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
You can check the shape and the data type of your array using the following commands:
print(a.shape) print(a.dtype.name)
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
This shows you that “a” is a 3×2 array of type float64. By default, arrays contain 64-bit floating point numbers. You can specify the particular array type by using the keyword dtype.
a = np.zeros([m,n],dtype=int) print(a.dtype)
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
You can also create arrays from lists of numbers:
a = np.array([[2,3],[3,4]]) print(a)
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Exercise 0.9 You can multiply two matrices by looping over both indexes and multiplying the individual entries.
a = np.array([[2,3],[3,4]]) b = np.array([[1,1],[1,1]]) a_dim1, a_dim2 = a.shape b_dim1, b_dim2 = b.shape c = np.zeros([a_dim1,b_dim2]) for i in range(a_dim1): for j in range(b_dim2): for k in range(a_dim2): c[i,j] += a[i,k]*b[k,j] print(c)
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
This is, however, cumbersome and inefficient. Numpy supports matrix multiplication with the dot function:
d = np.dot(a,b) print(d) a = np.array([1,2]) b = np.array([1,1]) np.dot(a,b) np.outer(a,b) I = np.eye(2) x = np.array([2.3, 3.4]) print(I) print(np.dot(I,x)) A = np.array([ [1, 2], [3, 4] ]) print(A) print(A.T)
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Let's build a pricer for a bond that we are going to invent. It is a bond worth 0 as long as fewer than 1.5 year fractions have elapsed since the signing date. Once that time is exceeded, the bond is worth 5% more than the strike price.
from datetime import date def newbond(strike, signed, time, daycount): signed = date(*map(int, signed.split('-'))) time = date(*map(int, time.split('-'))) yearfrac = daycount(signed, time) if yearfrac > 1.5: return (1 + 0.05) * strike else: return 0.0
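The cell above relies on an external day counter. As a self-contained illustration, here is the same pricer paired with a minimal actual/360 counter — a hypothetical stand-in for mastr's `DayCounter`, not its actual implementation:

```python
from datetime import date

def actual_360(start, end):
    # Actual number of elapsed days divided by a 360-day year (actual/360)
    return (end - start).days / 360.0

def newbond(strike, signed, time, daycount):
    signed = date(*map(int, signed.split('-')))
    time = date(*map(int, time.split('-')))
    yearfrac = daycount(signed, time)
    if yearfrac > 1.5:
        return (1 + 0.05) * strike
    return 0.0

print(newbond(100, '2016-9-7', '2017-9-10', actual_360))  # ~1.02 year fractions -> 0.0
print(newbond(100, '2016-9-7', '2018-9-10', actual_360))  # ~2.04 year fractions -> strike * 1.05
```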
curso/7-Mastr.ipynb
ekergy/jupyter_notebooks
gpl-3.0
The best part is that we can now try it out, write unit tests... From mastr we will import the object that counts year fractions.
from mastr.bootstrapping.daycount import DayCounter value = newbond(100, '2016-9-7', '2017-9-10', DayCounter('actual/360')) print('2017-9-10', value) value = newbond(100, '2016-9-7', '2018-9-10', DayCounter('actual/360')) print('2018-9-10', value)
curso/7-Mastr.ipynb
ekergy/jupyter_notebooks
gpl-3.0
But mastr is designed to run IDN, the scripts that describe portfolios and scenarios.
sandbox.add_pricer(newbond) sandbox.add_object(DayCounter) %cat data/script.json with open('data/script.json') as f: results = sandbox.eval(f.read()) print(results)
curso/7-Mastr.ipynb
ekergy/jupyter_notebooks
gpl-3.0
The IDN interpreter is built to evaluate the portfolio under multiple scenarios. In this case, we are going to evaluate several time scenarios.
%cat data/scriptdata.json %cat data/data.json
curso/7-Mastr.ipynb
ekergy/jupyter_notebooks
gpl-3.0
This new JSON file, which contains the data for the different scenarios, is simply an additional argument to the sandbox.
with open('data/scriptdata.json') as f: with open('data/data.json') as g: results = sandbox.eval(f.read(), g.read()) print(results)
curso/7-Mastr.ipynb
ekergy/jupyter_notebooks
gpl-3.0
Finally, we can use the plotting capabilities to analyze the results and the evolution of the bond's value over the given dates.
import matplotlib.pyplot as plt import json %matplotlib notebook dates = [ date(2016,9,10), date(2016,12,10), date(2017,9,10), date(2018,9,10), date(2019,9,10) ] fig1 = plt.figure(1) ax = fig1.add_subplot(1,1,1) ax.plot(dates, [r['eval1'] for r in results]) plt.setp(ax.get_xticklabels(), r...
curso/7-Mastr.ipynb
ekergy/jupyter_notebooks
gpl-3.0
Below I'm running images through the VGG network in batches. Exercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).
batch_size = 100 codes_list = [] labels = [] batch = [] codes = None with tf.Session() as sess: # TODO: Build the vgg network here vgg = vgg16.Vgg16() input_ = tf.placeholder(tf.float32, [None, 224, 224, 3]) with tf.name_scope("content_vgg"): vgg.build(input_) for each in cla...
transfer-learning/.ipynb_checkpoints/Transfer_Learning-checkpoint.ipynb
mdiaz236/DeepLearningFoundations
mit
Instead of estimating the expected reward from selecting a particular arm, we may only care about the relative preference of one arm to another.
n_arms = 10 bandit = bd.GaussianBandit(n_arms, mu=4) n_trials = 1000 n_experiments = 500
notebooks/Stochastic Bandits - Preference Estimation.ipynb
bgalbraith/bandits
apache-2.0
Softmax Preference learning uses a Softmax-based policy, where the action estimates are converted to a probability distribution using the softmax function. This is then sampled to produce the chosen arm.
policy = bd.SoftmaxPolicy() agents = [ bd.GradientAgent(bandit, policy, alpha=0.1), bd.GradientAgent(bandit, policy, alpha=0.4), bd.GradientAgent(bandit, policy, alpha=0.1, baseline=False), bd.GradientAgent(bandit, policy, alpha=0.4, baseline=False) ] env = bd.Environment(bandit, agents, 'Gradient Agent...
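The conversion the Softmax policy performs can be sketched in isolation — a minimal, numerically stable softmax that turns action preferences into a probability distribution (this is a generic illustration, not the `bd.SoftmaxPolicy` source):

```python
import numpy as np

def softmax(preferences):
    # Subtract the max before exponentiating for numerical stability
    z = np.exp(preferences - np.max(preferences))
    return z / np.sum(z)

# Higher preference -> higher sampling probability for that arm
probs = softmax(np.array([1.0, 2.0, 3.0]))
```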
notebooks/Stochastic Bandits - Preference Estimation.ipynb
bgalbraith/bandits
apache-2.0
Dataset Parameters Let's create the ParameterSet which would be added to the Bundle when calling add_dataset. Later we'll call add_dataset, which will create and attach this ParameterSet for us.
ps, constraints = phoebe.dataset.etv(component='mycomponent') print ps
2.2/tutorials/ETV.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Currently, none of the available etv methods actually compute fluxes. But if one is added that computes a light curve and actually finds the time of mid-eclipse, then the passband-dependent parameters will be added here. For information on these passband-dependent parameters, see the section on the lc dataset. Ns
print ps['Ns']
2.2/tutorials/ETV.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
time_ephems NOTE: this parameter will be constrained when added through add_dataset
print ps['time_ephems']
2.2/tutorials/ETV.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
time_ecls
print ps['time_ecls']
2.2/tutorials/ETV.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
etvs NOTE: this parameter will be constrained when added through add_dataset
print ps['etvs']
2.2/tutorials/ETV.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Compute Options Let's look at the compute options (for the default PHOEBE 2 backend) that relate to the ETV dataset. Other compute options are covered elsewhere: * parameters related to dynamics are explained in the section on the orb dataset
ps_compute = phoebe.compute.phoebe() print ps_compute
2.2/tutorials/ETV.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
etv_method
print ps_compute['etv_method']
2.2/tutorials/ETV.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
etv_tol
print ps_compute['etv_tol']
2.2/tutorials/ETV.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Synthetics
b.add_dataset('etv', Ns=np.linspace(0,10,11), dataset='etv01') b.add_compute() b.run_compute() b['etv@model'].twigs print b['time_ephems@primary@etv@model'] print b['time_ecls@primary@etv@model'] print b['etvs@primary@etv@model']
2.2/tutorials/ETV.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Plotting By default, ETV datasets plot as etv vs time_ephem. Of course, a simple binary with no companion or apsidal motion won't show much of a signal (this is essentially flat with some noise). To see more ETV examples see: Apsidal Motion Minimal Hierarchical Triple LTTE ETVs in a Hierarchical Triple
axs, artists = b['etv@model'].plot()
2.2/tutorials/ETV.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Alternatively, especially when overplotting with a light curve, it's sometimes handy to just plot ticks at each of the eclipse times. This can easily be done by passing a single value for 'y'. For other examples with light curves as well see: * Apsidal Motion * LTTE ETVs in a Hierarchical Triple
axs, artists = b['etv@model'].plot(x='time_ecls', y=2)
2.2/tutorials/ETV.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Training the classifier using majority voting Now we use the cleaned and conditioned training set to create a facies classifier. As mentioned above, we will use a type of machine learning model known as majority voting. We train classifiers using four models: KNeighbors, random forest, logistic regression and gradient...
from sklearn import neighbors clf = neighbors.KNeighborsClassifier(n_neighbors=20, weights='distance', algorithm='kd_tree', leaf_size=30, metric='minkowski', ...
SHandPR/MajorityVoting.ipynb
seg/2016-ml-contest
apache-2.0
As noted above, the boundaries between the facies classes are not all sharp, and some of them blend into one another. The error within these 'adjacent facies' can also be calculated. We define an array to represent the facies adjacent to each other. For facies label i, adjacent_facies[i] is an array of the adjacent ...
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]]) def accuracy_adjacent(conf, adjacent_facies): nb_classes = conf.shape[0] total_correct = 0. for i in np.arange(0,nb_classes): total_correct += conf[i][i] for j in adjacent_facies[i]: total_...
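A complete version of the truncated adjacent-facies accuracy function might look like this sketch (assuming NumPy and a square confusion matrix whose rows are true classes):

```python
import numpy as np

# For facies label i, adjacent_facies[i] lists the labels adjacent to it
adjacent_facies = np.array([[1], [0, 2], [1], [4], [3, 5], [4, 6, 7],
                            [5, 7], [5, 6, 8], [6, 7]], dtype=object)

def accuracy_adjacent(conf, adjacent_facies):
    # Count predictions that hit the true facies or one of its neighbours
    nb_classes = conf.shape[0]
    total_correct = 0.0
    for i in np.arange(0, nb_classes):
        total_correct += conf[i][i]
        for j in adjacent_facies[i]:
            total_correct += conf[i][j]
    return total_correct / conf.sum()
```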
SHandPR/MajorityVoting.ipynb
seg/2016-ml-contest
apache-2.0
Using the voting classifier The voting classifier is now used to combine the individual models' votes and classify the data.
from sklearn.ensemble import VotingClassifier vtclf = VotingClassifier(estimators=[ ('KNN', clf), ('RFC', RFC), ('GBM', gbModel),('LR',lgr)], voting='hard', weights=[2,2,1,1]) vtclf.fit(X_train,y_train) vtclfpredicted_labels = vtclf.predict(X_tes...
SHandPR/MajorityVoting.ipynb
seg/2016-ml-contest
apache-2.0
Reading
import csv import sys with open('data.csv', 'rt') as f: reader = csv.reader(f) for row in reader: print(row)
DataPersistence/csv.ipynb
gaufung/PythonStandardLibrary
mit
Dialect
import csv print(csv.list_dialects())
DataPersistence/csv.ipynb
gaufung/PythonStandardLibrary
mit
Creating Dialect
import csv csv.register_dialect('pipes', delimiter='|') with open('testdata.pipes', 'r') as f: reader = csv.reader(f, dialect='pipes') for row in reader: print(row)
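The cell above assumes `testdata.pipes` already exists. A self-contained round trip — writing a pipe-delimited file to a temporary location and reading it back with the same dialect — looks like this (the file name and row contents are illustrative):

```python
import csv
import os
import tempfile

csv.register_dialect('pipes', delimiter='|')

path = os.path.join(tempfile.gettempdir(), 'testdata.pipes')

# Write two rows using the registered 'pipes' dialect
with open(path, 'w', newline='') as f:
    writer = csv.writer(f, dialect='pipes')
    writer.writerow(['Title 1', 'Title 2'])
    writer.writerow(['1', 'first line'])

# Read them back with the same dialect
with open(path, 'r', newline='') as f:
    rows = list(csv.reader(f, dialect='pipes'))
print(rows)
```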
DataPersistence/csv.ipynb
gaufung/PythonStandardLibrary
mit
Plotly Setup
# plotly validate with credentials with open('../_credentials/plotly.txt', 'r') as infile: user, pw = infile.read().strip().split(', ') plotly.tools.set_credentials_file(username=user, api_key=pw) text_color = 'rgb(107, 107, 107)' colors_dict = {'grey':'rgb(189, 195, 199)', 'aqua':'rgb( 54, 215, 183)', 'navy'...
charts.ipynb
bryantbiggs/movie_torrents
mit
Load Cleaned Data from S3
# aws keys stored in ini file in same path os.environ['AWS_CONFIG_FILE'] = 'aws_config.ini' s3 = S3FileSystem(anon=False) key = 'data.csv' bucket = 'luther-02' df = pd.read_csv(s3.open('{}/{}'.format(bucket, key),mode='rb')) # update dates to datetime objects df['Released'] = pd.to_datetime(df['Released']) df['Year'...
charts.ipynb
bryantbiggs/movie_torrents
mit
Number of Torrent Titles by Release Year
# number of titles per year in dataset df_yr = df['Year'].value_counts().reset_index() df_yr.columns = ['Year','Count'] # create plotly data trace trace = go.Bar(x=df_yr['Year'], y=df_yr['Count'], marker=dict(color=colors_dict['red'])) def bar_plot_data(_dataframe, _label, color): df_temp = _dataframe[_label].val...
charts.ipynb
bryantbiggs/movie_torrents
mit
Trim Dataset by Years of Interest/Relevance Due to the low number of titles for the years below 1995, these torrents were removed from the dataset. Also, since the current year (2016) is only partially completed, films released in 2016 were removed from the dataset as well.
def df_year_limit(start, stop, df): mask = (df['Year'] >= start) & (df['Year'] <= stop) df = df.loc[mask] return df # get count of records before trimming by year cutoff yr_before = len(df) print('{0} records in dataframe before trimming by year cutoff'.format(yr_before)) yr_start, yr_stop = (1995, 2015) ...
charts.ipynb
bryantbiggs/movie_torrents
mit
Quantity Genre Classifications
# split genre strings into a numpy array def split_to_array(ser): split_array = np.array(ser.strip().replace(',','').split(' ')) return pd.Series(split_array) # turn numpy array into count of genre occurances genres = df['Genre'].apply(split_to_array) genres = pd.Series(genres.values.ravel()).dropna() genres =...
charts.ipynb
bryantbiggs/movie_torrents
mit
Most Dominant Genre out of Genres Given per Title
def convert_frequency(ser, genres=genres): split_array = np.array(ser.strip().replace(',','').split(' ')) genre = genres.loc[split_array].argmax() return genre # add new column to dataframe classifying genre list as single genre of significance df['Genre_Single'] = df['Genre'].apply(convert_frequency) # l...
charts.ipynb
bryantbiggs/movie_torrents
mit
Dominant Genre Quantities per Year
def get_stackedBar_trace(x_category, y_counts, _name, ind): ''' x_category -- category from feature set y_counts -- count of x_category in feature set _name -- _name of x_category ind -- number indices for color list Return: Plotly data trace for bar chart ''' return go.Bar(x=x_category...
charts.ipynb
bryantbiggs/movie_torrents
mit
Remove Films Not Rated - PG-13, PG, G, or R
# get count of records before trimming by year cutoff rated_before = len(df) print('{0} records in dataframe before trimming by rating'.format(rated_before)) ratings = ['PG-13', 'PG', 'G', 'R'] df = df.loc[df['Rated'].isin(ratings)] rated_after = len(df) print('{0} entries lost ({1}%) due to limiting to only {2} rat...
charts.ipynb
bryantbiggs/movie_torrents
mit
Log Transform Scatter Matrix
df['Log_Prod_Bud'] = np.log(df['Prod_Budget']) df['Log_Runtime'] = np.log(df['Runtime']) df['Log_Ttl_Tor'] = np.log(df['Total_Torrents']) colors_scat = colors_lst[:-2][::-1] df_scat = df[['Log_Ttl_Tor', 'Log_Prod_Bud', 'Log_Runtime', 'Gen_Rat_Bud', 'Gen_Rat_Run', 'Gen_Sin']] fig = FF.create_scatterplotmatrix(df_scat,...
charts.ipynb
bryantbiggs/movie_torrents
mit
Drama Only
df_drama = df[df['Genre_Single'] == 'Drama'].reset_index() df_drama = df_drama.drop('index',axis=1) df_drama['Log_Bud_Rated'] = df['Log_Prod_Bud'].apply(lambda x: str(x)) + ' ' + df['Rated'] #df_scat = df_drama[['Log_Ttl_Tor', 'Log_Prod_Bud', 'Log_Runtime', 'Log_Bud_Rated', 'Gen_Sin']] df_scat = df_drama[['Total_Torre...
charts.ipynb
bryantbiggs/movie_torrents
mit
Log Transform
df.columns df_sub['log_budg']=np.log(df_sub.Prod_Budget) #df_sub['log_year']=np.log(df_sub.Year) #df_sub['log_run']=np.log(df_sub.Runtime) df_sub['log_tor']=np.log(df_sub.Total_Torrents) trans = df_sub[['log_budg', 'Year', 'log_tor']] plt.rcParams['figure.figsize'] = (15, 15) pd.tools.plotting.scatter_matrix(trans) ...
charts.ipynb
bryantbiggs/movie_torrents
mit
Streaming with tweepy The Twitter streaming API is used to download twitter messages in real time. We use the streaming API instead of the REST API because the REST API is used to pull data from Twitter, whereas the streaming API pushes messages to a persistent session. This allows the streaming API to download more data in real t...
# Tweet listener class which subclasses from tweepy.StreamListener class TweetListner(tweepy.StreamListener): """Twitter stream listener""" def __init__(self, csocket): self.clientSocket = csocket def dataProcessing(self, data): """Process the data, before sending to spark stream...
TweetAnalysis/Final/Q6/Dalon_4_RTD_MiniPro_Tweepy_Q6.ipynb
dalonlobo/GL-Mini-Projects
mit
Drawbacks of the twitter streaming API The major drawback of the Streaming API is that Twitter's Streaming API provides only a sample of the tweets that are occurring. The actual percentage of total tweets users receive with Twitter's Streaming API varies heavily based on the criteria users request and the current traffic. Stud...
if __name__ == "__main__": try: api, auth = connectToTwitter() # connecting to twitter # Global information is available by using 1 as the WOEID # woeid = getWOEIDForTrendsAvailable(api, "Worldwide") # get the woeid of the worldwide host = "localhost" port = 8500 ...
TweetAnalysis/Final/Q6/Dalon_4_RTD_MiniPro_Tweepy_Q6.ipynb
dalonlobo/GL-Mini-Projects
mit
SystemML Build information The following code shows information about the SystemML build installed in the environment.
from systemml import MLContext ml = MLContext(sc) print ("SystemML Built-Time:"+ ml.buildTime()) print(ml.info()) # Workaround for Python 2.7.13 to avoid certificate validation issue while downloading any file. import ssl try: _create_unverified_https_context = ssl._create_unverified_context except AttributeErro...
samples/jupyter-notebooks/Image_Classify_Using_VGG_19_Transfer_Learning.ipynb
dusenberrymw/incubator-systemml
apache-2.0
Download model, proto files and convert them to SystemML format. Download Caffe Model (VGG-19), proto files (deployer, network and solver) and label file. Convert the Caffe model into SystemML input format.
# Download caffemodel and proto files def downloadAndConvertModel(downloadDir='.', trained_vgg_weights='trained_vgg_weights'): # Step 1: Download the VGG-19 model and other files. import errno import os import urllib # Create directory, if exists don't error out try: os.makedirs...
samples/jupyter-notebooks/Image_Classify_Using_VGG_19_Transfer_Learning.ipynb
dusenberrymw/incubator-systemml
apache-2.0
Classify images This function classifies images specified through URLs. Input Parameters: urls: List of urls printTokKData (default False): Whether to print top K indices and probabilities topK: Top K elements to be displayed.
import numpy as np import urllib from systemml.mllearn import Caffe2DML import systemml as sml def classifyImages(urls,img_shape=(3, 224, 224), printTokKData=False, topK=5, downloadDir='.', trained_vgg_weights='trained_vgg_weights'): size = (img_shape[1], img_shape[2]) vgg = Caffe2DML(sqlCtx, solver=os....
samples/jupyter-notebooks/Image_Classify_Using_VGG_19_Transfer_Learning.ipynb
dusenberrymw/incubator-systemml
apache-2.0
Sample code to retrain the model and use it to classify images in two different ways There are a couple of parameters to set based on what you are looking for. 1. printTopKData (default False): If this parameter is set to True, then the top K results (probabilities and indices) will be displayed. 2. topK (default 5): H...
# ImageNet specific parameters img_shape = (3, 224, 224) # Setting other than current directory causes "network file not found" issue, as network file # location is defined in solver file which does not have a path, so it searches in current dir. downloadDir = '.' # /home/asurve/caffe_models' trained_vgg_weights = 't...
samples/jupyter-notebooks/Image_Classify_Using_VGG_19_Transfer_Learning.ipynb
dusenberrymw/incubator-systemml
apache-2.0
Let's open a project with a UNIQUE name. This will be the name used in the DB so make sure it is new and not too short. Opening a project will always create a non-existing project and reopen an existing one. You cannot choose between opening types as you would with a file. This is a precaution to not accidentally delete ...
# Project.delete('test') project = Project('test')
examples/rp/test_worker.ipynb
thempel/adaptivemd
lgpl-2.1
Again we name it pyemma for later reference. Add generators to project The next step is to add these to the project for later usage. We pick the .generators store and just add it. Consider a store to work like a set() in python. It contains objects only once and is not ordered. Therefore we need a name to find the objects ...
project.generators.add(engine) project.generators.add(modeller) project.files.one sc = WorkerScheduler(project.resource) sc.enter(project) t = engine.task_run_trajectory(project.new_trajectory(pdb_file, 100, restart=True)). extend(50).extend(100) sc(t) import radical.pilot as rp rp.TRANSFER sc.advance() for f in...
examples/rp/test_worker.ipynb
thempel/adaptivemd
lgpl-2.1
Simple Convolutional Neural Network for CIFAR-10 The CIFAR-10 problem is best solved using a Convolutional Neural Network (CNN). We can quickly start off by defining all of the classes and functions we will need in this example.
# Simple CNN model for CIFAR-10 import numpy from keras.datasets import cifar10 from keras.models import Sequential from keras.layers import Dense from keras.layers import Dropout from keras.layers import Flatten from keras.constraints import maxnorm from keras.optimizers import SGD from keras.layers.convolutional impo...
NEU/Sai_Raghuram_Kothapalli_DL/CIFAR_10-Keras.ipynb
nikbearbrown/Deep_Learning
mit
As is good practice, we next initialize the random number seed with a constant to ensure the results are reproducible.
# fix random seed for reproducibility seed = 7 numpy.random.seed(seed)
NEU/Sai_Raghuram_Kothapalli_DL/CIFAR_10-Keras.ipynb
nikbearbrown/Deep_Learning
mit
Next we can load the CIFAR-10 dataset.
# load data (X_train, y_train), (X_test, y_test) = cifar10.load_data()
NEU/Sai_Raghuram_Kothapalli_DL/CIFAR_10-Keras.ipynb
nikbearbrown/Deep_Learning
mit
The pixel values are in the range of 0 to 255 for each of the red, green and blue channels. It is good practice to work with normalized data. Because the input values are well understood, we can easily normalize to the range 0 to 1 by dividing each value by the maximum observation which is 255. Note, the data is loaded...
# normalize inputs from 0-255 to 0.0-1.0
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train = X_train / 255.0
X_test = X_test / 255.0
NEU/Sai_Raghuram_Kothapalli_DL/CIFAR_10-Keras.ipynb
nikbearbrown/Deep_Learning
mit
The output variables are defined as a vector of integers from 0 to 9, one per class. We can use a one hot encoding to transform them into a binary matrix in order to best model the classification problem. We know there are 10 classes for this problem, so we can expect the binary matrix to have a width of 10.
# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
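The same transformation can be sketched in plain NumPy, without the np_utils helper. This is just an illustration with made-up labels: each row of the identity matrix is already the one-hot vector for that class index.

```python
import numpy as np

# hypothetical integer labels, for illustration only
labels = np.array([0, 3, 9])
num_classes = 10

# index the identity matrix to pick out one-hot rows
one_hot = np.eye(num_classes)[labels]

print(one_hot.shape)  # (3, 10)
```

Each row sums to 1, with the single 1 sitting at the label's index.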
NEU/Sai_Raghuram_Kothapalli_DL/CIFAR_10-Keras.ipynb
nikbearbrown/Deep_Learning
mit
Let’s start off by defining a simple CNN structure as a baseline and evaluate how well it performs on the problem. We will use a structure with two convolutional layers followed by max pooling and a flattening out of the network to fully connected layers to make predictions. Our baseline network structure can be summar...
# Create the model
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(3, 32, 32), padding='same', activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same', kernel_constraint=maxnorm(3)))
model.add(MaxPooling2D(pool_size=(2, 2)))...
NEU/Sai_Raghuram_Kothapalli_DL/CIFAR_10-Keras.ipynb
nikbearbrown/Deep_Learning
mit
We can fit this model with 10 epochs and a batch size of 32. A small number of epochs was chosen to help keep this tutorial moving. Normally the number of epochs would be one or two orders of magnitude larger for this problem. Once the model is fit, we evaluate it on the test dataset and print out the classification ac...
# Fit the model
epochs = 10
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=epochs, batch_size=32)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
NEU/Sai_Raghuram_Kothapalli_DL/CIFAR_10-Keras.ipynb
nikbearbrown/Deep_Learning
mit
We can improve the accuracy significantly by creating a much deeper network. Larger Convolutional Neural Network for CIFAR-10 We have seen that a simple CNN performs poorly on this complex problem. In this section we look at scaling up the size and complexity of our model. Let’s design a deep version of the simple CNN ...
# Create the model
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(3, 32, 32), activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu', padding='s...
NEU/Sai_Raghuram_Kothapalli_DL/CIFAR_10-Keras.ipynb
nikbearbrown/Deep_Learning
mit
We can fit and evaluate this model using the same procedure as above and the same number of epochs, but with a larger batch size of 64, found through some minor experimentation.
numpy.random.seed(seed)
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=epochs, batch_size=64)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
NEU/Sai_Raghuram_Kothapalli_DL/CIFAR_10-Keras.ipynb
nikbearbrown/Deep_Learning
mit
This workbook shows an example derived from the EDA exercise in Chapter 2 of Doing Data Science, by O'Neil and Schutt.
clicks = Table.read_table("http://stat.columbia.edu/~rachel/datasets/nyt1.csv")
clicks
Clicks.ipynb
deculler/DataScienceTableDemos
bsd-2-clause
Well, half a million rows. That would be painful in Excel. Add a column of 1's so that a sum will count people.
age_upper_bounds = [18, 25, 35, 45, 55, 65]

def age_range(n):
    if n == 0:
        return '0'
    lower = 1
    for upper in age_upper_bounds:
        if lower <= n < upper:
            return str(lower) + '-' + str(upper-1)
        lower = upper
    return str(lower) + '+'

# a little test
np.unique([age_range(n) f...
Clicks.ipynb
deculler/DataScienceTableDemos
bsd-2-clause
Now we can group the table by Age Range and count how many clicks come from each range.
clicks_by_age = clicks.group('Age Range', sum)
clicks_by_age
clicks_by_age.select(['Age Range', 'Clicks sum', 'Impressions sum', 'Person sum']).barh('Age Range')
Clicks.ipynb
deculler/DataScienceTableDemos
bsd-2-clause
Now we can do some other interesting summaries of these categories
clicks_by_age['Gender Mix'] = clicks_by_age['Gender sum'] / clicks_by_age['Person sum']
clicks_by_age["CTR"] = clicks_by_age['Clicks sum'] / clicks_by_age['Impressions sum']
clicks_by_age.select(['Age Range', 'Person sum', 'Gender Mix', 'CTR'])
# Format some columns as percent with limited precision
clicks_by_age.set_...
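The click-through-rate (CTR) column above is a simple elementwise ratio. A minimal NumPy sketch with made-up group totals shows the arithmetic:

```python
import numpy as np

# hypothetical per-age-group totals, for illustration only
clicks_sum = np.array([100, 250])
impressions_sum = np.array([10000, 20000])

# CTR = clicks / impressions, computed elementwise per group
ctr = clicks_sum / impressions_sum

print(ctr)  # [0.01   0.0125]
```

So a group with 250 clicks out of 20,000 impressions has a CTR of 1.25%.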
Clicks.ipynb
deculler/DataScienceTableDemos
bsd-2-clause
We might want to do the click rate calculation a little more carefully. We don't care about clicks where there are zero impressions or missing age/gender information. So let's filter those out of our data set.
impressed = clicks.where(clicks['Age'] > 0).where('Impressions')
impressed
# Impressions by age and gender
impressed.pivot(rows='Gender', columns='Age Range', values='Impressions', collect=sum)
impressed.pivot("Age Range", "Gender", "Clicks", sum)
impressed.pivot_hist('Age Range', 'Impressions')
distributions = impre...
Clicks.ipynb
deculler/DataScienceTableDemos
bsd-2-clause
Group returns a new table. If we wanted to specify the formats on columns of this table, assign it to a name.
# How do gender and clicks vary with age?
gi = impressed.group('Age Range', np.mean).select(['Age Range', 'Gender mean', 'Clicks mean'])
gi.set_format(['Gender mean', 'Clicks mean'], PercentFormatter)
gi
Clicks.ipynb
deculler/DataScienceTableDemos
bsd-2-clause
Implement Preprocess Functions Normalize In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
from sklearn.preprocessing import MinMaxScaler

def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    # TODO: Implement Function
    # MinMaxScaler().fit_transform(x)...
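One way to think about the TODO is the standard min-max formula, (x - min) / (max - min). Here is a plain NumPy sketch (an illustration, not the sklearn-based solution the cell hints at), assuming the input can be converted to an array:

```python
import numpy as np

def normalize(x):
    """Scale values to the 0-1 range via min-max normalization (illustrative sketch)."""
    x = np.asarray(x, dtype=np.float64)
    return (x - x.min()) / (x.max() - x.min())

# hypothetical pixel values for illustration
print(normalize([0, 128, 255]))  # [0. 0.50196078 1.]
```

For image data whose values are known to span 0-255, dividing by 255.0 gives the same result more cheaply.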
image-classification/dlnd_image_classification.ipynb
kvr777/deep-learning
mit
One-hot encode Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 t...
from sklearn.preprocessing import LabelBinarizer

def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    # fit encoder to 10 dimensions
    encoder = LabelBinar...
image-classification/dlnd_image_classification.ipynb
kvr777/deep-learning
mit
Convolution and Max Pooling Layer Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling: * Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor. * Apply a convolution to x_tensor...
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple fo...
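To see what the max-pooling step does numerically, here is a tiny NumPy sketch of 2x2 pooling with stride 2 on a single 2-D channel (an illustration of the operation, not the TensorFlow implementation the exercise asks for; it assumes even dimensions):

```python
import numpy as np

def max_pool_2x2(a):
    """2x2 max pooling with stride 2 on a 2-D array (illustrative sketch)."""
    h, w = a.shape
    # group pixels into 2x2 blocks, then take the max within each block
    return a.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

patch = np.array([[1, 2, 5, 6],
                  [3, 4, 7, 8],
                  [9, 1, 2, 3],
                  [4, 5, 6, 7]])
print(max_pool_2x2(patch))
# [[4 8]
#  [9 7]]
```

Each output value is the maximum of one non-overlapping 2x2 block, which halves the spatial resolution.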
image-classification/dlnd_image_classification.ipynb
kvr777/deep-learning
mit
Fully-Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packag...
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output units that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_out...
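Underneath, a fully connected layer is just an affine transform (matrix multiply plus bias) followed by an activation. This NumPy sketch uses made-up shapes and weights to show the arithmetic:

```python
import numpy as np

# hypothetical batch of 2 inputs with 3 features each, mapping to 4 outputs
x = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 0.0]])
W = np.full((3, 4), 0.1)  # weight matrix: (in_features, num_outputs)
b = np.zeros(4)           # one bias per output unit

# fully connected layer: x @ W + b, then ReLU
out = np.maximum(x @ W + b, 0.0)

print(out.shape)  # (2, 4)
```

In the TensorFlow version the weights would be trainable variables initialized randomly rather than constants.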
image-classification/dlnd_image_classification.ipynb
kvr777/deep-learning
mit
Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. Note: Act...
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output units that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """...
image-classification/dlnd_image_classification.ipynb
kvr777/deep-learning
mit
Create Convolutional Model Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model: Apply 1, 2, or 3 Convolution and Max Pool layers Apply a Flatten Layer Apply 1, 2, or 3 Full...
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that holds the dropout keep probability.
    : return: Tensor that represents logits
    """
    # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
    ...
image-classification/dlnd_image_classification.ipynb
kvr777/deep-learning
mit
Show Stats Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy...
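The validation-accuracy number being printed is ultimately just "fraction of examples where the predicted class matches the true class". A NumPy sketch with made-up logits and one-hot labels shows the computation the accuracy tensor performs:

```python
import numpy as np

# hypothetical logits (one row per example) and matching one-hot labels
logits = np.array([[2.0, 0.1, 0.3],
                   [0.2, 0.5, 3.0],
                   [1.0, 2.0, 0.5]])
labels = np.array([[1, 0, 0],
                   [0, 0, 1],
                   [1, 0, 0]])

# predicted class = argmax of logits; accuracy = mean of correct predictions
accuracy = np.mean(np.argmax(logits, 1) == np.argmax(labels, 1))
print(accuracy)
```

Here the first two examples are classified correctly and the third is not, so the accuracy is 2/3.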
image-classification/dlnd_image_classification.ipynb
kvr777/deep-learning
mit