Now let's write our training data table to BigQuery for train and eval so that we can export to CSV for TensorFlow.
job_config = bigquery.QueryJobConfig()
csv_select_list = "med_sales_price_agg, labels_agg"
for step in ["train", "eval"]:
    if step == "train":
        selquery = "SELECT {} FROM ({}) WHERE {} < 80".format(
            csv_select_list, query_csv_sub_sequences, sampling_clause)
    else:
        selquery = "SELECT {} FROM ({}) WHERE {} >= 80".format(
            csv_select_list, query_csv_sub_sequences, sampling_clause)

    # Set the destination table
    table_name = "nyc_real_estate_{}".format(step)
    table_ref = client.dataset(sink_dataset_name).table(table_name)
    job_config.destination = table_ref
    job_config.write_disposition = "WRITE_TRUNCATE"

    # Start the query, passing in the extra configuration.
    query_job = client.query(
        query=selquery,
        # Location must match that of the dataset(s) referenced in the query
        # and of the destination table.
        location="US",
        job_config=job_config)  # API request - starts the query

    query_job.result()  # Waits for the query to finish
    print("Query results loaded to table {}".format(table_ref.path))
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Export BigQuery table to CSV in GCS.
dataset_ref = client.dataset(
    dataset_id=sink_dataset_name, project=client.project)

for step in ["train", "eval"]:
    destination_uri = "gs://{}/{}".format(
        BUCKET, "forecasting/nyc_real_estate/data/{}*.csv".format(step))
    table_name = "nyc_real_estate_{}".format(step)
    table_ref = dataset_ref.table(table_name)
    extract_job = client.extract_table(
        table_ref,
        destination_uri,
        # Location must match that of the source table.
        location="US",
    )  # API request
    extract_job.result()  # Waits for job to complete.

    print("Exported {}:{}.{} to {}".format(
        client.project, sink_dataset_name, table_name, destination_uri))

!gsutil -m cp gs://asl-testing-bucket/forecasting/nyc_real_estate/data/*.csv .
!head train*.csv
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Train TensorFlow on Google Cloud AI Platform.
import os

PROJECT = PROJECT  # REPLACE WITH YOUR PROJECT ID
BUCKET = BUCKET  # REPLACE WITH A BUCKET NAME
REGION = "us-central1"  # REPLACE WITH YOUR REGION e.g. us-central1

# Import os environment variables
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TF_VERSION"] = "1.13"
os.environ["SEQ_LEN"] = str(WINDOW_SIZE)

%%bash
OUTDIR=gs://$BUCKET/forecasting/nyc_real_estate/trained_model
JOBNAME=nyc_real_estate$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
    --region=$REGION \
    --module-name=trainer.task \
    --package-path=$PWD/tf_module/trainer \
    --job-dir=$OUTDIR \
    --staging-bucket=gs://$BUCKET \
    --scale-tier=STANDARD_1 \
    --runtime-version=$TF_VERSION \
    -- \
    --train_file_pattern="gs://asl-testing-bucket/forecasting/nyc_real_estate/data/train*.csv" \
    --eval_file_pattern="gs://asl-testing-bucket/forecasting/nyc_real_estate/data/eval*.csv" \
    --output_dir=$OUTDIR \
    --job-dir=./tmp \
    --seq_len=$SEQ_LEN \
    --train_batch_size=32 \
    --eval_batch_size=32 \
    --train_steps=1000 \
    --learning_rate=0.01 \
    --start_delay_secs=60 \
    --throttle_secs=60 \
    --lstm_hidden_units="32,16,8"
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section.) It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in an acceptable amount of time. In the interest of time, please refrain from running the get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands:

arrays = np.load('module-4-assignment-numpy-arrays.npz')
feature_matrix_train, sentiment_train = arrays['feature_matrix_train'], arrays['sentiment_train']
feature_matrix_valid, sentiment_valid = arrays['feature_matrix_valid'], arrays['sentiment_valid']

Building on the logistic regression with no L2 penalty assignment Let us now build on the Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as: $$ P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))}, $$ where the feature vector $h(\mathbf{x}_i)$ is given by the word counts of important_words in the review $\mathbf{x}_i$. We will use the same code as in that past assignment to make probability predictions, since this part is not affected by the L2 penalty. (Only the way in which the coefficients are learned is affected by the addition of a regularization term.)
def predict_probability(feature_matrix, coefficients):
    '''
    Produces a probabilistic estimate for P(y_i = +1 | x_i, w).
    The estimate ranges between 0 and 1.
    '''
    # Take dot product of feature_matrix and coefficients
    product = feature_matrix.dot(coefficients)
    # Compute P(y_i = +1 | x_i, w) using the link function
    predictions = 1 / (1 + np.exp(-product))
    return predictions
Classification/Week 2/Assignment 2/module-4-linear-classifier-regularization-assignment-blank.ipynb
rashikaranpuria/Machine-Learning-Specialization
mit
Adding L2 penalty Let us now work on extending logistic regression with L2 regularization. As discussed in the lectures, L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail. Recall from lecture and the previous assignment that for logistic regression without an L2 penalty, the derivative of the log likelihood function is: $$ \frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) $$ Adding L2 penalty to the derivative It takes only a small modification to add an L2 penalty. All terms indicated in red refer to terms that were added due to an L2 penalty. Recall from the lecture that the link function is still the sigmoid: $$ P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))}, $$ Adding the L2 penalty term, the per-coefficient derivative of the log likelihood becomes: $$ \frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j } $$ and for the intercept term, we have $$ \frac{\partial\ell}{\partial w_0} = \sum_{i=1}^N h_0(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) $$ Note: As we did in the Regression course, we do not apply the L2 penalty on the intercept. A large intercept does not necessarily indicate overfitting, because the intercept is not associated with any particular feature. Write a function that computes the derivative of the log likelihood with respect to a single coefficient $w_j$.
Unlike its counterpart in the last assignment, the function accepts five arguments:
* errors — vector containing $(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w}))$ for all $i$
* feature — vector containing $h_j(\mathbf{x}_i)$ for all $i$
* coefficient — the current value of coefficient $w_j$
* l2_penalty — the L2 penalty constant $\lambda$
* feature_is_constant — whether the $j$-th feature is constant or not
def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
    # Compute the dot product of errors and feature
    derivative = sum(feature * errors)
    # Add the L2 penalty term for any feature that isn't the intercept.
    if not feature_is_constant:
        derivative = derivative - (2 * l2_penalty * coefficient)
    return derivative
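As a quick sanity check (a toy example of my own, not part of the assignment), we can verify that the L2 term shifts the derivative by exactly $-2\lambda w_j$ and leaves the intercept untouched:

```python
import numpy as np

def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
    # Same logic as the function above: dot product minus the L2 term.
    derivative = np.dot(feature, errors)
    if not feature_is_constant:
        derivative -= 2 * l2_penalty * coefficient
    return derivative

# Toy data: three observations of a single feature.
errors = np.array([0.5, -0.2, 0.1])
feature = np.array([1.0, 2.0, 3.0])

plain = feature_derivative_with_L2(errors, feature, 0.7, l2_penalty=0.0, feature_is_constant=False)
penalized = feature_derivative_with_L2(errors, feature, 0.7, l2_penalty=3.0, feature_is_constant=False)
intercept = feature_derivative_with_L2(errors, feature, 0.7, l2_penalty=3.0, feature_is_constant=True)

print(plain)              # ~0.4: just the dot product
print(penalized - plain)  # ~-4.2 = -2 * 3.0 * 0.7
print(intercept)          # ~0.4: the intercept is never penalized
```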
Classification/Week 2/Assignment 2/module-4-linear-classifier-regularization-assignment-blank.ipynb
rashikaranpuria/Machine-Learning-Specialization
mit
Quiz question: Does the term with L2 regularization increase or decrease $\ell\ell(\mathbf{w})$? The logistic regression function looks almost like the one in the last assignment, with a minor modification to account for the L2 penalty. Fill in the code below to complete this modification.
def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):
    coefficients = np.array(initial_coefficients)  # make sure it's a numpy array
    for itr in xrange(max_iter):
        # Predict P(y_i = +1|x_i,w) using your predict_probability() function
        predictions = predict_probability(feature_matrix, coefficients)

        # Compute indicator value for (y_i = +1)
        indicator = (sentiment == +1)

        # Compute the errors as indicator - predictions
        errors = indicator - predictions
        for j in xrange(len(coefficients)):  # loop over each coefficient
            is_intercept = (j == 0)
            # Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
            # Compute the derivative for coefficients[j]. Save it in a variable called derivative
            derivative = feature_derivative_with_L2(errors, feature_matrix[:, j],
                                                    coefficients[j], l2_penalty, is_intercept)
            # Add the step size times the derivative to the current coefficient
            coefficients[j] = coefficients[j] + (step_size * derivative)

        # Checking whether log likelihood is increasing
        if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
                or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
            lp = compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty)
            print 'iteration %*d: log likelihood of observed labels = %.8f' % \
                (int(np.ceil(np.log10(max_iter))), itr, lp)
    return coefficients
Classification/Week 2/Assignment 2/module-4-linear-classifier-regularization-assignment-blank.ipynb
rashikaranpuria/Machine-Learning-Specialization
mit
Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words. Quiz Question. Which of the following is not listed in either positive_words or negative_words?
positive_words = table.topk('coefficients [L2=0]', 5, reverse=False)['word']
negative_words = table.topk('coefficients [L2=0]', 5, reverse=True)['word']
print positive_words
print negative_words
Classification/Week 2/Assignment 2/module-4-linear-classifier-regularization-assignment-blank.ipynb
rashikaranpuria/Machine-Learning-Specialization
mit
We have to convert our integer labels to categorical values in a format that Keras accepts.
y_train_k = to_categorical(y_train[:, np.newaxis])
y_test_k = to_categorical(y_test[:, np.newaxis])
y_critical_k = to_categorical(labels[critical][:, np.newaxis])
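Under the hood, to_categorical simply one-hot encodes the integer labels. A minimal numpy sketch of the same transformation (an illustration, not Keras's actual implementation):

```python
import numpy as np

def one_hot(y, num_classes=None):
    # Map a 1-D array of integer class labels to one-hot rows.
    y = np.asarray(y, dtype=int).ravel()
    if num_classes is None:
        num_classes = y.max() + 1
    out = np.zeros((y.size, num_classes))
    out[np.arange(y.size), y] = 1.0
    return out

print(one_hot([0, 1, 1, 0]))
# [[1. 0.]
#  [0. 1.]
#  [0. 1.]
#  [1. 0.]]
```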
doc/Programs/IsingModel/neuralnetIsing.ipynb
CompPhysics/MachineLearning
cc0-1.0
We now fit our model for $10$ epochs and validate on the test data.
history = clf.fit(
    X_train,
    y_train_k,
    validation_data=(X_test, y_test_k),
    epochs=10,
    batch_size=200,
    verbose=True
)
doc/Programs/IsingModel/neuralnetIsing.ipynb
CompPhysics/MachineLearning
cc0-1.0
We now evaluate the model.
train_accuracy = clf.evaluate(X_train, y_train_k, batch_size=200)[1]
test_accuracy = clf.evaluate(X_test, y_test_k, batch_size=200)[1]
critical_accuracy = clf.evaluate(data[critical], y_critical_k, batch_size=200)[1]

print("Accuracy on train data: {0}".format(train_accuracy))
print("Accuracy on test data: {0}".format(test_accuracy))
print("Accuracy on critical data: {0}".format(critical_accuracy))
doc/Programs/IsingModel/neuralnetIsing.ipynb
CompPhysics/MachineLearning
cc0-1.0
Then we plot the ROC curve for the three datasets.
fig = plt.figure(figsize=(20, 14))

for (_X, _y), label in zip(
        [(X_train, y_train_k),
         (X_test, y_test_k),
         (data[critical], y_critical_k)],
        ["Train", "Test", "Critical"]):
    proba = clf.predict(_X)
    fpr, tpr, _ = skm.roc_curve(_y[:, 1], proba[:, 1])
    roc_auc = skm.auc(fpr, tpr)
    print("Keras AUC ({0}): {1}".format(label, roc_auc))
    plt.plot(fpr, tpr, label="{0} (AUC = {1})".format(label, roc_auc), linewidth=4.0)

plt.plot([0, 1], [0, 1], "--", label="Guessing (AUC = 0.5)", linewidth=4.0)
plt.title(r"The ROC curve for Keras", fontsize=18)
plt.xlabel(r"False positive rate", fontsize=18)
plt.ylabel(r"True positive rate", fontsize=18)
plt.axis([-0.01, 1.01, -0.01, 1.01])
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.legend(loc="best", fontsize=18)
plt.show()
doc/Programs/IsingModel/neuralnetIsing.ipynb
CompPhysics/MachineLearning
cc0-1.0
Basic Math Operations
print(3 + 5)
print(3 - 5)
print(3 * 5)
print(3 ** 5)

# Observation: this code gives different results for Python 2 and Python 3
# because of the behaviour of the division operator
print(3 / 5.0)
print(3 / 5)

# For compatibility, make sure to use the following statement
from __future__ import division
print(3 / 5.0)
print(3 / 5)
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Data Structures
countries = ['Portugal', 'Spain', 'United Kingdom']
print(countries)
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Exercise 0.1 Use L[i:j] to return the countries in the Iberian Peninsula.
countries[0:2]
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Loops and Indentation
i = 2
while i < 10:
    print(i)
    i += 2

for i in range(2, 10, 2):
    print(i)

a = 1
while a <= 3:
    print(a)
    a += 1
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Exercise 0.2 Can you then predict the output of the following code?:
a = 1
while a <= 3:
    print(a)
    a += 1
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Control Flow
hour = 16
if hour < 12:
    print('Good morning!')
elif hour >= 12 and hour < 20:
    print('Good afternoon!')
else:
    print('Good evening!')
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Functions
def greet(hour):
    if hour < 12:
        print('Good morning!')
    elif hour >= 12 and hour < 20:
        print('Good afternoon!')
    else:
        print('Good evening!')
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Exercise 0.3 Note that the previous code allows the hour to be less than 0 or more than 24. Change the code in order to indicate that the hour given as input is invalid. Your output should be something like:

greet(50)
Invalid hour: it should be between 0 and 24.
greet(-5)
Invalid hour: it should be between 0 and 24.
greet(50)
greet(-5)
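One possible solution (a sketch consistent with the exercise's expected output):

```python
def greet(hour):
    # Reject out-of-range hours before classifying the time of day.
    if hour < 0 or hour > 24:
        print('Invalid hour: it should be between 0 and 24.')
    elif hour < 12:
        print('Good morning!')
    elif hour >= 12 and hour < 20:
        print('Good afternoon!')
    else:
        print('Good evening!')

greet(50)   # Invalid hour: it should be between 0 and 24.
greet(-5)   # Invalid hour: it should be between 0 and 24.
```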
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Profiling
%prun greet(22)
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Debugging in Python
def greet2(hour):
    if hour < 12:
        print('Good morning!')
    elif hour >= 12 and hour < 20:
        print('Good afternoon!')
    else:
        import pdb; pdb.set_trace()
        print('Good evening!')

# try:
greet2(22)
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Exceptions

For a complete list of built-in exceptions, see http://docs.python.org/2/library/exceptions.html
raise ValueError("Invalid input value.")

while True:
    try:
        x = int(input("Please enter a number: "))
        break
    except ValueError:
        print("Oops! That was no valid number. Try again...")
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Extending basic Functionalities with Modules
import numpy as np
np.var?
np.random.normal?
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Organizing your Code with your own modules See details in guide

Matplotlib – Plotting in Python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

X = np.linspace(-4, 4, 1000)
plt.plot(X, X**2 * np.cos(X**2))
plt.savefig("simple.pdf")
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Exercise 0.5 Try running the following on Jupyter, which will introduce you to some of the basic numeric and plotting operations.
# This will import the numpy library
# and give it the np abbreviation
import numpy as np

# This will import the plotting library
import matplotlib.pyplot as plt

# Linspace will return 1000 points,
# evenly spaced between -4 and +4
X = np.linspace(-4, 4, 1000)

# Y[i] = X[i]**2
Y = X**2

# Plot using a red line ('r')
plt.plot(X, Y, 'r')

# arange returns integers ranging from -4 to +4
# (the upper argument is excluded!)
Ints = np.arange(-4, 5)

# We plot these on top of the previous plot
# using blue circles (o means a little circle)
plt.plot(Ints, Ints**2, 'bo')

# You may notice that the plot is tight around the line
# Set the display limits to see better
plt.xlim(-4.5, 4.5)
plt.ylim(-1, 17)
plt.show()

import matplotlib.pyplot as plt
import numpy as np

X = np.linspace(0, 4 * np.pi, 1000)
C = np.cos(X)
S = np.sin(X)

plt.plot(X, C)
plt.plot(X, S)
plt.show()
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Exercise 0.6 Run the following example and lookup the ptp function/method (use the ? functionality in Jupyter)
A = np.arange(100)

# These two lines do exactly the same thing
print(np.mean(A))
print(A.mean())

np.ptp?
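ptp stands for "peak to peak": it returns the range of the array, i.e. the maximum minus the minimum. A quick check:

```python
import numpy as np

A = np.arange(100)
# np.ptp(A) is equivalent to A.max() - A.min()
print(np.ptp(A))          # 99
print(A.max() - A.min())  # 99
```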
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Exercise 0.7 Consider the following approximation to compute an integral \begin{equation} \int_0^1 f(x) dx \approx \sum_{i=0}^{999} \frac{f(i/1000)}{1000} \end{equation} Use numpy to implement this for $f(x) = x^2$. You should not need to use any loops. Note that integer division in Python 2.x returns the floor (use floats – e.g. 5.0/2.0 – to obtain rationals). The exact value is 1/3. How close is the approximation?
def f(x):
    return x**2

sum([f(x * 1. / 1000) / 1000 for x in range(0, 1000)])
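The list comprehension above still loops in Python; a fully vectorized numpy version (a sketch) computes the same left-endpoint Riemann sum without any explicit loop:

```python
import numpy as np

# Sample f(x) = x**2 at x = 0/1000, 1/1000, ..., 999/1000
x = np.arange(1000) / 1000.0
approx = np.sum(x ** 2) / 1000.0

print(approx)                 # ~0.33283, close to the exact value 1/3
print(abs(approx - 1 / 3.0))  # error on the order of 5e-4
```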
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Exercise 0.8 In the rest of the school we will represent both matrices and vectors as numpy arrays. You can create arrays in different ways, one possible way is to create an array of zeros.
import numpy as np

m = 3
n = 2
a = np.zeros([m, n])
print(a)
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
You can check the shape and the data type of your array using the following commands:
print(a.shape)
print(a.dtype.name)
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
This shows you that “a” is a 3×2 array of type float64. By default, arrays contain 64-bit floating point numbers. You can specify the particular array type by using the keyword dtype.
a = np.zeros([m, n], dtype=int)
print(a.dtype)
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
You can also create arrays from lists of numbers:
a = np.array([[2, 3], [3, 4]])
print(a)
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Exercise 0.9 You can multiply two matrices by looping over both indexes and multiplying the individual entries.
a = np.array([[2, 3], [3, 4]])
b = np.array([[1, 1], [1, 1]])

a_dim1, a_dim2 = a.shape
b_dim1, b_dim2 = b.shape

c = np.zeros([a_dim1, b_dim2])
for i in range(a_dim1):
    for j in range(b_dim2):
        for k in range(a_dim2):
            c[i, j] += a[i, k] * b[k, j]

print(c)
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
This is, however, cumbersome and inefficient. Numpy supports matrix multiplication with the dot function:
d = np.dot(a, b)
print(d)

a = np.array([1, 2])
b = np.array([1, 1])
np.dot(a, b)
np.outer(a, b)

I = np.eye(2)
x = np.array([2.3, 3.4])
print(I)
print(np.dot(I, x))

A = np.array([[1, 2],
              [3, 4]])
print(A)
print(A.T)
labs/notebooks/basic_tutorials/python_basics.ipynb
LxMLS/lxmls-toolkit
mit
Let's build a pricer for a bond of our own invention. The bond is worth 0 until 1.5 year fractions have passed since the signing date. Once that threshold is crossed, the bond pays 5% more than the strike price.
from datetime import date

def newbond(strike, signed, time, daycount):
    signed = date(*map(int, signed.split('-')))
    time = date(*map(int, time.split('-')))
    yearfrac = daycount(signed, time)
    if yearfrac > 1.5:
        return (1 + 0.05) * strike
    else:
        return 0.0
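To see the payoff logic in isolation, we can exercise newbond with a simple stand-in day-count function (an assumption for illustration only; the notebook itself uses mastr's DayCounter):

```python
from datetime import date

def newbond(strike, signed, time, daycount):
    # Same pricer as above: worth 0 until 1.5 year fractions after signing,
    # then strike plus 5%.
    signed = date(*map(int, signed.split('-')))
    time = date(*map(int, time.split('-')))
    yearfrac = daycount(signed, time)
    if yearfrac > 1.5:
        return (1 + 0.05) * strike
    return 0.0

# Stand-in for DayCounter('actual/360'): actual days divided by 360.
def actual_360(start, end):
    return (end - start).days / 360.0

print(newbond(100, '2016-9-7', '2017-9-10', actual_360))  # ~1.02 year fractions -> 0.0
print(newbond(100, '2016-9-7', '2018-9-10', actual_360))  # ~2.04 year fractions -> 105.0
```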
curso/7-Mastr.ipynb
ekergy/jupyter_notebooks
gpl-3.0
The best part is that we can now try it out, write unit tests... From mastr we import the object that counts year fractions.
from mastr.bootstrapping.daycount import DayCounter

value = newbond(100, '2016-9-7', '2017-9-10', DayCounter('actual/360'))
print('2017-9-10', value)

value = newbond(100, '2016-9-7', '2018-9-10', DayCounter('actual/360'))
print('2018-9-10', value)
curso/7-Mastr.ipynb
ekergy/jupyter_notebooks
gpl-3.0
But mastr is designed to execute IDN, the scripts that describe portfolios and scenarios.
sandbox.add_pricer(newbond)
sandbox.add_object(DayCounter)

%cat data/script.json

with open('data/script.json') as f:
    results = sandbox.eval(f.read())
print(results)
curso/7-Mastr.ipynb
ekergy/jupyter_notebooks
gpl-3.0
The IDN interpreter is built to evaluate the portfolio under multiple scenarios. In this case, we will evaluate several time scenarios.
%cat data/scriptdata.json
%cat data/data.json
curso/7-Mastr.ipynb
ekergy/jupyter_notebooks
gpl-3.0
This new JSON file, which contains the data for the different scenarios, is simply an additional argument to the sandbox.
with open('data/scriptdata.json') as f:
    with open('data/data.json') as g:
        results = sandbox.eval(f.read(), g.read())
print(results)
curso/7-Mastr.ipynb
ekergy/jupyter_notebooks
gpl-3.0
Finally, we can use the plotting capabilities to analyze the results and the evolution of this bond's value over the given dates.
import matplotlib.pyplot as plt
import json
%matplotlib notebook

dates = [
    date(2016, 9, 10),
    date(2016, 12, 10),
    date(2017, 9, 10),
    date(2018, 9, 10),
    date(2019, 9, 10)
]

fig1 = plt.figure(1)
ax = fig1.add_subplot(1, 1, 1)
ax.plot(dates, [r['eval1'] for r in results])
plt.setp(ax.get_xticklabels(), rotation=30)
curso/7-Mastr.ipynb
ekergy/jupyter_notebooks
gpl-3.0
Below I'm running images through the VGG network in batches. Exercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).
batch_size = 100
codes_list = []
labels = []
batch = []
codes = None

with tf.Session() as sess:
    # TODO: Build the vgg network here
    vgg = vgg16.Vgg16()
    input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
    with tf.name_scope("content_vgg"):
        vgg.build(input_)

    for each in classes:
        print("Starting {} images".format(each))
        class_path = data_dir + each
        files = os.listdir(class_path)
        for ii, file in enumerate(files, 1):
            # Add images to the current batch
            # utils.load_image crops the input images for us, from the center
            img = utils.load_image(os.path.join(class_path, file))
            batch.append(img.reshape((1, 224, 224, 3)))
            labels.append(each)

            # Running the batch through the network to get the codes
            if ii % batch_size == 0 or ii == len(files):
                # Image batch to pass to VGG network
                images = np.concatenate(batch)

                # TODO: Get the values from the relu6 layer of the VGG network
                feed_dict = {input_: images}
                codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)

                # Here I'm building an array of the codes
                if codes is None:
                    codes = codes_batch
                else:
                    codes = np.concatenate((codes, codes_batch))

                # Reset to start building the next batch
                batch = []
                print('{} images processed'.format(ii))

# write codes to file (binary mode, since ndarray.tofile writes raw bytes)
with open('codes', 'wb') as f:
    codes.tofile(f)

# write labels to file
import csv
with open('labels', 'w') as f:
    writer = csv.writer(f, delimiter='\n')
    writer.writerow(labels)
transfer-learning/.ipynb_checkpoints/Transfer_Learning-checkpoint.ipynb
mdiaz236/DeepLearningFoundations
mit
Instead of estimating the expected reward from selecting a particular arm, we may only care about the relative preference of one arm to another.
n_arms = 10
bandit = bd.GaussianBandit(n_arms, mu=4)
n_trials = 1000
n_experiments = 500
notebooks/Stochastic Bandits - Preference Estimation.ipynb
bgalbraith/bandits
apache-2.0
Softmax Preference learning uses a softmax-based policy: the action preference estimates are converted to a probability distribution using the softmax function, which is then sampled to produce the chosen arm.
policy = bd.SoftmaxPolicy()
agents = [
    bd.GradientAgent(bandit, policy, alpha=0.1),
    bd.GradientAgent(bandit, policy, alpha=0.4),
    bd.GradientAgent(bandit, policy, alpha=0.1, baseline=False),
    bd.GradientAgent(bandit, policy, alpha=0.4, baseline=False)
]
env = bd.Environment(bandit, agents, 'Gradient Agents')
scores, optimal = env.run(n_trials, n_experiments)
env.plot_results(scores, optimal)
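The softmax conversion at the heart of this policy can be sketched as follows (a standalone illustration, not the bandits library's code):

```python
import numpy as np

def softmax(preferences):
    # Subtract the max before exponentiating for numerical stability;
    # this does not change the resulting distribution.
    z = np.exp(preferences - np.max(preferences))
    return z / z.sum()

prefs = np.array([1.0, 2.0, 0.5])
probs = softmax(prefs)
print(probs, probs.sum())  # probabilities sum to 1; arm 1 is most likely

# Sample an arm from the resulting distribution.
arm = np.random.choice(len(prefs), p=probs)
```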
notebooks/Stochastic Bandits - Preference Estimation.ipynb
bgalbraith/bandits
apache-2.0
Dataset Parameters Let's create the ParameterSet which would be added to the Bundle when calling add_dataset. Later we'll call add_dataset, which will create and attach this ParameterSet for us.
ps, constraints = phoebe.dataset.etv(component='mycomponent')
print ps
2.2/tutorials/ETV.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Currently, none of the available etv methods actually compute fluxes. But if one is added that computes a light-curve and actually finds the time of mid-eclipse, then the passband-dependent parameters will be added here. For information on these passband-dependent parameters, see the section on the lc dataset. Ns
print ps['Ns']
2.2/tutorials/ETV.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
time_ephems NOTE: this parameter will be constrained when added through add_dataset
print ps['time_ephems']
2.2/tutorials/ETV.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
time_ecls
print ps['time_ecls']
2.2/tutorials/ETV.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
etvs NOTE: this parameter will be constrained when added through add_dataset
print ps['etvs']
2.2/tutorials/ETV.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Compute Options Let's look at the compute options (for the default PHOEBE 2 backend) that relate to the ETV dataset. Other compute options are covered elsewhere: * parameters related to dynamics are explained in the section on the orb dataset
ps_compute = phoebe.compute.phoebe()
print ps_compute
2.2/tutorials/ETV.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
etv_method
print ps_compute['etv_method']
2.2/tutorials/ETV.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
etv_tol
print ps_compute['etv_tol']
2.2/tutorials/ETV.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Synthetics
b.add_dataset('etv', Ns=np.linspace(0, 10, 11), dataset='etv01')
b.add_compute()
b.run_compute()

b['etv@model'].twigs

print b['time_ephems@primary@etv@model']
print b['time_ecls@primary@etv@model']
print b['etvs@primary@etv@model']
2.2/tutorials/ETV.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Plotting By default, ETV datasets plot as etv vs time_ephem. Of course, a simple binary with no companion or apsidal motion won't show much of a signal (this is essentially flat with some noise). To see more ETV examples see:
* Apsidal Motion
* Minimal Hierarchical Triple
* LTTE ETVs in a Hierarchical Triple
axs, artists = b['etv@model'].plot()
2.2/tutorials/ETV.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Alternatively, especially when overplotting with a light curve, it's sometimes handy to just plot ticks at each of the eclipse times. This can easily be done by passing a single value for 'y'. For other examples with light curves as well see:
* Apsidal Motion
* LTTE ETVs in a Hierarchical Triple
axs, artists = b['etv@model'].plot(x='time_ecls', y=2)
2.2/tutorials/ETV.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Training the classifier using majority voting Now we use the cleaned and conditioned training set to create a facies classifier. As mentioned above, we will use a type of machine learning model known as majority voting. We train four base classifiers for the vote: k-nearest neighbors, random forest, logistic regression, and gradient boosting.
from sklearn import neighbors

clf = neighbors.KNeighborsClassifier(n_neighbors=20, weights='distance',
                                     algorithm='kd_tree', leaf_size=30,
                                     metric='minkowski', p=1)
clf.fit(X_train, y_train)
predicted_labels = clf.predict(X_test)
SHandPR/MajorityVoting.ipynb
seg/2016-ml-contest
apache-2.0
As noted above, the boundaries between the facies classes are not all sharp, and some of them blend into one another. The error within these 'adjacent facies' can also be calculated. We define an array to represent the facies adjacent to each other. For facies label i, adjacent_facies[i] is an array of the adjacent facies labels.
adjacent_facies = np.array([[1], [0, 2], [1], [4], [3, 5], [4, 6, 7], [5, 7], [5, 6, 8], [6, 7]])

def accuracy_adjacent(conf, adjacent_facies):
    nb_classes = conf.shape[0]
    total_correct = 0.
    for i in np.arange(0, nb_classes):
        total_correct += conf[i][i]
        for j in adjacent_facies[i]:
            total_correct += conf[i][j]
    return total_correct / sum(sum(conf))

print('Facies classification accuracy = %f' % accuracy(conf))
print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(conf, adjacent_facies))

# Now do random forest
from sklearn.ensemble import RandomForestClassifier

RFC = RandomForestClassifier(n_estimators=200, criterion='gini',
                             max_features='auto', max_depth=None,
                             min_samples_split=7, min_samples_leaf=1,
                             min_weight_fraction_leaf=0, max_leaf_nodes=None,
                             min_impurity_split=1e-07, bootstrap=True,
                             oob_score=False, random_state=None, verbose=0,
                             warm_start=False, class_weight=None)
# n_estimators=150,
# min_samples_leaf=50, class_weight="balanced", oob_score=True, random_state=50
RFC.fit(X_train, y_train)
rfpredicted_labels = RFC.predict(X_test)
RFconf = confusion_matrix(y_test, rfpredicted_labels)
display_cm(RFconf, facies_labels, hide_zeros=True)
print('Facies classification accuracy = %f' % accuracy(RFconf))
print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(RFconf, adjacent_facies))

# Now do gradient boosting
seed = 123
np.random.seed(seed)
from sklearn.ensemble import GradientBoostingClassifier

gbModel = GradientBoostingClassifier(loss='deviance', learning_rate=0.1,
                                     n_estimators=200, max_depth=2,
                                     min_samples_split=25, min_samples_leaf=5,
                                     max_features=None, max_leaf_nodes=None,
                                     random_state=seed, verbose=0)
gbModel.fit(X_train, y_train)
gbpredicted_labels = gbModel.predict(X_test)
gbconf = confusion_matrix(y_test, gbpredicted_labels)
display_cm(gbconf, facies_labels, hide_zeros=True)
print('Facies classification accuracy = %f' % accuracy(gbconf))
print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(gbconf, adjacent_facies))

# Now do logistic regression
from sklearn import linear_model

lgr = linear_model.LogisticRegression(penalty='l2', dual=False, C=1e6,
                                      fit_intercept=True, intercept_scaling=1,
                                      class_weight=None, max_iter=100,
                                      random_state=seed, solver='newton-cg',
                                      tol=1e-04, multi_class='multinomial',
                                      warm_start=False, verbose=0)
# class_weight='balanced', multi_class='ovr', solver='sag', max_iter=1000, random_state=40, C=1e5
lgr.fit(X_train, y_train)
lgrpredicted_labels = lgr.predict(X_test)
lgrconf = confusion_matrix(y_test, lgrpredicted_labels)
display_cm(lgrconf, facies_labels, hide_zeros=True)
print('Facies classification accuracy = %f' % accuracy(lgrconf))
print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(lgrconf, adjacent_facies))
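A toy confusion matrix (hypothetical numbers, for illustration only) shows how accuracy_adjacent credits misclassifications that land in an adjacent class:

```python
import numpy as np

def accuracy_adjacent(conf, adjacent_facies):
    # Count diagonal hits plus misclassifications into an adjacent class.
    nb_classes = conf.shape[0]
    total_correct = 0.
    for i in np.arange(0, nb_classes):
        total_correct += conf[i][i]
        for j in adjacent_facies[i]:
            total_correct += conf[i][j]
    return total_correct / conf.sum()

# Three classes arranged in a line: 0-1-2, so 0 and 2 are NOT adjacent.
adjacent_toy = [np.array([1]), np.array([0, 2]), np.array([1])]
conf = np.array([[5, 1, 2],
                 [0, 6, 1],
                 [1, 0, 4]])

plain = np.trace(conf) / conf.sum()          # 15/20 = 0.75
adj = accuracy_adjacent(conf, adjacent_toy)  # (15 + 2)/20 = 0.85
print(plain, adj)
```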
SHandPR/MajorityVoting.ipynb
seg/2016-ml-contest
apache-2.0
Using the voting classifier The voting classifier is now used to combine the four models and produce the final classification.
from sklearn.ensemble import VotingClassifier

vtclf = VotingClassifier(estimators=[('KNN', clf), ('RFC', RFC),
                                     ('GBM', gbModel), ('LR', lgr)],
                         voting='hard', weights=[2, 2, 1, 1])
vtclf.fit(X_train, y_train)
vtclfpredicted_labels = vtclf.predict(X_test)
vtclfconf = confusion_matrix(y_test, vtclfpredicted_labels)
display_cm(vtclfconf, facies_labels, hide_zeros=True)
print('Facies classification accuracy = %f' % accuracy(vtclfconf))
print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(vtclfconf, adjacent_facies))
SHandPR/MajorityVoting.ipynb
seg/2016-ml-contest
apache-2.0
Reading
import csv import sys with open('data.csv', 'rt') as f: reader = csv.reader(f) for row in reader: print(row)
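Besides csv.reader, the stdlib also provides csv.DictReader, which maps each row to the header names; a small in-memory example (the field names here are made up for illustration):

```python
import csv
import io

data = "name,age\nAda,36\nAlan,41\n"
# DictReader uses the first row as field names and yields one mapping per row.
rows = list(csv.DictReader(io.StringIO(data)))
```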
DataPersistence/csv.ipynb
gaufung/PythonStandardLibrary
mit
Dialect
import csv print(csv.list_dialects())
DataPersistence/csv.ipynb
gaufung/PythonStandardLibrary
mit
Creating Dialect
import csv csv.register_dialect('pipes', delimiter='|') with open('testdata.pipes', 'r') as f: reader = csv.reader(f, dialect='pipes') for row in reader: print(row)
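Because the cell above assumes testdata.pipes already exists, here is a self-contained variant that registers a pipe-delimited dialect, writes with it, and reads it back (using an in-memory buffer; the dialect name is arbitrary):

```python
import csv
import io

csv.register_dialect('pipes_demo', delimiter='|')

buf = io.StringIO()
writer = csv.writer(buf, dialect='pipes_demo')
writer.writerow(['Title', 'Year'])
writer.writerow(['Heat', '1995'])

buf.seek(0)
# The same dialect is used to parse the pipe-delimited rows back out.
rows = list(csv.reader(buf, dialect='pipes_demo'))
```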
DataPersistence/csv.ipynb
gaufung/PythonStandardLibrary
mit
Plotly Setup
# plotly validate with credentials with open('../_credentials/plotly.txt', 'r') as infile: user, pw = infile.read().strip().split(', ') plotly.tools.set_credentials_file(username=user, api_key=pw) text_color = 'rgb(107, 107, 107)' colors_dict = {'grey':'rgb(189, 195, 199)', 'aqua':'rgb( 54, 215, 183)', 'navy':'rgb( 31, 58, 147)', 'purple':'rgb(142, 68, 173)', 'blue':'rgb( 25, 181, 254)', 'green':'rgb( 46, 204, 113)', 'yellow':'rgb(253, 231, 76)', 'orange':'rgb(250, 121, 33)', 'red':'rgb(242, 38, 19)'} colors_lst = [colors_dict['yellow'], colors_dict['orange'], colors_dict['red'], colors_dict['green'], colors_dict['blue'], colors_dict['purple'], colors_dict['navy'], colors_dict['aqua'], colors_dict['grey']]
charts.ipynb
bryantbiggs/movie_torrents
mit
Load Cleaned Data from S3
# aws keys stored in ini file in same path os.environ['AWS_CONFIG_FILE'] = 'aws_config.ini' s3 = S3FileSystem(anon=False) key = 'data.csv' bucket = 'luther-02' df = pd.read_csv(s3.open('{}/{}'.format(bucket, key),mode='rb')) # update dates to datetime objects df['Released'] = pd.to_datetime(df['Released']) df['Year'] = pd.DatetimeIndex(df['Released']).year df['Year_Int'] = pd.to_numeric(df['Year']) df['Month'] = pd.DatetimeIndex(df['Released']).month # year extremities yr_start = df['Year'].min(axis=0) yr_stop = df['Year'].max(axis=0)
charts.ipynb
bryantbiggs/movie_torrents
mit
Number of Torrent Titles by Release Year
# number of titles per year in dataset df_yr = df['Year'].value_counts().reset_index() df_yr.columns = ['Year','Count'] # create plotly data trace trace = go.Bar(x=df_yr['Year'], y=df_yr['Count'], marker=dict(color=colors_dict['red'])) def bar_plot_data(_dataframe, _label, color): df_temp = _dataframe[_label].value_counts().reset_index() df_temp.columns = [_label,'Count'] # create plotly data trace trace = go.Bar(x=df_temp[_label], y=df_temp['Count'], marker=dict(color=colors_dict[color])) data = [trace] layout = go.Layout( title='Quantity of Torrent Titles by Year Released ({0}-{1})'.format(yr_start, yr_stop), xaxis=dict( title='Release Year', tickfont=dict(size=14, color=text_color)), yaxis=dict( title='Number of Titles', titlefont=dict(size=16, color=text_color), tickfont=dict(size=14, color=text_color)), barmode='group', bargap=0.15, bargroupgap=0.1) fig = go.Figure(data=data, layout=layout) return py.iplot(fig, filename='luther_titles_annually({0}-{1})'.format(yr_start, yr_stop)) bar_plot_data(df, 'Year', 'red') data = [trace] layout = go.Layout( title='Quantity of Torrent Titles by Year Released ({0}-{1})'.format(yr_start, yr_stop), xaxis=dict( title='Release Year', tickfont=dict(size=14, color=text_color)), yaxis=dict( title='Number of Titles', titlefont=dict(size=16, color=text_color), tickfont=dict(size=14, color=text_color)), barmode='group', bargap=0.15, bargroupgap=0.1) fig = go.Figure(data=data, layout=layout) py.iplot(fig, filename='luther_titles_annually({0}-{1})'.format(yr_start, yr_stop))
charts.ipynb
bryantbiggs/movie_torrents
mit
Trim Dataset by Years of Interest/Relevance Due to the low number of titles for the years before 1995, those torrents were removed from the dataset. Also, since the current year (2016) is only partially complete, films released in 2016 were removed from the dataset as well.
def df_year_limit(start, stop, df): mask = (df['Year'] >= start) & (df['Year'] <= stop) df = df.loc[mask] return df # get count of records before trimming by year cutoff yr_before = len(df) print('{0} records in dataframe before trimming by year cutoff'.format(yr_before)) yr_start, yr_stop = (1995, 2015) # trim by year cutoff df = df_year_limit(yr_start, yr_stop, df) yr_after = len(df) print('{0} entries lost ({1}%) due to date cutoff between {2} and {3}'.format(yr_before-yr_after, round((yr_before - yr_after)/yr_before *100, 2), yr_start, yr_stop)) # number of titles per year in dataset df_yr = df['Year'].value_counts().reset_index() df_yr.columns = ['Year','Count'] trace = go.Bar(x=df_yr['Year'], y=df_yr['Count'], marker=dict(color=colors_dict['blue'])) data = [trace] layout = go.Layout( title='Number of Torrent Titles by Release Year ({0}-{1})'.format(yr_start, yr_stop), xaxis=dict( title='Release Year', tickfont=dict(size=14,color=text_color)), yaxis=dict( title='Number of Titles', titlefont=dict(size=16, color=text_color), tickfont=dict(size=14, color=text_color)), barmode='group', bargap=0.15, bargroupgap=0.1) fig = go.Figure(data=data, layout=layout) py.iplot(fig, filename='luther_films_annually({0}-{1})'.format(yr_start, yr_stop))
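The year-window mask used above can be checked on a tiny frame (illustrative values only):

```python
import pandas as pd

df_demo = pd.DataFrame({'Year': [1990, 1995, 2005, 2015, 2016]})

def df_year_limit(start, stop, df):
    # Keep rows whose Year falls inside the closed interval [start, stop].
    mask = (df['Year'] >= start) & (df['Year'] <= stop)
    return df.loc[mask]

trimmed = df_year_limit(1995, 2015, df_demo)  # drops 1990 and 2016
```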
charts.ipynb
bryantbiggs/movie_torrents
mit
Quantity of Genre Classifications
# split genre strings into a numpy array def split_to_array(ser): split_array = np.array(ser.strip().replace(',','').split(' ')) return pd.Series(split_array) # turn numpy array into count of genre occurances genres = df['Genre'].apply(split_to_array) genres = pd.Series(genres.values.ravel()).dropna() genres = genres.value_counts().sort_values(ascending=False) # convert series to dataframe for plotting genre_ser = genres.reset_index() genre_ser.columns = ['Genre', 'Count'] # bar chart of each genre in dataset trace = go.Bar(x=genre_ser['Genre'], y=genre_ser['Count'], marker=dict(color=colors_dict['yellow'])) data = [trace] layout = go.Layout( title='Count of Genre Classifications ({0}-{1})'.format(yr_start, yr_stop), xaxis=dict( title='Genre', tickfont=dict(size=14, color=text_color)), yaxis=dict( title='Number of Classifications', titlefont=dict(size=16, color=text_color), tickfont=dict(size=14, color=text_color)), barmode='group', bargap=0.15, bargroupgap=0.1) fig = go.Figure(data=data, layout=layout) py.iplot(fig, filename='luther_genre_quantity({0}-{1})'.format(yr_start, yr_stop))
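The split-and-count step above is equivalent to tokenizing each genre string and tallying occurrences; a stdlib-only sketch with made-up rows:

```python
from collections import Counter

genre_strings = ['Action, Crime, Drama', 'Drama, Romance', 'Action, Drama']
counts = Counter()
for s in genre_strings:
    # Mirror the notebook's cleanup: drop commas, then split on whitespace.
    counts.update(s.replace(',', '').split())
```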
charts.ipynb
bryantbiggs/movie_torrents
mit
Most Dominant Genre Among the Genres Listed per Title
def convert_frequency(ser, genres=genres): split_array = np.array(ser.strip().replace(',','').split(' ')) genre = genres.loc[split_array].argmax() return genre # add new column to dataframe classifying genre list as single genre of significance df['Genre_Single'] = df['Genre'].apply(convert_frequency) # look at number of single genre counts after extraction df_count = df['Genre_Single'].value_counts().reset_index() df_count.columns = ['Genre_Single', 'Count'] # bar chart of significant single genre in dataset trace = go.Bar(x=df_count['Genre_Single'], y=df_count['Count'], marker=dict(color=colors_dict['yellow'])) data = [trace] layout = go.Layout( title='Quantity of Dominant Genre Classifications ({0}-{1})'.format(yr_start, yr_stop), xaxis=dict( title='Genre', tickfont=dict(size=14, color=text_color)), yaxis=dict( title='Quantity of Classifications', titlefont=dict(size=16, color=text_color), tickfont=dict(size=14, color=text_color)), barmode='group', bargap=0.15, bargroupgap=0.1) fig = go.Figure(data=data, layout=layout) py.iplot(fig, filename='luther_dominant_genres({0}-{1})'.format(yr_start, yr_stop)) def df_genre_limit(df, genres): df = df[~df['Genre_Single'].isin(genres)] return df # get count of records before trimming by dominant genre cutoff genre_before = len(df) print('{0} records in dataframe before trimming by genres'.format(genre_before)) # trim by dominant genres cut_genres = ['Romance', 'Western'] df = df_genre_limit(df, cut_genres) genre_after = len(df) str_genres = ', '.join(cut_genres) print('{0} entries lost ({1}%) due to droppping dominant genres {2}'.format(genre_before-genre_after, round((genre_before - genre_after)/genre_before *100, 2), str_genres))
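convert_frequency above keeps, for each title, whichever of its listed genres is most common across the whole corpus. The same idea with plain dicts (the corpus counts below are hypothetical):

```python
corpus_counts = {'Drama': 300, 'Action': 180, 'Crime': 90, 'Romance': 40}

def dominant_genre(genre_string, counts=corpus_counts):
    # Among a title's listed genres, keep the one most frequent corpus-wide.
    tokens = genre_string.replace(',', '').split()
    return max(tokens, key=lambda g: counts.get(g, 0))

single = dominant_genre('Action, Crime, Romance')  # Action outranks the others
```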
charts.ipynb
bryantbiggs/movie_torrents
mit
Dominant Genre Quantities per Year
def get_stackedBar_trace(x_category, y_counts, _name, ind): ''' x_category -- category from feature set y_counts -- count of x_category in feature set _name -- _name of x_category ind -- number indices for color list Return: Plotly data trace for bar chart ''' return go.Bar(x=x_category, y=y_counts, name=_name, marker=dict(color=colors_lst[ind]), opacity=0.8) def get_stackedBar_traces(df, feature, count_label): traces = [] for ind, _feat in enumerate(df[feature].unique().tolist()): temp_df = df[df[feature] == _feat] _value_counts = temp_df[count_label].value_counts() temp_dict = _value_counts.to_dict() temp_dict = sorted(temp_dict.items()) feature_lst = [ft for ft, ct in temp_dict] count_lst = [ct for ft,ct in temp_dict] traces.append(get_stackedBar_trace(feature_lst, count_lst, _feat, ind)) return traces def get_stackedBar(_dataframe, feature, count_label, _title, _x_title, _y_title, _filename='stackedBar'): data = get_stackedBar_traces(_dataframe, feature, count_label) layout = go.Layout( title=_title, xaxis=dict( title=_x_title, tickfont=dict(size=14, color='rgb(107, 107, 107)') ), yaxis=dict( title=_y_title, titlefont=dict(size=16, color='rgb(107, 107, 107)'), tickfont=dict(size=14, color='rgb(107, 107, 107)'), dtick=20, ), barmode='stack',) fig = go.Figure(data=data, layout=layout) return py.iplot(fig, filename=_filename) _title = 'Genres Annually ({0}-{1})'.format(yr_start, yr_stop) _x_title = 'Year' _y_title = 'Number of Films' _filename = 'luther_stackedGenres_years({0}-{1})'.format(yr_start, yr_stop) get_stackedBar(df, 'Genre_Single', 'Year', _title, _x_title, _y_title, _filename='stackedBar') traces = [] for i,genre in enumerate(df['Genre_Single'].unique().tolist()): _genre_df = df[df['Genre_Single'] == genre] _value_counts = _genre_df['Year'].value_counts() gen = _value_counts.to_dict() gen = sorted(gen.items()) year_lst = [yr for yr,ct in gen] count_lst = [ct for yr,ct in gen] traces.append(get_stackedBar_trace(year_lst, count_lst, genre, i)) data = 
traces[::-1] layout = go.Layout( title='Genres Annually ({0}-{1})'.format(yr_start, yr_stop), xaxis=dict( title='Year', tickfont=dict(size=14, color='rgb(107, 107, 107)') ), yaxis=dict( title='Number of Films', titlefont=dict(size=16, color='rgb(107, 107, 107)'), tickfont=dict(size=14, color='rgb(107, 107, 107)'), dtick=20, ), barmode='stack',) fig = go.Figure(data=data, layout=layout) py.iplot(fig, filename='stacked-bar') # ratings df_rated = df['Rated'].value_counts().reset_index() df_rated.columns = ['Rated', 'Count'] df_rated def df_rating_limit(df, ratings): df = df[~df['Rated'].isin(ratings)] return df ratings_remove = ['NOT RATED', 'X', 'TV-14', 'NC-17'] df = df_rating_limit(df, ratings_remove) df_rated = df['Rated'].value_counts().reset_index() df_rated.columns = ['Rated', 'Count'] # bar chart of ratings rated_traces = go.Bar(x=df_rated['Rated'], y=df_rated['Count'], marker=dict(color=colors_dict['blue'])) data = [rated_traces] layout = go.Layout( title='Quantity of Titles by Rating ({0}-{1})'.format(yr_start, yr_stop), xaxis=dict( title='Rating', tickfont=dict(size=14, color=text_color)), yaxis=dict( title='Number of Titles', titlefont=dict(size=16, color=text_color), tickfont=dict(size=14, color=text_color)), barmode='group', bargap=0.15, bargroupgap=0.1) fig = go.Figure(data=data, layout=layout) py.iplot(fig, filename='luther_ratings({0}-{1})'.format(yr_start, yr_stop))
charts.ipynb
bryantbiggs/movie_torrents
mit
Remove Films Not Rated - PG-13, PG, G, or R
# get count of records before trimming by year cutoff rated_before = len(df) print('{0} records in dataframe before trimming by rating'.format(rated_before)) ratings = ['PG-13', 'PG', 'G', 'R'] df = df.loc[df['Rated'].isin(ratings)] rated_after = len(df) print('{0} entries lost ({1}%) due to limiting to only {2} ratings'.format(rated_before-rated_after, round((rated_before-rated_after)/rated_before *100, 2), ', '.join(ratings))) print('{0} entries lost total ({1}%)'.format(yr_before-rated_after, round((yr_before-rated_after)/yr_before *100, 2))) # Combine Genre_Single and Rating as a new label df['Genre_Rated'] = df['Genre_Single'] + ' ' + df['Rated'] df['Gen_Rat_Run'] = df['Genre_Rated'] + ' ' + df['Runtime'].apply(lambda x: str(x)) df['Gen_Rat_Bud'] = df['Genre_Rated'] + ' ' + df['Prod_Budget'].apply(lambda x: str(x)) df['Gen_Sin'] = df['Genre_Single'] df.columns colors_scat = colors_lst[:-2][::-1] df_scat = df[['Prod_Budget', 'Runtime', 'Gen_Rat_Bud', 'Gen_Rat_Run', 'Gen_Sin']] fig = FF.create_scatterplotmatrix(df_scat, diag='histogram', index='Gen_Sin', height=1000, width=1000, colormap=colors_scat[::-1]) py.iplot(fig, filename='Luther Scatterplot Matrix')
charts.ipynb
bryantbiggs/movie_torrents
mit
Log Transform Scatter Matrix
df['Log_Prod_Bud'] = np.log(df['Prod_Budget']) df['Log_Runtime'] = np.log(df['Runtime']) df['Log_Ttl_Tor'] = np.log(df['Total_Torrents']) colors_scat = colors_lst[:-2][::-1] df_scat = df[['Log_Ttl_Tor', 'Log_Prod_Bud', 'Log_Runtime', 'Gen_Rat_Bud', 'Gen_Rat_Run', 'Gen_Sin']] fig = FF.create_scatterplotmatrix(df_scat, diag='histogram', index='Gen_Sin', height=1000, width=1000, colormap=colors_scat[::-1]) _ = py.iplot(fig, filename='Log Luther Scatterplot Matrix')
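Log-transforming right-skewed quantities like production budgets compresses large values toward the bulk of the distribution: equal multiplicative jumps become equal additive steps. A quick numeric check (arbitrary numbers):

```python
import numpy as np

budgets = np.array([1e5, 1e6, 1e7, 1e8])
log_budgets = np.log(budgets)

# Each 10x jump in raw dollars becomes one equal step of log(10) after the log.
steps = np.diff(log_budgets)
```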
charts.ipynb
bryantbiggs/movie_torrents
mit
Drama Only
df_drama = df[df['Genre_Single'] == 'Drama'].reset_index() df_drama = df_drama.drop('index',axis=1) df_drama['Log_Bud_Rated'] = df['Log_Prod_Bud'].apply(lambda x: str(x)) + ' ' + df['Rated'] #df_scat = df_drama[['Log_Ttl_Tor', 'Log_Prod_Bud', 'Log_Runtime', 'Log_Bud_Rated', 'Gen_Sin']] df_scat = df_drama[['Total_Torrents', 'Prod_Budget', 'Runtime', 'Rated', 'Gen_Sin']] fig = FF.create_scatterplotmatrix(df_scat, diag='histogram', index='Gen_Sin', height=1000, width=1000, colormap=colors_scat[::-1]) _ = py.iplot(fig, filename='Log Drama Luther Scatterplot Matrix') from patsy import dmatrices patsy_formula = 'Total_Torrents ~ Prod_Budget + Year + Month + Runtime + Genre_Single' y, x = dmatrices(patsy_formula, data=df_sub, return_type='dataframe') import statsmodels.api as sm model = sm.OLS(y, x) results = model.fit() results.summary() from sklearn.linear_model import LinearRegression model = LinearRegression() model.fit(x, y) mod_lr_score = model.score(x, y) mod_lr_coef = model.coef_ model.results from sklearn import cross_validation as cv from sklearn import metrics x_train, x_test, y_train, y_test = cv.train_test_split(x,y,test_size=0.20,random_state=1234) model = LinearRegression().fit(x_train, y_train) # store results mean_sq_err = metrics.mean_squared_error(y_train,model.predict(x_train)) cv_mod_score = model.score(x_train, y_train) # reset x, y otherwise errors occur y, x = dmatrices(patsy_formula, data=df_sub, return_type='dataframe') from sklearn.cross_validation import KFold kf = KFold(len(df_sub), n_folds=10, shuffle=True) for train_index, test_index in kf: x_train, x_test = x.iloc[train_index], x.iloc[test_index] y_train, y_test = y.iloc[train_index], y.iloc[test_index] clf2 = LinearRegression().fit(x.iloc[train_index], y.iloc[train_index]) # store results mean_sq_errKf = metrics.mean_squared_error(y_train,model.predict(x_train)) cvKf_mod_score = clf2.score(x,y) #NORMAL RESULTS print('Model Linear Regression Score = {0}'.format(mod_lr_score)) print(' Mean 
Square Error = {0}'.format(mean_sq_err)) print(' Cross Validation Model Score = {0}'.format(cv_mod_score)) print(' Mean Squred Error K-Fold = {0}'.format(mean_sq_errKf)) print('Cross Val. K-Fold Model Score = {0}'.format(cvKf_mod_score)) fig = plt.figure(figsize=(12,8)) fig = sm.graphics.plot_regress_exog(results,'Prod_Budget', fig=fig)
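The K-fold cross-validation loop above can be sketched without sklearn, fitting ordinary least squares with numpy on synthetic data (all values below are made up for illustration):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(100, 2)
y = X @ np.array([2.0, -1.0]) + 0.5 + 0.01 * rng.randn(100)

# Add an intercept column, then rotate through K held-out folds.
Xd = np.hstack([np.ones((100, 1)), X])
K = 5
fold_mse = []
idx = np.arange(100)
for k in range(K):
    test = idx[k::K]                  # every K-th row forms the test fold
    train = np.setdiff1d(idx, test)
    beta, *_ = np.linalg.lstsq(Xd[train], y[train], rcond=None)
    resid = y[test] - Xd[test] @ beta
    fold_mse.append(np.mean(resid ** 2))

mean_mse = float(np.mean(fold_mse))   # small, since the noise level is tiny
```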
charts.ipynb
bryantbiggs/movie_torrents
mit
Log Transform
df.columns df_sub['log_budg']=np.log(df_sub.Prod_Budget) #df_sub['log_year']=np.log(df_sub.Year) #df_sub['log_run']=np.log(df_sub.Runtime) df_sub['log_tor']=np.log(df_sub.Total_Torrents) trans = df_sub[['log_budg', 'Year', 'log_tor']] plt.rcParams['figure.figsize'] = (15, 15) pd.tools.plotting.scatter_matrix(trans) log_patsy_formula = 'log_tor ~ log_budg + Year + Month' y, x = dmatrices(log_patsy_formula, data=df_sub, return_type='dataframe') import plotly.plotly as py from plotly.tools import FigureFactory as FF df_a = df_sub[['log_budg', 'Year', 'Month', 'log_tor']] fig = FF.create_scatterplotmatrix(df_a, diag='histogram', index='Month', height=800, width=800) py.iplot(fig, filename='Histograms along Diagonal Subplots') import statsmodels.formula.api as smf results = smf.ols(formula=log_patsy_formula, data=df_sub,).fit() results.summary() from sklearn.linear_model import LinearRegression model = LinearRegression() model.fit(x, y) # store results log_mod_lr_score = model.score(x,y) from sklearn import cross_validation as cv from sklearn import metrics x_train, x_test, y_train, y_test = cv.train_test_split(x,y,test_size=0.20,random_state=1234) model = LinearRegression().fit(x_train, y_train) # store results log_mean_sq_err = metrics.mean_squared_error(y_train,model.predict(x_train)) log_cv_mod_score = model.score(x_train, y_train) # reset x, y otherwise errors occur y, x = dmatrices(log_patsy_formula, data=df_sub, return_type='dataframe') from sklearn.cross_validation import KFold kf = KFold(len(df_sub), n_folds=10, shuffle=True) for train_index, test_index in kf: x_train, x_test = x.iloc[train_index], x.iloc[test_index] y_train, y_test = y.iloc[train_index], y.iloc[test_index] clf2 = LinearRegression().fit(x.iloc[train_index], y.iloc[train_index]) # store results log_mean_sq_errKf = metrics.mean_squared_error(y_train,model.predict(x_train)) log_cvKf_mod_score = clf2.score(x,y) #LOG RESULTS print('Log Model Linear Regression Score = {0}'.format(log_mod_lr_score)) 
print(' Log Mean Square Error = {0}'.format(log_mean_sq_err)) print(' Log Cross Validation Model Score = {0}'.format(log_cv_mod_score)) print(' Log Mean Squred Error K-Fold = {0}'.format(log_mean_sq_errKf)) print('Log Cross Val. K-Fold Model Score = {0}'.format(log_cvKf_mod_score)) df_TEST = pd.read_csv('data/test_data2.csv', encoding='latin-1') df_TEST['log_budg']=np.log(df_TEST.Prod_Budget) df_TEST['log_run']=np.log(df_TEST.Runtime) df_TEST['log_tor']=np.log(df_TEST.Total_Torrents) def split_to_array(ser): split_array = np.array(ser.strip().replace(',','').split(' ')) return pd.Series(split_array) genres = df_yr.Genre.apply(split_to_array) genres = pd.Series(genres.values.ravel()).dropna() genres = genres.value_counts().sort_values(ascending=False) def convert_frequency(ser, genres=genres): split_array = np.array(ser.strip().replace(',','').split(' ')) genre = genres.loc[split_array].argmax() return genre df_TEST['Genre_Single'] = df_TEST.Genre.apply(convert_frequency) log_patsy_formula_test = 'log_tor ~ log_budg + Year + Month + log_run + Genre_Single' y, x = dmatrices(log_patsy_formula_test, data=df_TEST, return_type='dataframe') print(clf2.score(x_test, y_test)) print(metrics.mean_squared_error(y_test,model.predict(x_test))) _ = plt.plot(y, model.predict(x), 'bo') plt.figure(figsize=(25,10)) ind = np.arange(len(yr_dict)) width = 0.35 bar_year = [year for year, count in yr_lst] bar_count = [count for year, count in yr_lst] plt.bar(ind, bar_count, width, color='r') plt.ylabel('Count') plt.xlabel('Year') plt.title('Number of Torrents per Year') plt.xticks(ind + width/2., (bar_year), rotation='vertical') plt.yticks(np.arange(0, 91, 5)) plt.show() #log_tor ~ log_budg + Year + Month + log_run + Genre_Single' fig = plt.figure(figsize=(12,8)) fig = sm.graphics.plot_regress_exog(results,'log_budg', fig=fig) fig = plt.figure(figsize=(12,8)) fig = sm.graphics.plot_regress_exog(results,'Year', fig=fig) fig = plt.figure(figsize=(12,8)) fig = 
sm.graphics.plot_regress_exog(results,'Month', fig=fig)
charts.ipynb
bryantbiggs/movie_torrents
mit
Streaming with tweepy The Twitter Streaming API is used to download Twitter messages in real time. We use the Streaming API instead of the REST API because the REST API is used to pull data from Twitter, while the Streaming API pushes messages to a persistent session. This allows the Streaming API to download more data in real time than could be done using the REST API. In Tweepy, an instance of tweepy.Stream establishes a streaming session and routes messages to a StreamListener instance. The on_data method of a stream listener receives all messages and calls functions according to the message type. But the on_data method is only a stub, so we need to implement the functionality by subclassing StreamListener. Using the Streaming API has three steps. Create a class inheriting from StreamListener Using that class, create a Stream object Connect to the Twitter API using the Stream object.
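The listener's data-processing step below boils down to parsing the raw JSON and pulling out a few user fields with defaults; that part can be exercised without a live connection (field names match the payloads the listener handles, the sample values are invented):

```python
import json

def extract_user_fields(raw_data):
    # Mirrors the listener's parsing: tolerate missing keys via defaults.
    data = json.loads(raw_data)
    user = data.get("user", {})
    return {"name": user.get("name", "undefined"),
            "lang": user.get("lang", "undefined")}

sample = json.dumps({"user": {"name": "alice", "lang": "en"}})
fields = extract_user_fields(sample)
empty = extract_user_fields(json.dumps({}))   # defaults kick in
```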
# Tweet listner class which subclasses from tweepy.StreamListener class TweetListner(tweepy.StreamListener): """Twitter stream listner""" def __init__(self, csocket): self.clientSocket = csocket def dataProcessing(self, data): """Process the data, before sending to spark streaming """ sendData = {} # data that is sent to spark streamer user = data.get("user", {}) name = user.get("name", "undefined").encode('utf-8') lang = user.get("lang", "undefined").encode('utf-8') sendData["name"] = name sendData["lang"] = lang #data_string = "{}:{}".format(name, followersCount) self.clientSocket.send(json.dumps(sendData) + u"\n") # append new line character, so that spark recognizes it logging.debug(json.dumps(sendData)) def on_data(self, raw_data): """ Called when raw data is received from connection. return False to stop stream and close connection. """ try: data = json.loads(raw_data) self.dataProcessing(data) #self.clientSocket.send(json.dumps(sendData) + u"\n") # Because the connection was breaking return True except Exception as e: logging.error("An unhandled exception has occured, check your data processing") logging.error(e) raise e def on_error(self, status_code): """Called when a non-200 status code is returned""" logging.error("A non-200 status code is returned: {}".format(status_code)) return True # Creating a proxy socket def createProxySocket(host, port): """ Returns a socket which can be used to connect to spark. """ try: s = socket.socket() # initialize socket instance s.bind((host, port)) # bind to the given host and port s.listen(5) # Enable a server to accept connections. 
logging.info("Listening on the port {}".format(port)) cSocket, address = s.accept() # waiting for a connection logging.info("Received Request from: {}".format(address)) return cSocket except socket.error as e: if e.errno == socket.errno.EADDRINUSE: # Address in use logging.error("The given host:port {}:{} is already in use"\ .format(host, port)) logging.info("Trying on port: {}".format(port + 1)) return createProxySocket(host, port + 1)
TweetAnalysis/Final/Q6/Dalon_4_RTD_MiniPro_Tweepy_Q6.ipynb
dalonlobo/GL-Mini-Projects
mit
Drawbacks of the Twitter streaming API The major drawback of the Streaming API is that Twitter’s Streaming API provides only a sample of the tweets that are occurring. The actual percentage of total tweets users receive with Twitter’s Streaming API varies heavily based on the criteria users request and the current traffic. Studies have estimated that using Twitter’s Streaming API users can expect to receive anywhere from 1% to over 40% of tweets in near real-time. The reason that you do not receive all of the tweets from the Twitter Streaming API is simply because Twitter doesn’t have the current infrastructure to support it, and they don’t want to; hence, the Twitter Firehose. Ref So we will use a hack, i.e. get the top trending topics and use those to filter the data. Problem with retweet count Maybe you're looking in the wrong place for the value. The Streaming API is in real time. When tweets are created and streamed, their retweet_count is always zero. The only time you'll see a non-zero retweet_count in the Streaming API is when you're streamed a tweet that represents a retweet. Those tweets have a child node called "retweeted_status" that contains the original tweet that was retweeted embedded within it. The retweet_count value attached to that node represents, roughly, the number of times that original tweet has been retweeted as of some time near when you were streamed the tweet. Retweets themselves are currently not retweetable, so should not have a non-zero retweet_count. Source: here This is quite normal and expected when you are using the streaming API endpoint; it's because you receive the tweets as they are posted live on the Twitter platform, and by the time you receive the tweet no other user has had a chance to retweet it, so retweet_count will always be 0. 
If you want to find out the retweet_count, you have to refetch this particular tweet some time later using the REST API; then you can see that retweet_count contains the number of retweets that have happened up to that particular point in time. Source: here
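As described, a streamed status only carries a meaningful retweet_count inside its embedded retweeted_status node; a small helper illustrating that lookup (the payloads below are simplified and invented):

```python
def original_retweet_count(status):
    # For a streamed retweet, the count lives on the embedded original tweet.
    original = status.get("retweeted_status")
    if original is not None:
        return original.get("retweet_count", 0)
    # Fresh tweets always stream with a zero count.
    return status.get("retweet_count", 0)

fresh = {"text": "hello", "retweet_count": 0}
retweet = {"text": "RT hello", "retweet_count": 0,
           "retweeted_status": {"text": "hello", "retweet_count": 512}}
```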
if __name__ == "__main__": try: api, auth = connectToTwitter() # connecting to twitter # Global information is available by using 1 as the WOEID # woeid = getWOEIDForTrendsAvailable(api, "Worldwide") # get the woeid of the worldwide host = "localhost" port = 8500 cSocket = createProxySocket(host, port) # Creating a socket while True: try: # Connect/reconnect the stream tweetStream = tweepy.Stream(auth, TweetListner(cSocket)) # Stream the twitter data # DON'T run this approach async or you'll just create a ton of streams! tweetStream.filter(track=["iphone", "iPhone", "iphoneX", "iphonex"]) # Filter on trending topics except IncompleteRead: # Oh well, reconnect and keep trucking continue except KeyboardInterrupt: # Or however you want to exit this loop tweetStream.disconnect() break except Exception as e: logging.error("Unhandled exception has occured") logging.error(e) continue except KeyboardInterrupt: # Keyboard interrupt called logging.error("KeyboardInterrupt was hit") except Exception as e: logging.error("Unhandled exception has occured") logging.error(e)
TweetAnalysis/Final/Q6/Dalon_4_RTD_MiniPro_Tweepy_Q6.ipynb
dalonlobo/GL-Mini-Projects
mit
SystemML Build information The following code shows build information for the SystemML installation in the environment.
from systemml import MLContext ml = MLContext(sc) print ("SystemML Built-Time:"+ ml.buildTime()) print(ml.info()) # Workaround for Python 2.7.13 to avoid certificate validation issue while downloading any file. import ssl try: _create_unverified_https_context = ssl._create_unverified_context except AttributeError: # Legacy Python that doesn't verify HTTPS certificates by default pass else: # Handle target environment that doesn't support HTTPS verification ssl._create_default_https_context = _create_unverified_https_context # Create label.txt file def createLabelFile(fileName): file = open(fileName, 'w') file.write('1,"Cat" \n') file.write('2,"Dog" \n') file.close()
samples/jupyter-notebooks/Image_Classify_Using_VGG_19_Transfer_Learning.ipynb
dusenberrymw/incubator-systemml
apache-2.0
Download model, proto files and convert them to SystemML format. Download Caffe Model (VGG-19), proto files (deployer, network and solver) and label file. Convert the Caffe model into SystemML input format.
# Download caffemodel and proto files def downloadAndConvertModel(downloadDir='.', trained_vgg_weights='trained_vgg_weights'): # Step 1: Download the VGG-19 model and other files. import errno import os import urllib # Create directory, if exists don't error out try: os.makedirs(os.path.join(downloadDir,trained_vgg_weights)) except OSError as exc: # Python >2.5 if exc.errno == errno.EEXIST and os.path.isdir(trained_vgg_weights): pass else: raise # Download deployer, network, solver proto and label files. urllib.urlretrieve('https://raw.githubusercontent.com/apache/systemml/master/scripts/nn/examples/caffe2dml/models/imagenet/vgg19/VGG_ILSVRC_19_layers_deploy.proto', os.path.join(downloadDir,'VGG_ILSVRC_19_layers_deploy.proto')) urllib.urlretrieve('https://raw.githubusercontent.com/apache/systemml/master/scripts/nn/examples/caffe2dml/models/imagenet/vgg19/VGG_ILSVRC_19_layers_network.proto',os.path.join(downloadDir,'VGG_ILSVRC_19_layers_network.proto')) #TODO: After downloading network file (VGG_ILSVRC_19_layers_network.proto) , change num_output from 1000 to 2 urllib.urlretrieve('https://raw.githubusercontent.com/apache/systemml/master/scripts/nn/examples/caffe2dml/models/imagenet/vgg19/VGG_ILSVRC_19_layers_solver.proto',os.path.join(downloadDir,'VGG_ILSVRC_19_layers_solver.proto')) # TODO: set values as descrived below in VGG_ILSVRC_19_layers_solver.proto (Possibly through APIs whenever available) # test_iter: 100 # stepsize: 40 # max_iter: 200 # Create labels for data ### 1,"Cat" ### 2,"Dog" createLabelFile(os.path.join(downloadDir, trained_vgg_weights, 'labels.txt')) # TODO: Following line commented as its 500MG file, if u need to download it please uncomment it and run. 
# urllib.urlretrieve('http://www.robots.ox.ac.uk/~vgg/software/very_deep/caffe/VGG_ILSVRC_19_layers.caffemodel', os.path.join(downloadDir,'VGG_ILSVRC_19_layers.caffemodel')) # Step 2: Convert the caffemodel to trained_vgg_weights directory import systemml as sml sml.convert_caffemodel(sc, os.path.join(downloadDir,'VGG_ILSVRC_19_layers_deploy.proto'), os.path.join(downloadDir,'VGG_ILSVRC_19_layers.caffemodel'), os.path.join(downloadDir,trained_vgg_weights)) return
samples/jupyter-notebooks/Image_Classify_Using_VGG_19_Transfer_Learning.ipynb
dusenberrymw/incubator-systemml
apache-2.0
Classify images This function classifies images specified through URLs. Input Parameters: urls: List of URLs printTopKData (default False): Whether to print the top K indices and probabilities topK: Number of top elements to be displayed.
import numpy as np import urllib from systemml.mllearn import Caffe2DML import systemml as sml def classifyImages(urls,img_shape=(3, 224, 224), printTopKData=False, topK=5, downloadDir='.', trained_vgg_weights='trained_vgg_weights'): size = (img_shape[1], img_shape[2]) vgg = Caffe2DML(sqlCtx, solver=os.path.join(downloadDir,'VGG_ILSVRC_19_layers_solver.proto'), input_shape=img_shape) vgg.load(trained_vgg_weights) for url in urls: outFile = 'inputTest.jpg' urllib.urlretrieve(url, outFile) from IPython.display import Image, display display(Image(filename=outFile)) print ("Prediction of above image to ImageNet Class using"); ## Do image classification through SystemML processing from PIL import Image input_image = sml.convertImageToNumPyArr(Image.open(outFile), img_shape=img_shape , color_mode='BGR', mean=sml.getDatasetMean('VGG_ILSVRC_19_2014')) print ("Image preprocessed through SystemML :: ", vgg.predict(input_image)[0]) if(printTopKData == True): sysml_proba = vgg.predict_proba(input_image) printTopK(sysml_proba, 'SystemML BGR', topK) from pyspark.ml.linalg import Vectors import os import systemml as sml def getLabelFeatures(filename, train_dir, img_shape): from PIL import Image vec = Vectors.dense(sml.convertImageToNumPyArr(Image.open(os.path.join(train_dir, filename)), img_shape=img_shape)[0,:]) if filename.lower().startswith('cat'): return (1, vec) elif filename.lower().startswith('dog'): return (2, vec) else: raise ValueError('Expected the filename to start with either cat or dog') from pyspark.sql.functions import rand import os def createTrainingDF(train_dir, train_data_file, img_shape): list_jpeg_files = os.listdir(train_dir) # 10 files per partition train_df = sc.parallelize(list_jpeg_files, int(len(list_jpeg_files)/10)).map(lambda filename : getLabelFeatures(filename, train_dir, img_shape)).toDF(['label', 'features']).orderBy(rand()) # Optional: but helps separate conversion-related work from training # train_df.write.parquet(train_data_file) # 
'kaggle-cats-dogs.parquet' return train_df def readTrainingDF(train_dir, train_data_file): train_df = sqlContext.read.parquet(train_data_file) return train_df # downloadAndConvertModel(downloadDir, trained_vgg_weights) # TODO: Take "TODO" actions mentioned in the downloadAndConvertModel() function after calling downloadAndConvertModel() function. def retrainModel(img_shape, downloadDir, trained_vgg_weights, train_dir, train_data_file, vgg_new_model): # Let downloadAndConvertModel() function be commented out, as it needs to be called separately (which is done in cell above) and manual action to be taken after calling it. # downloadAndConvertModel(downloadDir, trained_vgg_weights) # TODO: Take "TODO" actions mentioned in the downloadAndConvertModel() function after calling that function. train_df = createTrainingDF(train_dir, train_data_file, img_shape) ## Write from input files OR read if its already written/converted # train_df = readTrainingDF(train_dir, train_data_file) # Load the model vgg = Caffe2DML(sqlCtx, solver=os.path.join(downloadDir,'VGG_ILSVRC_19_layers_solver.proto'), input_shape=img_shape) vgg.load(weights=os.path.join(downloadDir,trained_vgg_weights), ignore_weights=['fc8']) vgg.set(debug=True).setExplain(True) # Train the model using new data vgg.fit(train_df) # Save the trained model vgg.save(vgg_new_model) return vgg import numpy as np import urllib from systemml.mllearn import Caffe2DML import systemml as sml def classifyImagesWTransfLearning(urls, model, img_shape=(3, 224, 224), printTopKData=False, topK=5): size = (img_shape[1], img_shape[2]) # vgg.load(trained_vgg_weights) for url in urls: outFile = 'inputTest.jpg' urllib.urlretrieve(url, outFile) from IPython.display import Image, display display(Image(filename=outFile)) print ("Prediction of above image to ImageNet Class using"); ## Do image classification through SystemML processing from PIL import Image input_image = sml.convertImageToNumPyArr(Image.open(outFile), img_shape=img_shape , 
color_mode='BGR', mean=sml.getDatasetMean('VGG_ILSVRC_19_2014')) print ("Image preprocessed through SystemML :: ", model.predict(input_image)[0]) if(printTopKData == True): sysml_proba = model.predict_proba(input_image) printTopK(sysml_proba, 'SystemML BGR', topK)
samples/jupyter-notebooks/Image_Classify_Using_VGG_19_Transfer_Learning.ipynb
dusenberrymw/incubator-systemml
apache-2.0
Sample code to retrain the model and use it to classify images in two different ways.

There are a couple of parameters to set, based on what you are looking for:
1. printTopKData (default False): if set to True, the top K results (probabilities and indices) will be displayed.
2. topK (default 5): how many entries (K) to display.
3. Directories, the data file name, the model name, and the directory where the data has been downloaded.
# ImageNet specific parameters
img_shape = (3, 224, 224)

# Setting anything other than the current directory causes a "network file not found" issue, as the
# network file location is defined in the solver file without a path, so it is searched for in the current dir.
downloadDir = '.'  # '/home/asurve/caffe_models'
trained_vgg_weights = 'trained_vgg_weights'
train_dir = '/home/asurve/data/keggle/dogs_vs_cats_2/train'
train_data_file = 'kaggle-cats-dogs.parquet'
vgg_new_model = 'kaggle-cats-dogs-model_2'

printTopKData = True
topK = 5

urls = ['http://cdn3-www.dogtime.com/assets/uploads/gallery/goldador-dog-breed-pictures/puppy-1.jpg',
        'https://lh3.googleusercontent.com/-YdeAa1Ff4Ac/VkUnQ4vuZGI/AAAAAAAAAEg/nBiUn4pp6aE/w800-h800/images-6.jpeg',
        'https://upload.wikimedia.org/wikipedia/commons/thumb/5/58/MountainLion.jpg/312px-MountainLion.jpg']

vgg = retrainModel(img_shape, downloadDir, trained_vgg_weights, train_dir, train_data_file, vgg_new_model)
classifyImagesWTransfLearning(urls, vgg, img_shape, printTopKData, topK)

img_shape = (3, 224, 224)
printTopKData = True
topK = 5
# Setting anything other than the current directory causes a "network file not found" issue, as the
# network file location is defined in the solver file without a path, so it is searched for in the current dir.
downloadDir = '.'  # '/home/asurve/caffe_models'
trained_vgg_weights = 'kaggle-cats-dogs-model_2'
urls = ['http://cdn3-www.dogtime.com/assets/uploads/gallery/goldador-dog-breed-pictures/puppy-1.jpg',
        'https://lh3.googleusercontent.com/-YdeAa1Ff4Ac/VkUnQ4vuZGI/AAAAAAAAAEg/nBiUn4pp6aE/w800-h800/images-6.jpeg',
        'https://upload.wikimedia.org/wikipedia/commons/thumb/5/58/MountainLion.jpg/312px-MountainLion.jpg']
classifyImages(urls, img_shape, printTopKData, topK, downloadDir, trained_vgg_weights)
samples/jupyter-notebooks/Image_Classify_Using_VGG_19_Transfer_Learning.ipynb
dusenberrymw/incubator-systemml
apache-2.0
Let's open a project with a UNIQUE name. This will be the name used in the DB, so make sure it is new and not too short. Opening a project will always create a non-existing project and reopen an existing one. You cannot choose between opening types as you would with a file. This is a precaution so that you do not accidentally delete your project.
# Project.delete('test')
project = Project('test')
examples/rp/test_worker.ipynb
thempel/adaptivemd
lgpl-2.1
Again we name it pyemma for later reference.

Add generators to project

The next step is to add these to the project for later usage. We pick the .generators store and just add it. Consider a store to work like a set() in Python: it contains each object only once and is not ordered. Therefore we need a name to find the objects later. Of course you can always iterate over all objects, but the order is not guaranteed. To be precise, there is an order by time of creation of the object, but it is only accurate to seconds, and it really is the time the object was created, not stored.
project.generators.add(engine)
project.generators.add(modeller)
project.files.one

sc = WorkerScheduler(project.resource)
sc.enter(project)

t = engine.task_run_trajectory(project.new_trajectory(pdb_file, 100, restart=True)).extend(50).extend(100)
sc(t)

import radical.pilot as rp
rp.TRANSFER

sc.advance()

for f in project.trajectories:
    print f.basename, f.length, DT(f.created).time

for t in project.tasks:
    print t.stderr.objs['worker']

print project.generators

t1 = engine.task_run_trajectory(project.new_trajectory(pdb_file, 100, restart=True))
t2 = t1.extend(100)
t2.trajectory.restart
project.tasks.add(t2)

for f in project.trajectories:
    print f.drive, f.basename, len(f), f.created, f.__time__, f.exists, hex(f.__uuid__)

for f in project.files:
    print f.drive, f.path, f.created, f.__time__, f.exists, hex(f.__uuid__)

w = project.workers.last
print w.state
print w.command

for t in project.tasks:
    print t.state, t.worker.hostname if t.worker else 'None'

sc.advance()

t1 = engine.task_run_trajectory(project.new_trajectory(pdb_file, 100))
t2 = t1.extend(100)
project.tasks.add(t2)

# from adaptivemd.engine import Trajectory
# t3 = engine.task_run_trajectory(Trajectory('staging:///trajs/0.dcd', pdb_file, 100)).extend(100)
# t3.dependencies = []

# def get_created_files(t, s):
#     if t.is_done():
#         print 'done', s
#         return s - set(t.added_files)
#     else:
#         adds = set(t.added_files)
#         rems = set(s.required[0] for s in t._pre_stage)
#         print '+', adds
#         print '-', rems
#         q = set(s) - adds | rems
#         if t.dependencies is not None:
#             for d in t.dependencies:
#                 q = get_created_files(d, q)
#         return q

# get_created_files(t3, {})

for w in project.workers:
    print w.hostname, w.state

w = project.workers.last
print w.state
print w.command
w.command = 'shutdown'

for t in project.tasks:
    print t.state, t.worker.hostname if t.worker else 'None'

for f in project.trajectories:
    print f.drive, f.basename, len(f), f.created, f.__time__, f.exists, hex(f.__uuid__)

project.trajectories.one[0]

t = engine.task_run_trajectory(project.new_trajectory(project.trajectories.one[0], 100))
project.tasks.add(t)

print project.files
print project.tasks

t = modeller.execute(list(project.trajectories))
project.tasks.add(t)

from uuid import UUID
# Incomplete query left as in the original notebook:
# project.storage.tasks._document.find_one({'_dict': {'generator': {'_dict': }}})

genlist = ['openmm']

import time

scheduler = sc
prefetch = 1

while True:
    scheduler.advance()
    if scheduler.is_idle:
        for _ in range(prefetch):
            tasklist = scheduler(project.storage.tasks.consume_one())
            if len(tasklist) == 0:
                break
    time.sleep(2.0)
examples/rp/test_worker.ipynb
thempel/adaptivemd
lgpl-2.1
Simple Convolutional Neural Network for CIFAR-10

The CIFAR-10 problem is best solved using a Convolutional Neural Network (CNN). We can quickly start off by defining all of the classes and functions we will need in this example.
# Simple CNN model for CIFAR-10
import numpy
from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import Flatten
from keras.constraints import maxnorm
from keras.optimizers import SGD
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
K.set_image_dim_ordering('th')
NEU/Sai_Raghuram_Kothapalli_DL/CIFAR_10-Keras.ipynb
nikbearbrown/Deep_Learning
mit
As is good practice, we next initialize the random number seed with a constant to ensure the results are reproducible.
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
NEU/Sai_Raghuram_Kothapalli_DL/CIFAR_10-Keras.ipynb
nikbearbrown/Deep_Learning
mit
Next we can load the CIFAR-10 dataset.
# load data
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
NEU/Sai_Raghuram_Kothapalli_DL/CIFAR_10-Keras.ipynb
nikbearbrown/Deep_Learning
mit
The pixel values are in the range of 0 to 255 for each of the red, green and blue channels. It is good practice to work with normalized data. Because the input values are well understood, we can easily normalize to the range 0 to 1 by dividing each value by the maximum observation which is 255. Note, the data is loaded as integers, so we must cast it to floating point values in order to perform the division.
# normalize inputs from 0-255 to 0.0-1.0
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train = X_train / 255.0
X_test = X_test / 255.0
NEU/Sai_Raghuram_Kothapalli_DL/CIFAR_10-Keras.ipynb
nikbearbrown/Deep_Learning
mit
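The scaling above maps the extreme pixel values exactly to the ends of the target range. A quick standalone check on a few representative values:

```python
import numpy as np

# Dividing by the maximum pixel value (255) rescales 0..255 into 0.0..1.0.
pixels = np.array([0, 128, 255], dtype='float32')
normalized = pixels / 255.0
print(normalized.min(), normalized.max())
```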
The output variables are defined as a vector of integers from 0 to 9, one per class. We can use a one hot encoding to transform them into a binary matrix in order to best model the classification problem. We know there are 10 classes for this problem, so we can expect the binary matrix to have a width of 10.
# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
NEU/Sai_Raghuram_Kothapalli_DL/CIFAR_10-Keras.ipynb
nikbearbrown/Deep_Learning
mit
Let’s start off by defining a simple CNN structure as a baseline and evaluate how well it performs on the problem. We will use a structure with two convolutional layers followed by max pooling and a flattening out of the network to fully connected layers to make predictions. Our baseline network structure can be summarized as follows:
- Convolutional input layer, 32 feature maps with a size of 3×3, a rectifier activation function and a weight constraint of max norm set to 3.
- Dropout set to 20%.
- Convolutional layer, 32 feature maps with a size of 3×3, a rectifier activation function and a weight constraint of max norm set to 3.
- Max Pool layer with size 2×2.
- Flatten layer.
- Fully connected layer with 512 units and a rectifier activation function.
- Dropout set to 50%.
- Fully connected output layer with 10 units and a softmax activation function.

A logarithmic loss function is used with the stochastic gradient descent optimization algorithm, configured with a large momentum and weight decay, starting with a learning rate of 0.01.
# Create the model
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(3, 32, 32), padding='same', activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same', kernel_constraint=maxnorm(3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(512, activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
# Compile model with the SGD optimizer described above
epochs = 10
lrate = 0.01
decay = lrate/epochs
sgd = SGD(lr=lrate, momentum=0.9, decay=decay, nesterov=False)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['categorical_accuracy'])
print(model.summary())
NEU/Sai_Raghuram_Kothapalli_DL/CIFAR_10-Keras.ipynb
nikbearbrown/Deep_Learning
mit
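To see what the Flatten layer receives in the baseline network above, the spatial sizes can be sketched by hand (the `spatial_size` helper below is illustrative, not part of Keras; it assumes the 'same'-padded 3×3 convolutions and default-stride 2×2 max pooling described above):

```python
def spatial_size(size, pool=2):
    # 'same'-padded convolution keeps the spatial size unchanged;
    # 2x2 max pooling with the default stride halves it.
    return size // pool

side = 32                      # CIFAR-10 input is 32x32
side = spatial_size(side)      # conv, conv (size unchanged), then 2x2 max pool -> 16
flattened = side * side * 32   # 16 * 16 * 32 = 8192 inputs to the Dense(512) layer
print(side, flattened)
```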
We can fit this model with 10 epochs and a batch size of 32. A small number of epochs was chosen to help keep this tutorial moving. Normally the number of epochs would be one or two orders of magnitude larger for this problem. Once the model is fit, we evaluate it on the test dataset and print out the classification accuracy.
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=epochs, batch_size=32)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
NEU/Sai_Raghuram_Kothapalli_DL/CIFAR_10-Keras.ipynb
nikbearbrown/Deep_Learning
mit
We can improve the accuracy significantly by creating a much deeper network.

Larger Convolutional Neural Network for CIFAR-10

We have seen that a simple CNN performs poorly on this complex problem. In this section we look at scaling up the size and complexity of our model. Let’s design a deep version of the simple CNN above. We can introduce an additional round of convolutions with many more feature maps. We will use the same pattern of Convolutional, Dropout, Convolutional and Max Pooling layers. This pattern will be repeated 3 times with 32, 64, and 128 feature maps. The effect will be an increasing number of feature maps with a smaller and smaller size, given the max pooling layers. Finally an additional and larger Dense layer will be used at the output end of the network in an attempt to better translate the large number of feature maps to class values. We can summarize the new network architecture as follows:
- Convolutional input layer, 32 feature maps with a size of 3×3 and a rectifier activation function.
- Dropout layer at 20%.
- Convolutional layer, 32 feature maps with a size of 3×3 and a rectifier activation function.
- Max Pool layer with size 2×2.
- Convolutional layer, 64 feature maps with a size of 3×3 and a rectifier activation function.
- Dropout layer at 20%.
- Convolutional layer, 64 feature maps with a size of 3×3 and a rectifier activation function.
- Max Pool layer with size 2×2.
- Convolutional layer, 128 feature maps with a size of 3×3 and a rectifier activation function.
- Dropout layer at 20%.
- Convolutional layer, 128 feature maps with a size of 3×3 and a rectifier activation function.
- Max Pool layer with size 2×2.
- Flatten layer.
- Dropout layer at 20%.
- Fully connected layer with 1024 units and a rectifier activation function.
- Dropout layer at 20%.
- Fully connected layer with 512 units and a rectifier activation function.
- Dropout layer at 20%.
- Fully connected output layer with 10 units and a softmax activation function.
We can very easily define this network topology in Keras, as follows:
# Create the model
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(3, 32, 32), activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dropout(0.2))
model.add(Dense(1024, activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))
# Compile model with the SGD optimizer described above
epochs = 10
lrate = 0.01
decay = lrate/epochs
sgd = SGD(lr=lrate, momentum=0.9, decay=decay, nesterov=False)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['categorical_accuracy'])
print(model.summary())
NEU/Sai_Raghuram_Kothapalli_DL/CIFAR_10-Keras.ipynb
nikbearbrown/Deep_Learning
mit
We can fit and evaluate this model using the same procedure as above and the same number of epochs, but with a larger batch size of 64, found through some minor experimentation.
numpy.random.seed(seed)
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=epochs, batch_size=64)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
NEU/Sai_Raghuram_Kothapalli_DL/CIFAR_10-Keras.ipynb
nikbearbrown/Deep_Learning
mit
This workbook shows an example derived from the EDA exercise in Chapter 2 of Doing Data Science, by O'Neil and Schutt.
clicks = Table.read_table("http://stat.columbia.edu/~rachel/datasets/nyt1.csv")
clicks
Clicks.ipynb
deculler/DataScienceTableDemos
bsd-2-clause
Well. Half a million rows. That would be painful in Excel. Add a column of 1's, so that a sum will count people.
age_upper_bounds = [18, 25, 35, 45, 55, 65]

def age_range(n):
    if n == 0:
        return '0'
    lower = 1
    for upper in age_upper_bounds:
        if lower <= n < upper:
            return str(lower) + '-' + str(upper-1)
        lower = upper
    return str(lower) + '+'

# a little test
np.unique([age_range(n) for n in range(100)])

clicks["Age Range"] = clicks.apply(age_range, 'Age')
clicks["Person"] = 1
clicks
Clicks.ipynb
deculler/DataScienceTableDemos
bsd-2-clause
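To sanity-check the bucket boundaries, the bucketing logic above can be exercised on its own (`age_range` is reproduced here so the snippet is self-contained):

```python
age_upper_bounds = [18, 25, 35, 45, 55, 65]

def age_range(n):
    # 0 encodes a missing age; otherwise return the half-open bucket label.
    if n == 0:
        return '0'
    lower = 1
    for upper in age_upper_bounds:
        if lower <= n < upper:
            return str(lower) + '-' + str(upper - 1)
        lower = upper
    return str(lower) + '+'

# Boundary cases: last age in a bucket, first age of the next, and the open top bucket.
print(age_range(17), age_range(18), age_range(65), age_range(0))
```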
Now we can group the table by Age Range and count how many clicks come from each range.
clicks_by_age = clicks.group('Age Range', sum)
clicks_by_age
clicks_by_age.select(['Age Range', 'Clicks sum', 'Impressions sum', 'Person sum']).barh('Age Range')
Clicks.ipynb
deculler/DataScienceTableDemos
bsd-2-clause
Now we can do some other interesting summaries of these categories
clicks_by_age['Gender Mix'] = clicks_by_age['Gender sum'] / clicks_by_age['Person sum']
clicks_by_age["CTR"] = clicks_by_age['Clicks sum'] / clicks_by_age['Impressions sum']
clicks_by_age.select(['Age Range', 'Person sum', 'Gender Mix', 'CTR'])
# Format some columns as percent with limited precision
clicks_by_age.set_format('Gender Mix', PercentFormatter(1))
clicks_by_age.set_format('CTR', PercentFormatter(2))
clicks_by_age
Clicks.ipynb
deculler/DataScienceTableDemos
bsd-2-clause
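The CTR column above is just an elementwise ratio of two columns. A minimal NumPy illustration with hypothetical (made-up) clicks and impressions per age bucket:

```python
import numpy as np

# Hypothetical per-bucket totals, only to show the elementwise ratio
# used for the click-through rate above.
clicks_sum = np.array([50.0, 120.0, 80.0])
impressions_sum = np.array([5000.0, 6000.0, 4000.0])
ctr = clicks_sum / impressions_sum
print(ctr)
```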
We might want to do the click rate calculation a little more carefully. We don't care about clicks where there are zero impressions or missing age/gender information. So let's filter those out of our data set.
impressed = clicks.where(clicks['Age'] > 0).where('Impressions')
impressed

# Impressions by age and gender
impressed.pivot(rows='Gender', columns='Age Range', values='Impressions', collect=sum)
impressed.pivot("Age Range", "Gender", "Clicks", sum)
impressed.pivot_hist('Age Range', 'Impressions')
distributions = impressed.pivot_bin('Age Range', 'Impressions')
distributions

impressed['Gen'] = [['Male', 'Female'][i] for i in impressed['Gender']]
impressed
Clicks.ipynb
deculler/DataScienceTableDemos
bsd-2-clause
Group returns a new table. If we wanted to specify the formats on columns of this table, assign it to a name.
# How does gender and clicks vary with age?
gi = impressed.group('Age Range', np.mean).select(['Age Range', 'Gender mean', 'Clicks mean'])
gi.set_format(['Gender mean', 'Clicks mean'], PercentFormatter)
gi
Clicks.ipynb
deculler/DataScienceTableDemos
bsd-2-clause
Implement Preprocess Functions

Normalize

In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
from sklearn.preprocessing import MinMaxScaler

def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    # MinMaxScaler().fit_transform(x)
    # x_max = np.max(x)
    # x_min = np.min(x)
    # return (x - x_min.astype(np.float32)) / (x_max - x_min).astype(np.float32)
    return (x / 255.).astype(np.float32)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
image-classification/dlnd_image_classification.ipynb
kvr777/deep-learning
mit
One-hot encode

Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function. Hint: Don't reinvent the wheel.
from sklearn.preprocessing import LabelBinarizer

def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    # fit the encoder to the 10 possible label values
    encoder = LabelBinarizer(neg_label=0, pos_label=1, sparse_output=False)
    encoder.fit(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]))
    # encode input
    encoded_x = encoder.transform(x)
    # Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
    encoded_x = encoded_x.astype(np.float32)
    return encoded_x


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
image-classification/dlnd_image_classification.ipynb
kvr777/deep-learning
mit
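For reference, the same encoding can be sketched without scikit-learn: indexing an identity matrix by the label array yields the one-hot rows. This is a minimal sketch (the helper name `one_hot_encode_np` is made up here), assuming integer labels in 0-9:

```python
import numpy as np

def one_hot_encode_np(x, num_classes=10):
    # Row i of the identity matrix is the one-hot vector for class i,
    # so fancy indexing by the labels builds the whole encoded matrix.
    return np.eye(num_classes, dtype=np.float32)[np.asarray(x)]

print(one_hot_encode_np([0, 9, 3]))
```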
Convolution and Max Pooling Layer

Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides. We recommend you use same padding, but you're welcome to use any padding.
* Add bias.
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides. We recommend you use same padding, but you're welcome to use any padding.

Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    # Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor
    W = tf.Variable(tf.truncated_normal(
        [*conv_ksize, x_tensor.get_shape().as_list()[-1], conv_num_outputs],
        mean=0.0, stddev=0.1))
    b = tf.Variable(tf.random_normal([conv_num_outputs], mean=0.0, stddev=0.1))
    # Apply a convolution to x_tensor using weight and conv_strides
    cnn = tf.nn.conv2d(x_tensor, W, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')
    # Add bias
    cnn = tf.nn.bias_add(cnn, b)
    # Add a nonlinear activation to the convolution (the result must be kept)
    cnn = tf.nn.relu(cnn)
    # Apply Max Pooling using pool_ksize and pool_strides
    cnn = tf.nn.max_pool(cnn,
                         ksize=[1, pool_ksize[0], pool_ksize[1], 1],
                         strides=[1, pool_strides[0], pool_strides[1], 1],
                         padding='SAME')
    return cnn


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
image-classification/dlnd_image_classification.ipynb
kvr777/deep-learning
mit
Fully-Connected Layer

Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs).

Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # return tf.contrib.layers.fully_connected(inputs=x_tensor, num_outputs=num_outputs)
    return tf.layers.dense(inputs=x_tensor, units=num_outputs, activation=tf.nn.relu)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
image-classification/dlnd_image_classification.ipynb
kvr777/deep-learning
mit
Output Layer

Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs).

Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.

Note: Activation, softmax, or cross entropy should not be applied to this.
def output(x_tensor, num_outputs):
    """
    Apply a output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    return tf.layers.dense(inputs=x_tensor, units=num_outputs)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
image-classification/dlnd_image_classification.ipynb
kvr777/deep-learning
mit
Create Convolutional Model

Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
* Apply 1, 2, or 3 Convolution and Max Pool layers
* Apply a Flatten Layer
* Apply 1, 2, or 3 Fully Connected Layers
* Apply an Output Layer
* Return the output
* Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that holds dropout keep probability.
    : return: Tensor that represents logits
    """
    # Apply Convolution and Max Pool layers
    # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    conv_ksize = (2, 2)
    conv_strides = (1, 1)
    pool_ksize = (2, 2)
    pool_strides = (1, 1)
    conv_num_outputs1 = 32
    conv_num_outputs2 = 64
    fc_num_outputs = 256
    num_outputs = 10

    nn = conv2d_maxpool(x, conv_num_outputs1, conv_ksize, conv_strides, pool_ksize, pool_strides)
    nn = conv2d_maxpool(nn, conv_num_outputs2, conv_ksize, conv_strides, pool_ksize, pool_strides)

    # Apply a Flatten Layer
    nn = flatten(nn)

    # Apply a Fully Connected Layer with dropout
    nn = fully_conn(nn, fc_num_outputs)
    nn = tf.nn.dropout(nn, keep_prob=keep_prob)

    # Apply an Output Layer
    nn = output(nn, num_outputs)

    return nn


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""

##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)
image-classification/dlnd_image_classification.ipynb
kvr777/deep-learning
mit
Show Stats

Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    training_loss = session.run(cost, feed_dict={
        x: feature_batch, y: label_batch, keep_prob: 1.})
    validation_acc = session.run(accuracy, feed_dict={
        x: valid_features, y: valid_labels, keep_prob: 1.})
    training_acc = session.run(accuracy, feed_dict={
        x: feature_batch, y: label_batch, keep_prob: 1.})
    print('Loss: {:>10.4f} Tr Acc: {:.6f} Valid Acc: {:.6f}'.format(
        training_loss, training_acc, validation_acc))
image-classification/dlnd_image_classification.ipynb
kvr777/deep-learning
mit