Finally, we can also ask the program to display the solution of the dual of the problem being solved. To do so, we simply append --dual to the command: python SimplexSolver.py --input file.txt --output out.txt --dual or, alternatively, python SimplexSolver.py --input file.txt --dual A...
%run ..\PySimplex\SimplexSolver.py --input ..\Files\file.txt --dual
Documentation/Tutorial SimplexSolver con Python.ipynb
carlosclavero/PySimplex
gpl-3.0
A help option has also been included, as a reminder of the meaning of each input parameter and of how the program is run. To display this help, simply execute python SimplexSolver.py --help The help text is shown below:
%run ..\PySimplex\SimplexSolver.py --help
Load in house sales data For this notebook, we use a subset of the King County housing dataset created by randomly selecting 40% of the houses in the full dataset.
sales = graphlab.SFrame('kc_house_data_small.gl/')
machine_learning/2_regression/assignment/week6/week-6-local-regression-assignment-exercise.ipynb
tuanavu/coursera-university-of-washington
mit
Import useful functions from previous notebooks To efficiently compute pairwise distances among data points, we will convert the SFrame into a 2D Numpy array. First import the numpy library and then copy and paste get_numpy_data() from the second notebook of Week 2.
import numpy as np  # note this allows us to refer to numpy as np instead

def get_numpy_data(data_sframe, features, output):
    data_sframe['constant'] = 1  # this is how you add a constant column to an SFrame
    # add the column 'constant' to the front of the features list so that we can extract it along with the oth...
We will also need the normalize_features() function from Week 5 that normalizes all feature columns to unit norm. Paste this function below.
def normalize_features(feature_matrix):
    norms = np.linalg.norm(feature_matrix, axis=0)
    normalized_features = feature_matrix / norms
    return (normalized_features, norms)
Split data into training, test, and validation sets
(train_and_validation, test) = sales.random_split(.8, seed=1)  # initial train/test split
(train, validation) = train_and_validation.random_split(.8, seed=1)  # split training set into training and validation sets
Extract features and normalize Using all of the numerical inputs listed in feature_list, transform the training, test, and validation SFrames into Numpy arrays:
feature_list = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'waterfront', 'view', 'condition', 'grade', 'sqft_above', 'sqft_...
In computing distances, it is crucial to normalize features. Otherwise, for example, the sqft_living feature (typically on the order of thousands) would exert a much larger influence on distance than the bedrooms feature (typically on the order of ones). We divide each column of the training feature matrix by its 2-nor...
features_train, norms = normalize_features(features_train)  # normalize training set features (columns)
features_test = features_test / norms  # normalize test set by training set norms
features_valid = features_valid / norms  # normalize validation set by training set norms
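The effect of scale on Euclidean distance can be seen in a tiny synthetic example (hypothetical numbers, not the King County data): before normalization the square-footage column dominates the distance completely, while after dividing each column by the training-set 2-norms the bedroom difference matters again.

```python
import numpy as np

# Toy data (hypothetical): two features on very different scales,
# as with bedrooms vs. sqft_living.
toy_train = np.array([[3.0, 1500.0],
                      [8.0, 2500.0]])
toy_query = np.array([3.0, 2490.0])

# Unnormalized: the sqft column dominates the Euclidean distance entirely,
# so the 8-bedroom house looks "closest" to the 3-bedroom query.
raw = np.sqrt(np.sum((toy_train - toy_query)**2, axis=1))

# Normalize columns by the *training* 2-norms, and scale the query the same way.
norms = np.linalg.norm(toy_train, axis=0)
train_n = toy_train / norms
query_n = toy_query / norms
scaled = np.sqrt(np.sum((train_n - query_n)**2, axis=1))

print(np.argmin(raw))     # closest house before normalization
print(np.argmin(scaled))  # closest house after normalization
```

On these numbers the nearest neighbor flips from house 1 to house 0 once the columns are put on a comparable scale, which is exactly why the assignment divides every set by the training norms.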
Compute a single distance To start, let's just explore computing the "distance" between two given houses. We will take our query house to be the first house of the test set and look at the distance between this house and the 10th house of the training set. To see the features associated with the query house, print the...
print features_test[0]
Now print the 10th row (index 9) of the training feature matrix. Again, you get an 18-dimensional vector with components between 0 and 1.
print features_train[9]
QUIZ QUESTION What is the Euclidean distance between the query house and the 10th house of the training set? Note: Do not use the np.linalg.norm function; use np.sqrt, np.sum, and the power operator (**) instead. The latter approach is more easily adapted to computing multiple distances at once. Slide 16 Euclidean ...
euclidean_distance = np.sqrt(np.sum((features_train[9] - features_test[0])**2)) print euclidean_distance
Compute multiple distances Of course, to do nearest neighbor regression, we need to compute the distance between our query house and all houses in the training set. To visualize this nearest-neighbor search, let's first compute the distance from our query house (features_test[0]) to the first 10 houses of the trainin...
dist_dict = {}
for i in range(0, 10):
    dist_dict[i] = np.sqrt(np.sum((features_train[i] - features_test[0])**2))
    print (i, np.sqrt(np.sum((features_train[i] - features_test[0])**2)))
QUIZ QUESTION Among the first 10 training houses, which house is the closest to the query house?
print min(dist_dict.items(), key=lambda x: x[1])
It is computationally inefficient to loop over computing distances to all houses in our training dataset. Fortunately, many of the Numpy functions can be vectorized, applying the same operation over multiple values or vectors. We now walk through this process. Consider the following loop that computes the element-wise...
for i in xrange(3):
    print features_train[i] - features_test[0]  # should print 3 vectors of length 18
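The broadcasting behavior relied on here can be sketched on a small stand-in matrix (arbitrary numbers, not the housing features): subtracting a 1-D vector from a 2-D array subtracts it from every row, matching the explicit loop exactly.

```python
import numpy as np

A = np.arange(12, dtype=float).reshape(3, 4)  # stand-in for features_train[0:3]
q = np.array([1.0, 0.0, 2.0, 0.0])           # stand-in for features_test[0]

# Loop version: subtract q from each row individually.
looped = np.array([A[i] - q for i in range(3)])

# Vectorized version: broadcasting stretches q across all 3 rows.
vectorized = A - q

assert np.allclose(looped, vectorized)
print(vectorized.shape)  # (3, 4): one difference vector per row
```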
The subtraction operator (-) in Numpy is vectorized as follows:
print features_train[0:3] - features_test[0]
Note that the output of this vectorized operation is identical to that of the loop above, which can be verified below:
# verify that vectorization works
results = features_train[0:3] - features_test[0]
print results[0] - (features_train[0]-features_test[0])  # should print all 0's if results[0] == (features_train[0]-features_test[0])
print results[1] - (features_train[1]-features_test[0])  # should print all 0's if results[1] == (feature...
Aside: it is a good idea to write tests like this cell whenever you are vectorizing a complicated operation. Perform 1-nearest neighbor regression Now that we have the element-wise differences, it is not too hard to compute the Euclidean distances between our query house and all of the training houses. First, write a s...
diff = features_train - features_test[0]
To test the code above, run the following cell, which should output a value -0.0934339605842:
print diff[-1].sum()  # sum of the feature differences between the query and the last training house; should print -0.0934339605842
The next step in computing the Euclidean distances is to take these feature-by-feature differences in diff, square each, and take the sum over feature indices. That is, compute the sum of square feature differences for each training house (row in diff). By default, np.sum sums up everything in the matrix and returns a...
print np.sum(diff**2, axis=1)[15]  # take sum of squares across each row, and print the 16th sum
print np.sum(diff[15]**2)  # print the sum of squares for the 16th row -- should be same as above
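As a quick sanity check of the axis argument (on a toy matrix, not the housing data): no axis collapses the whole matrix to one scalar, axis=1 gives one sum per row, and axis=0 gives one sum per column.

```python
import numpy as np

D = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])  # stand-in for diff: 3 houses x 2 features

print(np.sum(D**2))          # no axis: one scalar over the whole matrix -> 91.0
print(np.sum(D**2, axis=1))  # axis=1: one sum of squares per row -> [ 5. 25. 61.]
print(np.sum(D**2, axis=0))  # axis=0: one sum per column -> [35. 56.]
```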
With this result in mind, write a single-line expression to compute the Euclidean distances between the query house and all houses in the training set. Assign the result to a variable distances. Hint: Do not forget to take the square root of the sum of squares.
distances = np.sqrt(np.sum(diff**2, axis=1))
To test the code above, run the following cell, which should output a value 0.0237082324496:
print distances[100]  # Euclidean distance between the query house and the 101st training house; should print 0.0237082324496
Now you are ready to write a function that computes the distances from a query house to all training houses. The function should take two parameters: (i) the matrix of training features and (ii) the single feature vector associated with the query.
def compute_distances(train_matrix, query_vector):
    diff = train_matrix - query_vector
    distances = np.sqrt(np.sum(diff**2, axis=1))
    return distances
QUIZ QUESTIONS Take the query house to be the third house of the test set (features_test[2]). What is the index of the house in the training set that is closest to this query house? What is the predicted value of the query house based on 1-nearest neighbor regression?
third_house_distance = compute_distances(features_train, features_test[2])
print third_house_distance.argsort()[:1], min(third_house_distance)
print third_house_distance[382]
print np.argsort(third_house_distance, axis=0)[:4]
print output_train[382]
Perform k-nearest neighbor regression For k-nearest neighbors, we need to find a set of k houses in the training set closest to a given query house. We then make predictions based on these k nearest neighbors. Fetch k-nearest neighbors Using the functions above, implement a function that takes in * the value of k; * ...
def compute_k_nearest_neighbors(k, features_matrix, feature_vector):
    distances = compute_distances(features_matrix, feature_vector)
    return np.argsort(distances, axis=0)[:k]
QUIZ QUESTION Take the query house to be the third house of the test set (features_test[2]). What are the indices of the 4 training houses closest to the query house?
print compute_k_nearest_neighbors(4, features_train, features_test[2])
Make a single prediction by averaging k nearest neighbor outputs Now that we know how to find the k-nearest neighbors, write a function that predicts the value of a given query house. For simplicity, take the average of the prices of the k nearest neighbors in the training set. The function should have the following pa...
def compute_distances_k_avg(k, features_matrix, output_values, feature_vector):
    k_neighbors = compute_k_nearest_neighbors(k, features_matrix, feature_vector)
    avg_value = np.mean(output_values[k_neighbors])
    return avg_value
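Putting the pieces together, the whole pipeline can be sketched end to end on a tiny hypothetical training set (one feature, made-up prices): compute distances, argsort for the k nearest indices, then average the corresponding outputs.

```python
import numpy as np

# Toy training data (hypothetical, 1 feature) and outputs (prices).
X_train = np.array([[0.0], [1.0], [2.0], [10.0]])
y_train = np.array([100.0, 110.0, 130.0, 500.0])
query = np.array([1.2])

# Distances from the query to every training row.
dist = np.sqrt(np.sum((X_train - query)**2, axis=1))

# Indices of the k nearest neighbors, then the mean of their outputs.
k = 3
nn = np.argsort(dist)[:k]
prediction = np.mean(y_train[nn])
print(nn, prediction)
```

The distant 500.0 house is excluded by the argsort step, so the prediction stays near the three cheap neighbors rather than being dragged upward.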
QUIZ QUESTION Again taking the query house to be the third house of the test set (features_test[2]), predict the value of the query house using k-nearest neighbors with k=4 and the simple averaging method described and implemented above.
print compute_distances_k_avg(4, features_train, output_train, features_test[2])
Compare this predicted value using 4-nearest neighbors to the predicted value using 1-nearest neighbor computed earlier. Make multiple predictions Write a function to predict the value of each and every house in a query set. (The query set can be any subset of the dataset, be it the test set or validation set.) The ide...
print features_test[0:10].shape[0]

def compute_distances_k_all(k, features_matrix, output_values, feature_vector):
    num_of_rows = feature_vector.shape[0]
    predicted_values = []
    for i in xrange(num_of_rows):
        # use the passed-in arguments (not the training/test globals), so this also works on the validation set
        avg_value = compute_distances_k_avg(k, features_matrix, output_values, feature_vector[i])
        ...
QUIZ QUESTION Make predictions for the first 10 houses in the test set using k-nearest neighbors with k=10. What is the index of the house in this query set that has the lowest predicted value? What is the predicted value of this house?
predicted_values = compute_distances_k_all(10, features_train, output_train, features_test[0:10])
print predicted_values
print predicted_values.index(min(predicted_values))
print min(predicted_values)
Choosing the best value of k using a validation set There remains a question of choosing the value of k to use in making predictions. Here, we use a validation set to choose this value. Write a loop that does the following: For k in [1, 2, ..., 15]: Makes predictions for each house in the VALIDATION set using the k-ne...
rss_all = []
for k in range(1, 16):
    predict_value = compute_distances_k_all(k, features_train, output_train, features_valid)
    residual = (output_valid - predict_value)
    rss = sum(residual**2)
    rss_all.append(rss)
print rss_all
print rss_all.index(min(rss_all))
To visualize the performance as a function of k, plot the RSS on the VALIDATION set for each considered k value:
import matplotlib.pyplot as plt
%matplotlib inline

kvals = range(1, 16)
plt.plot(kvals, rss_all, 'bo-')
QUIZ QUESTION What is the RSS on the TEST data using the value of k found above? To be clear, sum over all houses in the TEST set.
predict_value = compute_distances_k_all(14, features_train, output_train, features_test)
residual = (output_test - predict_value)
rss = sum(residual**2)
print rss
Identifying Hardware Changes <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://quantumai.google/cirq/tutorials/google/identifying_hardware_changes"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a> </td> <td> ...
try:
    import cirq
except ImportError:
    !pip install --quiet cirq --pre

import matplotlib.pyplot as plt
import networkx as nx
import numpy as np

import cirq
import cirq_google as cg
docs/tutorials/google/identifying_hardware_changes.ipynb
quantumlib/Cirq
apache-2.0
Next, authorize to use the Quantum Computing Service with a project_id and processor_id, and get a sampler to run your experiments. Set the number of repetitions you'll use for all experiments. Note: You can select a subset of the qubits to shorten the runtime of the experiment. Note: You need to input a real QCS proje...
from cirq_google.engine.qcs_notebook import get_qcs_objects_for_notebook

# Set key variables
project_id = "your_project_id_here"  #@param {type:"string"}
processor_id = "your_processor_id_here"  #@param {type:"string"}
repetitions = 2000  #@param {type:"integer"}

# Get device sampler
qcs_objects = get_qcs_objects_for_no...
Maintenance Calibration Data Query for the calibration data with cirq_google.get_engine_calibration, select the two metrics by name from the calibration object, and visualize them with its plot() method.
# Retrieve maintenance calibration data.
calibration = cg.get_engine_calibration(processor_id=processor_id)

# Heatmap the two metrics.
two_qubit_gate_metric = "two_qubit_parallel_sqrt_iswap_gate_xeb_pauli_error_per_cycle"  #@param {type:"string"}
readout_metric = "parallel_p11_error"  #@param {type:"string"}

# Plot hea...
You may have already seen this existing maintenance calibration data when you did qubit selection in the first place. Next, you'll run device characterization experiments to collect the same data metrics from the device, to see if their values have changed since the previous calibration. Current Two-Qubit Metric Data w...
"""Setup for parallel XEB experiment.""" from cirq.experiments import random_quantum_circuit_generation as rqcg from itertools import combinations random_seed = 52 # Generate library of two-qubit XEB circuits. circuit_library = rqcg.generate_library_of_2q_circuits( n_library_circuits=20, two_qubit_gate=cirq....
Then, run the circuits on the device, combining them into larger circuits and truncating the circuits by length, with cirq.experiments.xeb_sampling.sample_2q_xeb_circuits. Afterwards, run the same circuits on a perfect simulator, and compare them to the sampled results. Finally, fit the collected data to an exponentia...
"""Collect all data by executing circuits.""" from cirq.experiments.xeb_sampling import sample_2q_xeb_circuits from cirq.experiments.xeb_fitting import benchmark_2q_xeb_fidelities, fit_exponential_decays # Run XEB circuits on the processor. sampled_df = sample_2q_xeb_circuits( sampler=sampler, circuits=circuit...
Note: The parallel XEB errors are scaled in pxeb_results. This is because the collected fidelities are the estimated depolarization fidelities, not the Pauli error metrics available from the calibration data. See the XEB Theory tutorial for an explanation why, and Calibration Metrics for more information on the differe...
# Run experiment
sq_result = cirq.estimate_parallel_single_qubit_readout_errors(sampler, qubits=qubits, repetitions=repetitions)

# Use P11 errors
p11_results = sq_result.one_state_errors
Heatmap Comparisons For each metric, plot the calibration and collected characterization data side by side, on the same scale. Also plot the difference between the two datasets (on a different scale). Two-Qubit Metric Heatmap Comparison
from matplotlib.colors import LogNorm

# Plot options. You may need to change these if your data shows a lot of the same colors.
vmin = 5e-3
vmax = 3e-2
options = {"norm": LogNorm()}
format = "0.3f"
fig, (ax1, ax2, ax3) = plt.subplots(ncols=3, figsize=(30, 9))

# Calibration two qubit data
calibration.heatmap(two_qubit_g...
The large numbers of zero and below values (green and darker colors) in the difference heatmap indicate that the device's two-qubit $\sqrt{i\mathrm{SWAP}}$ gates have improved noticeably across the device. In fact, only a couple qubit pairs towards the bottom of the device have worsened since the previous calibration. ...
# Plot options, with different vmin and vmax for readout data.
vmin = 3e-2
vmax = 1.1e-1
options = {"norm": LogNorm()}
format = "0.3f"
fig, (ax1, ax2, ax3) = plt.subplots(ncols=3, figsize=(30, 9))

# Calibration readout data
calibration.heatmap(readout_metric).plot(
    ax=ax1, title="Calibration", vmin=vmin, vmax=vmax, ...
The August birthday problem This article: Attention Deficit–Hyperactivity Disorder and Month of School Enrollment Finds: The rate of claims-based ADHD diagnosis among children in states with a September 1 cutoff was 85.1 per 10,000 children (309 cases among 36,319 children; 95% confidence interval [CI], 75.6 to 94.2) ...
totals = np.array([32690, 31238, 34405, 34565, 34977, 34415, 36577, 36319, 35353, 34405, 31285, 31617])
diagnosed = np.array([265, 280, 307, 312, 317, 287, 320, 309, 225, 240, 232, 243])
examples/august_soln.ipynb
AllenDowney/ThinkBayes2
mit
I'll roll the data so September comes first.
totals = np.roll(totals, -8)
diagnosed = np.roll(diagnosed, -8)
Here are the diagnosis rates, which we can check against the rates in the table.
rates = diagnosed / totals * 10000
np.round(rates, 1)
Here's what the rates look like as a function of months after the September cutoff.
xs = np.arange(12)
thinkplot.plot(xs, rates)
thinkplot.decorate(xlabel='Months after cutoff', ylabel='Diagnosis rate per 10,000')
For the first 9 months, from September to May, we see what we would expect if at least some of the excess diagnoses are due to behavioral differences due to age. For each month of difference in age, we see an increase in the number of diagnoses. This pattern breaks down for the last three months, June, July, and Augus...
import scipy.stats

pcount = 1
res = []
for (x, d, t) in zip(xs, diagnosed, totals):
    a = d + pcount
    b = t - d + pcount
    ci = scipy.stats.beta(a, b).ppf([0.025, 0.975])
    res.append(ci * 10000)
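The interval computed above follows from conjugacy: with a uniform Beta(1, 1) prior (that is what pcount = 1 adds) and d diagnoses out of t children, the posterior for the diagnosis probability is Beta(d + 1, t - d + 1), and ppf gives the equal-tailed 95% interval. A minimal sketch with hypothetical round counts:

```python
import numpy as np
from scipy.stats import beta

# One month's counts (hypothetical round numbers, same shape as the data above).
d, t = 300, 35000

# Uniform Beta(1, 1) prior + binomial likelihood -> Beta(d + 1, t - d + 1) posterior.
a, b = d + 1, t - d + 1
low, high = beta(a, b).ppf([0.025, 0.975]) * 10000  # rate per 10,000

print(low, high)  # interval brackets the raw rate d/t * 10000
```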
By transposing the results, we can get them into two arrays for plotting.
low, high = np.transpose(res)
low
high
Here's what the plot looks like with error bars.
import matplotlib.pyplot as plt

def errorbar(xs, low, high, **options):
    for x, l, h in zip(xs, low, high):
        plt.vlines(x, l, h, **options)

errorbar(xs, low, high, color='gray', alpha=0.7)
thinkplot.plot(xs, rates)
thinkplot.decorate(xlabel='Months after cutoff', ylabel='Diagnosis rate pe...
It seems like the lower rates in the last 3 months are unlikely to be due to random variation, so it might be good to investigate the effect of "red shirting". But for now I will proceed with a linear logistic model. The following table shows log odds of diagnosis for each month, which I will use to lay out a grid for...
from scipy.special import expit, logit

for (x, d, t) in zip(xs, diagnosed, totals):
    print(x, logit(d/t))
Here's a Suite that estimates the parameters of a logistic regression model, b0 and b1.
class August(Suite, Joint):
    def Likelihood(self, data, hypo):
        x, d, t = data
        b0, b1 = hypo
        p = expit(b0 + b1 * x)
        like = scipy.stats.binom.pmf(d, t, p)
        return like
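For readers without the thinkbayes2 Suite and Joint classes, the same grid-approximation idea can be sketched with plain NumPy and SciPy (synthetic counts generated from the model itself; variable names are illustrative): accumulate the binomial log-likelihood over months on a (b0, b1) grid, exponentiate, and normalize.

```python
import numpy as np
from scipy.special import expit
from scipy.stats import binom

# Synthetic months: counts drawn exactly from a logistic model with
# b0 = -4.9, b1 = 0.02, so the posterior should recover those values.
xs_demo = np.arange(12)
totals_demo = np.full(12, 34000)
diagnosed_demo = np.round(34000 * expit(-4.9 + 0.02 * xs_demo)).astype(int)

# Uniform prior over a grid of (b0, b1) hypotheses.
b0s = np.linspace(-5.1, -4.7, 41)
b1s = np.linspace(-0.05, 0.05, 41)
B0, B1 = np.meshgrid(b0s, b1s, indexing='ij')

# Sum binomial log-likelihoods across months (the "Update" step).
log_post = np.zeros_like(B0)
for x, d, t in zip(xs_demo, diagnosed_demo, totals_demo):
    p = expit(B0 + B1 * x)
    log_post += binom.logpmf(d, t, p)

# Normalize on the grid, then take posterior means of the parameters.
post = np.exp(log_post - log_post.max())
post /= post.sum()
b0_hat = np.sum(post * B0)
b1_hat = np.sum(post * B1)
print(b0_hat, b1_hat)  # close to -4.9 and 0.02
```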
The prior distributions are uniform over a grid that covers the most likely values.
from itertools import product

b0 = np.linspace(-4.75, -5.1, 101)
b1 = np.linspace(-0.05, 0.05, 101)
hypos = product(b0, b1)
suite = August(hypos);
Here's the update.
for data in zip(xs, diagnosed, totals):
    suite.Update(data)
Here's the posterior marginal distribution for b0.
pmf0 = suite.Marginal(0)
b0 = pmf0.Mean()
print(b0)

thinkplot.Pdf(pmf0)
thinkplot.decorate(title='Posterior marginal distribution',
                   xlabel='Intercept log odds (b0)',
                   ylabel='Pdf')
And the posterior marginal distribution for b1.
pmf1 = suite.Marginal(1)
b1 = pmf1.Mean()
print(b1)

thinkplot.Pdf(pmf1)
thinkplot.decorate(title='Posterior marginal distribution',
                   xlabel='Slope log odds (b1)',
                   ylabel='Pdf')
Let's see what the posterior regression lines look like, superimposed on the data.
for i in range(100):
    b0, b1 = suite.Random()
    ys = expit(b0 + b1 * xs) * 10000
    thinkplot.plot(xs, ys, color='green', alpha=0.01)

errorbar(xs, low, high, color='gray', alpha=0.7)
thinkplot.plot(xs, rates)
thinkplot.decorate(xlabel='Months after cutoff', ylabel='Diagnosis rate per 10,0...
Most of these regression lines fall within the credible intervals of the observed rates, so in that sense it seems like this model is not ruled out by the data. But it is clear that the lower rates in the last 3 months bring down the estimated slope, so we should probably treat the estimated effect size as a lower boun...
def posterior_predictive(x):
    pmf = Pmf()
    for (b0, b1), p in suite.Items():
        base = expit(b0 + b1 * x) * 10000
        pmf[base] += p
    return pmf
Here are posterior predictive CDFs for diagnosis rates.
pmf0 = posterior_predictive(0)
thinkplot.Cdf(pmf0.MakeCdf(), label='September')

pmf1 = posterior_predictive(11)
thinkplot.Cdf(pmf1.MakeCdf(), label='August')

thinkplot.decorate(title='Posterior predictive distribution',
                   xlabel='Diagnosis rate per 10,000',
                   ylabel='CDF')
pmf0.Mean...
And we can compute the posterior predictive distribution for the difference.
def posterior_predictive_diff():
    pmf = Pmf()
    for (b0, b1), p in suite.Items():
        p0 = expit(b0) * 10000
        p1 = expit(b0 + b1 * 11) * 10000
        diff = p1 - p0
        pmf[diff] += p
    return pmf

pmf_diff = posterior_predictive_diff()
thinkplot.Cdf(pmf_diff.MakeCdf())
thinkplot.d...
To summarize, we can compute the mean and 95% credible interval for this difference.
pmf_diff.Mean()
pmf_diff.CredibleInterval(95)
A difference of 21 diagnoses, on a base rate of 71 diagnoses, is an increase of 30% (18%, 42%).
pmf_diff.Mean() / pmf0.Mean()
pmf_diff.CredibleInterval(95) / pmf0.Mean()
A tutorial series on how to use and develop a WeChat chat-bot: A workshop to develop & use an intelligent and interactive chat-bot in WeChat. WeChat is a popular social media app, which has more than 800 million monthly active users. <img src='http://www.kudosdata.com/wp-content/uploads/2016/11/cropped-KudosLogo1.png' width=30% style="float: right;"> <img src='r...
# parm_runtime_env_GCP = True
parm_runtime_env_GCP = False
wechat_tool/lesson_4.ipynb
telescopeuser/workshop_blog
mit
Using Google Cloud Platform's Machine Learning APIs From the same API console, choose "Dashboard" on the left-hand menu and "Enable API". Enable the following APIs for your project (search for them) if they are not already enabled: <ol> <li> Google Translate API </li> <li> Google Cloud Vision API </li> <li> Google Natu...
# Copyright 2016 Google Inc.
# Licensed under the Apache License, Version 2.0 (the "License");

# import subprocess
# retcode = subprocess.call(['pip', 'install', '-U', 'google-api-python-client'])
# retcode = subprocess.call(['pip', 'install', '-U', 'gTTS'])

# Below is for GCP only: install audio conversion tool
# re...
Import the libraries we will need:
import io, os, subprocess, sys, re, codecs, time, datetime, requests, itchat
from itchat.content import *
from googleapiclient.discovery import build
GCP Machine Learning API Key First, visit <a href="http://console.cloud.google.com/apis">API console</a>, choose "Credentials" on the left-hand menu. Choose "Create Credentials" and generate an API key for your application. You should probably restrict it by IP address to prevent abuse, but for now, just leave that f...
# Here I read in my own API_KEY from a file, which is not shared in the Github repository:
with io.open('../../API_KEY.txt') as fp:
    for line in fp:
        APIKEY = line

# You need to un-comment the line below and replace the 'APIKEY' variable with your own GCP API key:
# APIKEY='AIzaSyCvxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'

# Below is...
Multimedia binary-to-base64 conversion (Define media pre-processing functions)
# Import the base64 encoding library.
import base64

# Pass the image data to an encoding function.
def encode_image(image_file):
    with io.open(image_file, "rb") as image_file:
        image_content = image_file.read()
    # Python 2
    if sys.version_info[0] < 3:
        return base64.b64encode(image_content)
    # Python ...
Machine-intelligence API control parameters (Define control parameters for API)
# API control parameter for Image API:
parm_image_maxResults = 10  # max objects or faces to be extracted from image analysis

# API control parameter for Language Translation API:
parm_translation_origin_language = 'zh'  # original language in text: to be overwritten by TEXT_DETECTION
parm_translation_target_language = '...
Define a small helper that calls the natural-language-processing API
# Running Speech API
def KudosData_nlp(text, extractDocumentSentiment, extractEntities, extractEntitySentiment, extractSyntax):
    # Python 2
    # if sys.version_info[0] < 3:
    #     tts = gTTS(text=text2voice.encode('utf-8'), lang=parm_speech_synthesis_language, slow=False)
    # Python 3
    # else:
    #     t...
< Start of interactive demo >
text4nlp = 'As a data science consultant and trainer with Kudos Data, Zhan GU (Sam) engages communities and schools ' \
           'to help organizations making sense of their data using advanced data science , machine learning and ' \
           'cloud computing technologies. Inspire next generation of artificial inte...
* Named-entity recognition in message text (Name-Entity detection)
# print(responses['entities'])
for i in range(len(responses['entities'])):
    # print(responses['entities'][i])
    print('')
    print(u'[ 实体 {} : {} ]\n 实体类别 : {}\n 重要程度 : {}'.format(
        i+1
        , responses['entities'][i]['name']
        , responses['entities'][i]['type']
        , responses['entities...
* Sentence-level sentiment analysis of the message text (Sentiment analysis, sentence level)
# print(responses['sentences']) for i in range(len(responses['sentences'])): print('') print(u'[ 语句 {} : {} ]\n( 褒贬程度 : {} | 语彩累积 : {} )'.format( i+1 , responses['sentences'][i]['text']['content'] , responses['sentences'][i]['sentiment']['score'] , responses['sentences'][i]['s...
wechat_tool/lesson_4.ipynb
telescopeuser/workshop_blog
mit
https://cloud.google.com/natural-language/docs/basics The sentiment score (褒贬程度) ranges between -1.0 (negative) and 1.0 (positive) and corresponds to the overall emotional leaning of the text. The magnitude (语彩累积) indicates the overall strength of emotion (both positive and negative) within the given text, between 0.0 and ...
# print(responses['documentSentiment'])
print(u'[ 整篇消息 语种 : {} ]\n( 褒贬程度 : {} | 语彩累积 : {} )'.format(
    responses['language'],
    responses['documentSentiment']['score'],
    responses['documentSentiment']['magnitude']))
wechat_tool/lesson_4.ipynb
telescopeuser/workshop_blog
mit
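If you do not have API access, the same parsing logic can be exercised against a mock response of the same shape. The `sample_response` values below are invented for illustration, and `summarize_sentences` is a hypothetical helper:

```python
# Hypothetical response fragment, shaped like what the notebook iterates over.
sample_response = {
    "language": "en",
    "documentSentiment": {"score": 0.8, "magnitude": 1.9},
    "sentences": [
        {"text": {"content": "Great talk."},
         "sentiment": {"score": 0.9, "magnitude": 0.9}},
        {"text": {"content": "The room was cold."},
         "sentiment": {"score": -0.4, "magnitude": 0.4}},
    ],
}

def summarize_sentences(responses):
    # One line per sentence: its content plus sentiment score and magnitude.
    return [u"[ {} : {} ] score={} magnitude={}".format(
                i, s["text"]["content"],
                s["sentiment"]["score"], s["sentiment"]["magnitude"])
            for i, s in enumerate(responses["sentences"], start=1)]
```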
* Syntax / grammar analysis of the sentences
for i in range(len(responses['tokens'])):
    print('')
    print(responses['tokens'][i]['text']['content'])
    print(responses['tokens'][i]['partOfSpeech'])
    print(responses['tokens'][i]['dependencyEdge'])
    # print(responses['tokens'][i]['text'])
    # print(responses['tokens'][i]['lemma'])
wechat_tool/lesson_4.ipynb
telescopeuser/workshop_blog
mit
< End of interactive demo > Define a small helper that formats the NLP analysis results as a text message, to be used for the WeChat reply:
def KudosData_nlp_generate_reply(responses): nlp_reply = u'[ NLP 自然语言处理结果 ]' # 1. 整篇消息文字的情感分析 (Sentiment analysis, Document level) nlp_reply += '\n' nlp_reply += '\n' + u'[ 整篇消息 语种 : {} ]\n( 褒贬程度 : {} | 语彩累积 : {} )'.format( responses['language'] , responses['documentSentimen...
wechat_tool/lesson_4.ipynb
telescopeuser/workshop_blog
mit
Scan the QR-code image with the WeChat app to log in automatically
itchat.auto_login(hotReload=True) # hotReload=True: 退出程序后暂存登陆状态。即使程序关闭,一定时间内重新开启也可以不用重新扫码。 # Obtain my own Nick Name MySelf = itchat.search_friends() NickName4RegEx = '@' + MySelf['NickName'] + '\s*' # 单聊模式,自动进行自然语言分析,以文本形式返回处理结果: @itchat.msg_register([TEXT, MAP, CARD, NOTE, SHARING]) def text_reply(msg): tex...
wechat_tool/lesson_4.ipynb
telescopeuser/workshop_blog
mit
The IMDB comments dataset is stored at the following location:
!wc -l data/imdb-comments.json
SparkML - 04 Text_Analysis.ipynb
abulbasar/machine-learning
apache-2.0
There are 50,000 lines in the file. Next, let's check the size of the file.
!du -sh data/imdb-comments.json
SparkML - 04 Text_Analysis.ipynb
abulbasar/machine-learning
apache-2.0
The total size of the file is 66 MB. Let's view the first line.
!head -n 1 data/imdb-comments.json
SparkML - 04 Text_Analysis.ipynb
abulbasar/machine-learning
apache-2.0
Each line is a self-contained JSON doc. Load the dataset using the Spark reader, specifying the file format as json. As we saw above, the size of the file is 66 MB, so we should use at least 2 partitions; since I am using a dual-core system, I will repartition the data to 4. We will also cache the data after repartitioning.
imdb = spark.read.format("json").load("data/imdb-comments.json").repartition(4).cache()
SparkML - 04 Text_Analysis.ipynb
abulbasar/machine-learning
apache-2.0
Find the total number of records
imdb.count()
SparkML - 04 Text_Analysis.ipynb
abulbasar/machine-learning
apache-2.0
Print Schema and view the field types
imdb.printSchema()
SparkML - 04 Text_Analysis.ipynb
abulbasar/machine-learning
apache-2.0
Take a look at a few sample records
imdb.show()
SparkML - 04 Text_Analysis.ipynb
abulbasar/machine-learning
apache-2.0
label - indicates whether the record belongs to the training or the test bucket. sentiment - indicates whether the comment carries a positive or a negative sentiment; this column has been manually curated. Find out how many records there are for each combination of label and sentiment.
from pyspark.sql.functions import *
from pyspark.sql.types import *

imdb.groupBy("sentiment").pivot("label").count().show()
SparkML - 04 Text_Analysis.ipynb
abulbasar/machine-learning
apache-2.0
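The groupBy/pivot/count above can be sketched in plain Python for readers without a Spark session. The `rows` dicts and the `pivot_count` helper are illustrative, not part of the notebook:

```python
from collections import Counter

def pivot_count(rows):
    # Count records per (sentiment, label) pair -- the groupBy/pivot/count
    # idea in miniature, over a list of plain dicts.
    return Counter((r["sentiment"], r["label"]) for r in rows)
```

Usage on a tiny hand-made sample:

```python
rows = [{"sentiment": "pos", "label": "train"},
        {"sentiment": "pos", "label": "train"},
        {"sentiment": "neg", "label": "test"}]
counts = pivot_count(rows)
```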
Look at a sample comment value
content = imdb.sample(False, 0.001, 1).first().content
content
SparkML - 04 Text_Analysis.ipynb
abulbasar/machine-learning
apache-2.0
Register a UDF to strip the HTML tags from the comments. If BeautifulSoup is not installed, you can install it using pip (shell command): $ pip install BeautifulSoup4
from bs4 import BeautifulSoup
from pyspark.sql.types import *
import re

def remove_html_tags(text):
    text = BeautifulSoup(text, "html5lib").text.lower()  # remove html tags
    text = re.sub("[\W]+", " ", text)
    return text

spark.udf.register("remove_html_tags", remove_html_tags, StringType())
SparkML - 04 Text_Analysis.ipynb
abulbasar/machine-learning
apache-2.0
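If BeautifulSoup is unavailable, a rough regex-based variant does the same cleanup for illustration purposes. Note this is not robust HTML parsing, just a sketch; `remove_html_tags_simple` is a hypothetical name:

```python
import re

def remove_html_tags_simple(text):
    # Drop anything that looks like an HTML tag, lowercase,
    # then collapse runs of non-word characters to single spaces.
    text = re.sub(r"<[^>]+>", " ", text).lower()
    return re.sub(r"[\W]+", " ", text).strip()
```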
Test the remove_html_tags function
remove_html_tags(content)
SparkML - 04 Text_Analysis.ipynb
abulbasar/machine-learning
apache-2.0
Apply the UDF on the imdb dataframe.
imdb_clean = imdb.withColumn("content", expr("remove_html_tags(content)")).cache()
imdb_clean.sample(False, 0.001, 1).first().content
SparkML - 04 Text_Analysis.ipynb
abulbasar/machine-learning
apache-2.0
Use Tokenizer to split the string into terms. Then use StopWordsRemover to remove stop words such as prepositions, and apply CountVectorizer to find all distinct terms and the count of each term per document.
from pyspark.ml.feature import HashingTF, IDF, Tokenizer, CountVectorizer, StopWordsRemover tokenizer = Tokenizer(inputCol="content", outputCol="terms") terms_data = tokenizer.transform(imdb_clean) print(terms_data.sample(False, 0.001, 1).first().terms) remover = StopWordsRemover(inputCol="terms", outputCol="filtere...
SparkML - 04 Text_Analysis.ipynb
abulbasar/machine-learning
apache-2.0
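The Tokenizer -> StopWordsRemover -> CountVectorizer chain can be mimicked for a single document in plain Python. The tiny stop-word list here is invented for illustration (Spark ships its own default list):

```python
from collections import Counter

# Tiny illustrative stop-word list; Spark's default list is much longer.
STOP_WORDS = {"a", "an", "the", "is", "of", "and", "to", "in"}

def count_vectorize(doc):
    # Tokenize on whitespace, drop stop words, count the remaining terms.
    terms = doc.lower().split()
    filtered = [t for t in terms if t not in STOP_WORDS]
    return Counter(filtered)
```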
The count_vectorized DataFrame contains a column count_vectors, a SparseVector representing which terms appear and how many times. The indices refer to the list of all unique terms, which you can find in count_vectorizer_model.vocabulary. See below.
print(count_vectorizer_model.vocabulary[:100], "\n\nTotal no of terms", len(count_vectorizer_model.vocabulary))
count_vectorized.show()
SparkML - 04 Text_Analysis.ipynb
abulbasar/machine-learning
apache-2.0
The SparseVector has length 103999, which means the dataset (corpus) contains 103999 unique terms. Per document, only a few will be present. Find the density of each count_vectors entry.
vocab_len = len(count_vectorizer_model.vocabulary)
spark.udf.register("density", lambda r: r.numNonzeros() / vocab_len, DoubleType())
count_vectorized.select(expr("density(count_vectors) density")).show()
SparkML - 04 Text_Analysis.ipynb
abulbasar/machine-learning
apache-2.0
The density report shows that count_vectors has very low density, which illustrates the benefit of choosing SparseVector for this column. Now, calculate the TF-IDF for the documents.
idf = IDF(inputCol="count_vectors", outputCol="features")
idf_model = idf.fit(count_vectorized)
idf_data = idf_model.transform(count_vectorized)
idf_data.sample(False, 0.001, 1).first().features
idf_data.printSchema()
SparkML - 04 Text_Analysis.ipynb
abulbasar/machine-learning
apache-2.0
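To make the IDF step concrete, here is a plain-Python TF-IDF sketch using the smoothed IDF formula idf(t) = log((numDocs + 1) / (docFreq(t) + 1)), which is the formula I believe Spark ML's IDF uses; treat that as an assumption and verify against the Spark docs:

```python
import math
from collections import Counter

def tfidf(docs):
    # Term frequencies per document, document frequencies per term,
    # then tf * smoothed-idf per (document, term).
    n = len(docs)
    counts = [Counter(d.lower().split()) for d in docs]
    df = Counter()
    for c in counts:
        df.update(c.keys())
    return [{t: tf * math.log((n + 1) / (df[t] + 1)) for t, tf in c.items()}
            for c in counts]
```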
Apply StringIndexer to convert the sentiment column from String type to number type - this is a prerequisite for applying the LogisticRegression algorithm.
from pyspark.ml.feature import StringIndexer

string_indexer = StringIndexer(inputCol="sentiment", outputCol="sentiment_idx")
string_indexer_model = string_indexer.fit(idf_data)
label_encoded = string_indexer_model.transform(idf_data)
label_encoded.select("sentiment", "sentiment_idx").show()
SparkML - 04 Text_Analysis.ipynb
abulbasar/machine-learning
apache-2.0
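StringIndexer's behavior, most frequent label mapped to index 0.0, can be sketched in plain Python. Tie-breaking by label here is an arbitrary choice for determinism, not necessarily Spark's exact rule:

```python
from collections import Counter

def string_index(labels):
    # StringIndexer-style encoding: order labels by descending frequency
    # (ties broken alphabetically here) and assign float indices from 0.0.
    order = [l for l, _ in sorted(Counter(labels).items(),
                                  key=lambda kv: (-kv[1], kv[0]))]
    mapping = {l: float(i) for i, l in enumerate(order)}
    return [mapping[l] for l in labels], mapping
```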
Split the data into training and testing sets with a 70/30 ratio. Cache the dataframes so that training runs faster.
training, testing = label_encoded.randomSplit(weights=[0.7, 0.3], seed=1)
training.cache()
testing.cache()
SparkML - 04 Text_Analysis.ipynb
abulbasar/machine-learning
apache-2.0
Verify that the StringIndexer has done the expected job and that the training and testing data maintain the same ratio of positive and negative records as the whole dataset.
training.groupBy("sentiment_idx", "sentiment").count().show()
testing.groupBy("sentiment_idx", "sentiment").count().show()
SparkML - 04 Text_Analysis.ipynb
abulbasar/machine-learning
apache-2.0
Apply LogisticRegression classifier
from pyspark.ml.classification import LogisticRegression

lr = LogisticRegression(maxIter=10000, regParam=0.1, elasticNetParam=0.0,
                        featuresCol="features", labelCol="sentiment_idx")
SparkML - 04 Text_Analysis.ipynb
abulbasar/machine-learning
apache-2.0
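The decision function behind logistic regression is a sigmoid of a linear score. A minimal sketch (the `predict_proba` name is invented here, not Spark's API):

```python
import math

def predict_proba(x, coefficients, intercept):
    # P(y = 1 | x) = sigmoid(intercept + coefficients . x)
    z = intercept + sum(c * xi for c, xi in zip(coefficients, x))
    return 1.0 / (1.0 + math.exp(-z))
```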
Show the parameters that the LogisticRegression classifier takes.
print(lr.explainParams())
lr_model = lr.fit(training)
lr_model.coefficients[:100]
SparkML - 04 Text_Analysis.ipynb
abulbasar/machine-learning
apache-2.0
From the training summary find out the cost decay of the model.
training_summary = lr_model.summary

import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

pd.Series(training_summary.objectiveHistory).plot()
plt.xlabel("Iteration")
plt.ylabel("Cost")
SparkML - 04 Text_Analysis.ipynb
abulbasar/machine-learning
apache-2.0
Find the area under the ROC curve; closer to 1 is better.
training_summary.areaUnderROC
predictions = lr_model.transform(testing).withColumn("match", expr("prediction == sentiment_idx"))
predictions.select("prediction", "sentiment_idx", "sentiment", "match").sample(False, 0.01).show(10)
predictions.groupBy("sentiment_idx").pivot("prediction").count().show()
SparkML - 04 Text_Analysis.ipynb
abulbasar/machine-learning
apache-2.0
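areaUnderROC has an intuitive rank interpretation: the probability that a randomly chosen positive example is scored above a randomly chosen negative one, with ties counting 0.5. A brute-force sketch (quadratic, for illustration only; the `auc` helper is hypothetical):

```python
def auc(scores, labels):
    # Fraction of (positive, negative) pairs where the positive wins,
    # counting ties as half a win.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```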
Find the accuracy of the prediction
accuracy = predictions.select(expr("sum(cast(match as int))")).first()[0] / predictions.count()
accuracy
SparkML - 04 Text_Analysis.ipynb
abulbasar/machine-learning
apache-2.0
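The cast-and-sum accuracy trick above reduces to a one-liner in plain Python, shown here as a small sketch over parallel prediction/label lists:

```python
def accuracy(predictions, labels):
    # Fraction of positions where prediction equals the true label.
    matches = sum(int(p == y) for p, y in zip(predictions, labels))
    return matches / len(labels)
```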
0. References

In general, only standard package documentation has been used throughout. A couple of one-liners adapted from Stack Overflow answers are noted in the code where used.

1. Statistical Test

1.1 Which statistical test did you use to analyze the NYC subway data? Did you use a one-tail or a two-tail P value? What is the...
W, p = sps.shapiro(df.ENTRIESn_hourly.tolist()) print 'Probability that data is the realisation of a gaussian random variable: {:.3f}'.format(p) plt.figure(figsize=[8,5]) sns.distplot(df.ENTRIESn_hourly.tolist(), bins=np.arange(0,10001,500), kde=False) plt.xlim(0,10000) plt.yticks(np.arange(0,16001,4000)) plt.title('H...
intro_to_DS_assignment/Assignment 1 Submission Notebook.ipynb
chris-jd/udacity
mit
1.3 What results did you get from this statistical test? These should include the following numerical values: p-values, as well as the means for each of the two samples under test.
raindata = np.array(df[df.rain==1].ENTRIESn_hourly.tolist()) noraindata = np.array(df[df.rain==0].ENTRIESn_hourly.tolist()) U, p = sps.mannwhitneyu(raindata, noraindata) print 'Results' print '-------' print 'p-value: {:.2f}'.format(p) # Note that p value calculated by scipy is single-tailed print 'Mean with ...
intro_to_DS_assignment/Assignment 1 Submission Notebook.ipynb
chris-jd/udacity
mit
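The Mann-Whitney U statistic computed by scipy above can be sketched by brute force: count the (x, y) pairs in which x exceeds y, scoring ties as 0.5. This returns U for sample x only and does none of scipy's tie-corrected p-value work; it is purely illustrative:

```python
def mann_whitney_u(x, y):
    # Brute-force U statistic for sample x: number of (x_i, y_j) pairs
    # with x_i > y_j, plus half a point per tie. O(len(x) * len(y)).
    return sum(1.0 if a > b else 0.5 if a == b else 0.0
               for a in x for b in y)
```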
1.4 What is the significance and interpretation of these results?

Given that the p-value < 0.05, we can reject the null hypothesis that the average ridership is not greater when it is raining; hence we can accept the alternative hypothesis that the average ridership is greater when it rains.

2. Linear Regression
# Because the hour '0' is actually the entries from 20:00 to 24:00, it makes more sense to label it 24 when plotting data df.datetime -= timedelta(seconds=1) df['day']= df.datetime.apply(lambda x: x.day) df['hour'] = df.datetime.apply(lambda x: x.hour+1) df['weekday'] = df.datetime.apply(lambda x: not bool(x.weekday()/...
intro_to_DS_assignment/Assignment 1 Submission Notebook.ipynb
chris-jd/udacity
mit