Trigonometric functions Q1. Calculate sine, cosine, and tangent of x, element-wise.
import numpy as np

x = np.array([0., 1., 30, 90])
print("sine:", np.sin(x))
print("cosine:", np.cos(x))
print("tangent:", np.tan(x))
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Q2. Calculate inverse sine, inverse cosine, and inverse tangent of x, element-wise.
x = np.array([-1., 0, 1.])
print("inverse sine:", np.arcsin(x))
print("inverse cosine:", np.arccos(x))
print("inverse tangent:", np.arctan(x))
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Q3. Convert angles from radians to degrees.
x = np.array([-np.pi, -np.pi/2, np.pi/2, np.pi])
out1 = np.degrees(x)
out2 = np.rad2deg(x)
assert np.array_equiv(out1, out2)
print(out1)
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Q4. Convert angles from degrees to radians.
x = np.array([-180., -90., 90., 180.])
out1 = np.radians(x)
out2 = np.deg2rad(x)
assert np.array_equiv(out1, out2)
print(out1)
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Hyperbolic functions Q5. Calculate hyperbolic sine, hyperbolic cosine, and hyperbolic tangent of x, element-wise.
x = np.array([-1., 0, 1.])
print(np.sinh(x))
print(np.cosh(x))
print(np.tanh(x))
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Rounding Q6. Predict the results of these, paying attention to the differences among this family of functions.
x = np.array([2.1, 1.5, 2.5, 2.9, -2.1, -2.5, -2.9])
out1 = np.around(x)  # rounds halves to even
out2 = np.floor(x)   # toward -inf
out3 = np.ceil(x)    # toward +inf
out4 = np.trunc(x)   # toward zero
out5 = [round(elem) for elem in x]  # Python 3's round also rounds halves to even
print(out1)
print(out2)
print(out3)
print(out4)
print(out5)
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Q7. Implement out5 in the above question using numpy.
# Rounds halves away from zero, matching Python 2's round; note that Python 3's
# built-in round (and np.around) rounds halves to even instead.
print(np.floor(np.abs(x) + 0.5) * np.sign(x))
# Read http://numpy-discussion.10968.n7.nabble.com/why-numpy-round-get-a-different-result-from-python-round-function-td19098.html
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
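The key distinction among the rounding family in Q6/Q7 is what happens at halves and at negative values: np.around rounds halves to even, while floor, ceil, and trunc round toward minus infinity, plus infinity, and zero respectively. A small self-contained check:

```python
import numpy as np

x = np.array([2.1, 1.5, 2.5, 2.9, -2.1, -2.5, -2.9])

print(np.around(x))  # halves go to the nearest even integer: 1.5 -> 2, 2.5 -> 2
print(np.floor(x))   # toward -inf: -2.1 -> -3
print(np.ceil(x))    # toward +inf: 2.1 -> 3
print(np.trunc(x))   # toward zero: -2.1 -> -2
```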
Sums, products, differences Q8. Predict the results of these.
x = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8]])
outs = [np.sum(x), np.sum(x, axis=0), np.sum(x, axis=1, keepdims=True), "",
        np.prod(x), np.prod(x, axis=0), np.prod(x, axis=1, keepdims=True), "",
        np.cumsum(x), np.cumsum(x, axis=0), np.cumsum(x, axis=1), "",
        np.cumprod(x), np.cumprod(x, axis=0), np.cumprod(x, axis=1), "",
        np.min(x), np.min(x, axis=0), np.min(x, axis=1, keepdims=True), "",
        np.max(x), np.max(x, axis=0), np.max(x, axis=1, keepdims=True), "",
        np.mean(x), np.mean(x, axis=0), np.mean(x, axis=1, keepdims=True)]
for out in outs:
    if isinstance(out, str):  # comparing an array to "" would be elementwise
        print()
    else:
        print("->", out)
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
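The keepdims argument used throughout Q8 controls whether the reduced axis survives as a length-1 dimension, which is what lets the result broadcast back against the original array. A minimal sketch:

```python
import numpy as np

x = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8]])

row_sums = np.sum(x, axis=1)                    # shape (2,)
row_sums_kd = np.sum(x, axis=1, keepdims=True)  # shape (2, 1)

print(row_sums.shape, row_sums_kd.shape)
# The keepdims version broadcasts cleanly against x, e.g. to normalize rows:
print(x / row_sums_kd)
```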
Q9. Calculate the difference between neighboring elements, element-wise.
x = np.array([1, 2, 4, 7, 0])
print(np.diff(x))
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Q10. Calculate the difference between neighboring elements, element-wise, and prepend [0, 0] and append [100] to it.
x = np.array([1, 2, 4, 7, 0])
out1 = np.ediff1d(x, to_begin=[0, 0], to_end=[100])
out2 = np.insert(np.append(np.diff(x), 100), 0, [0, 0])
assert np.array_equiv(out1, out2)
print(out2)
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Q11. Return the cross product of x and y.
x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
print(np.cross(x, y))
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
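The cross product in Q11 can be checked by hand: for x = (1, 2, 3) and y = (4, 5, 6), the components are (2·6 − 3·5, 3·4 − 1·6, 1·5 − 2·4) = (−3, 6, −3), and the result is orthogonal to both inputs:

```python
import numpy as np

x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
z = np.cross(x, y)

print(z)                  # [-3  6 -3]
assert np.dot(z, x) == 0  # orthogonal to x
assert np.dot(z, y) == 0  # orthogonal to y
```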
Exponents and logarithms Q12. Compute $e^x$, element-wise.
x = np.array([1., 2., 3.], np.float32)
out = np.exp(x)
print(out)
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Q13. Calculate exp(x) - 1 for all elements in x.
x = np.array([1., 2., 3.], np.float32)
out1 = np.expm1(x)
out2 = np.exp(x) - 1.
assert np.allclose(out1, out2)
print(out1)
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Q14. Calculate $2^p$ for all p in x.
x = np.array([1., 2., 3.], np.float32)
out1 = np.exp2(x)
out2 = 2 ** x
assert np.allclose(out1, out2)
print(out1)
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Q15. Compute natural, base 10, and base 2 logarithms of x element-wise.
x = np.array([1, np.e, np.e**2])
print("natural log =", np.log(x))
print("common log =", np.log10(x))
print("base 2 log =", np.log2(x))
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Q16. Compute the natural logarithm of one plus each element in x in floating-point accuracy.
x = np.array([1e-99, 1e-100])
print(np.log1p(x))
# Compare it with np.log(1 + x)
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
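The point of np.log1p in Q16 becomes visible once x drops below machine epsilon: 1 + 1e-100 rounds to exactly 1.0 in float64, so np.log(1 + x) collapses to 0, while np.log1p keeps full accuracy by never forming 1 + x:

```python
import numpy as np

x = 1e-100
print(np.log(1 + x))  # 0.0 -- the 1e-100 is lost when 1 + x is formed
print(np.log1p(x))    # ~1e-100, computed without forming 1 + x
```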
Floating point routines Q17. Return element-wise True where signbit is set.
x = np.array([-3, -2, -1, 0, 1, 2, 3])
out1 = np.signbit(x)
out2 = x < 0
assert np.array_equiv(out1, out2)
print(out1)
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Q18. Change the sign of x to that of y, element-wise.
x = np.array([-1, 0, 1])
y = -1.1
print(np.copysign(x, y))
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Arithmetic operations Q19. Add x and y element-wise.
x = np.array([1, 2, 3])
y = np.array([-1, -2, -3])
out1 = np.add(x, y)
out2 = x + y
assert np.array_equal(out1, out2)
print(out1)
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Q20. Subtract y from x element-wise.
x = np.array([3, 4, 5])
y = np.array(3)
out1 = np.subtract(x, y)
out2 = x - y
assert np.array_equal(out1, out2)
print(out1)
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Q21. Multiply x by y element-wise.
x = np.array([3, 4, 5])
y = np.array([1, 0, -1])
out1 = np.multiply(x, y)
out2 = x * y
assert np.array_equal(out1, out2)
print(out1)
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Q22. Divide x by y element-wise in two different ways.
x = np.array([3., 4., 5.])
y = np.array([1., 2., 3.])
out1 = np.true_divide(x, y)
out2 = x / y
assert np.array_equal(out1, out2)
print(out1)

out3 = np.floor_divide(x, y)
out4 = x // y
assert np.array_equal(out3, out4)
print(out3)

# Note that `np.divide` behaves differently under Python 2 and Python 3.
# See https://docs.scipy.org/doc/numpy/reference/generated/numpy.divide.html#numpy.divide
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Q23. Compute numerical negative value of x, element-wise.
x = np.array([1, -1])
out1 = np.negative(x)
out2 = -x
assert np.array_equal(out1, out2)
print(out1)
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Q24. Compute the reciprocal of x, element-wise.
x = np.array([1., 2., .2])
out1 = np.reciprocal(x)
out2 = 1 / x
assert np.array_equal(out1, out2)
print(out1)
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Q25. Compute $x^y$, element-wise.
x = np.array([[1, 2], [3, 4]])
y = np.array([[1, 2], [1, 2]])
out = np.power(x, y)
print(out)
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Q26. Compute the remainder of x / y element-wise in two different ways.
x = np.array([-3, -2, -1, 1, 2, 3])
y = 2
out1 = np.mod(x, y)   # result takes the sign of the divisor
out2 = x % y
assert np.array_equal(out1, out2)
print(out1)
out3 = np.fmod(x, y)  # result takes the sign of the dividend
print(out3)
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Miscellaneous Q27. If an element of x is smaller than 3, replace it with 3. And if an element of x is bigger than 7, replace it with 7.
x = np.arange(10)
out1 = np.clip(x, 3, 7)
out2 = np.copy(x)
out2[out2 < 3] = 3
out2[out2 > 7] = 7
assert np.array_equiv(out1, out2)
print(out1)
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Q28. Compute the square of x, element-wise.
x = np.array([1, 2, -1])
out1 = np.square(x)
out2 = x * x
assert np.array_equal(out1, out2)
print(out1)
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Q29. Compute square root of x element-wise.
x = np.array([1., 4., 9.])
out = np.sqrt(x)
print(out)
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Q30. Compute the absolute value of x.
x = np.array([[1, -1], [3, -3]])
out = np.abs(x)
print(out)
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Q31. Compute an element-wise indication of the sign of x.
x = np.array([1, 3, 0, -1, -3])
out1 = np.sign(x)
out2 = np.copy(x)
out2[out2 > 0] = 1
out2[out2 < 0] = -1
assert np.array_equal(out1, out2)
print(out1)
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Use Dictionary Comprehension
# Display all dictionary entries where the key doesn't start with 'Chris'
{key: Officers[key] for key in Officers if not key.startswith('Chris')}
python/iterating_over_dictionary_keys.ipynb
tpin3694/tpin3694.github.io
mit
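With a hypothetical Officers dictionary (the original notebook's data is not shown, so these names and titles are made up for illustration), the comprehension above behaves like this:

```python
Officers = {
    "Chris Smith": "Captain",       # hypothetical example data
    "Christine Lee": "Lieutenant",
    "Dana Jones": "Sergeant",
}

# Keep only the entries whose key doesn't start with 'Chris'
filtered = {key: Officers[key] for key in Officers if not key.startswith('Chris')}
print(filtered)  # {'Dana Jones': 'Sergeant'}
```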
Channel library setup
cl = ChannelLibrary(":memory:")
pl = PipelineManager()
q1 = cl.new_qubit("q1")
aps2_1 = cl.new_APS2("BBNAPSa", address="192.168.2.4", trigger_interval=200e-6)
aps2_2 = cl.new_APS2("BBNAPSb", address="192.168.2.2")
dig_1 = cl.new_Alazar("Alazar_1", address="1", sampling_rate=500e6, record_length=1024)
h1 = cl.new_source("Holz_1", "HolzworthHS9000", "HS9004A-009-1", reference='10MHz', power=-30)
h2 = cl.new_source("Holz_2", "HolzworthHS9000", "HS9004A-009-2", reference='10MHz', power=-30)
cl.set_measure(q1, aps2_1, dig_1.ch("1"), trig_channel=aps2_1.ch("m2"), gate=False, generator=h1)
cl.set_control(q1, aps2_2, generator=h2)
cl.set_master(aps2_1, aps2_1.ch("m1"))
cl["q1"].measure_chan.frequency = 0e6
cl["q1"].measure_chan.autodyne_freq = 10e6
doc/examples/Example-Sweeps.ipynb
BBN-Q/Auspex
apache-2.0
Pipeline setup Note that we use the buffers keyword argument to automatically generate buffers instead of writers. This is convenient when you don't need the data written to file; it becomes available in the notebook immediately after running.
pl.create_default_pipeline(buffers=True)
pl["q1"].add(Display(label="blub"))
pl["q1"]["Demodulate"]["Integrate"].add(Display(label="int", plot_dims=1))
pl.show_pipeline()
doc/examples/Example-Sweeps.ipynb
BBN-Q/Auspex
apache-2.0
Adding experiment sweeps Once a QubitExperiment has been created, we can programmatically add sweeps as shown here.
lengths = np.linspace(20e-9, 2020e-9, 31)
exp = QubitExperiment(RabiWidth(q1, lengths), averages=50)
exp.set_fake_data(dig_1, np.exp(-lengths/1e-6)*np.cos(1e7*lengths))
# exp.add_qubit_sweep(q1, "measure", "frequency", np.linspace(6.512e9, 6.522e9, 11))
exp.run_sweeps()
doc/examples/Example-Sweeps.ipynb
BBN-Q/Auspex
apache-2.0
We fetch the data and data descriptor directly from the buffer. The data is automatically reshaped to match the experiment axes, and the descriptor enumerates all of the values of these axes for convenient plotting and further analysis.
data, descriptor = exp.outputs_by_qubit["q1"][0].get_data()
descriptor.axes
data.shape
doc/examples/Example-Sweeps.ipynb
BBN-Q/Auspex
apache-2.0
We even include a convenience extent function conforming to the infinitely forgettable matplotlib format.
import matplotlib.pyplot as plt
%matplotlib inline

plt.imshow(np.real(data), aspect='auto', extent=descriptor.extent())
plt.xlabel("Delay (µs)")
plt.ylabel("Frequency (GHz)");
doc/examples/Example-Sweeps.ipynb
BBN-Q/Auspex
apache-2.0
Adding Multiple Sweeps An arbitrary number of sweeps can be added. For example:
exp = QubitExperiment(RabiWidth(q1, lengths), averages=50)
exp.add_qubit_sweep(q1, "measure", "frequency", np.linspace(6.512e9, 6.522e9, 5))
exp.add_qubit_sweep(q1, "measure", "amplitude", np.linspace(0.0, 1.0, 21))
doc/examples/Example-Sweeps.ipynb
BBN-Q/Auspex
apache-2.0
If we inspect the internal representation of the "output connector" into which the instrument driver will dump its data, we can see all of the axes it contains.
exp.output_connectors["q1"].descriptor.axes
doc/examples/Example-Sweeps.ipynb
BBN-Q/Auspex
apache-2.0
The Dataset The data from UCLA (found at http://www.ats.ucla.edu/stat/data/binary.csv and originally used in this example: http://www.ats.ucla.edu/stat/r/dae/logit.htm) contains 4 columns:
* admit - a binary variable describing whether the student was admitted into grad school
* gre - the student's Graduate Record Examination (GRE) score
* gpa - the student's grade point average (GPA)
* rank - the prestige of the student's undergraduate school, ranked from 1 to 4

The columns will be renamed to "Admit," "GRE," "GPA," and "Prestige" as we import the data to make them more human-friendly. Note that "rank" is renamed to "Prestige" to avoid confusion with the rank method of pandas.
# As we import the data, we rename the "rank" column to "Prestige" to avoid confusion with the rank method of pandas
df = pd.read_csv("binary.csv", header=0, names=["Admit", "GRE", "GPA", "Prestige"])
df.head()
Logistic Regression.ipynb
rayjustinhuang/DataAnalysisandMachineLearning
mit
Initial Exploratory Data Analysis We take a look at basic summary statistics, a cross-tabulation, and a histogram to get a general idea of the contents of the data.
# Basic summary of the data
df.describe()

# Generate a cross-tabulation (frequency table by default) of the factors; here we use prestige
pd.crosstab(df['Admit'], df['Prestige'], rownames=['Admission'])
Logistic Regression.ipynb
rayjustinhuang/DataAnalysisandMachineLearning
mit
Based on the cross-tabulation above, it appears that prestige is a significant factor in admission: applicants from rank-1 schools were admitted more often than not, while those from rank-4 schools were largely rejected.
# Generate histograms
sns.set_color_codes('muted')
df.hist(color='g')
plt.show()
Logistic Regression.ipynb
rayjustinhuang/DataAnalysisandMachineLearning
mit
Preprocessing the Data While the data is already very analysis-friendly, we still have to convert the categorical variable (prestige) into binary dummy variables to be able to create a logistic regression model.
# Dummy-code the rank variable
dummy_ranks = pd.get_dummies(df['Prestige'], prefix="Prestige")
dummy_ranks.head()
Logistic Regression.ipynb
rayjustinhuang/DataAnalysisandMachineLearning
mit
Given that prestige is a categorical value, we perform dummy coding to convert the values into binary variables. We keep only three of the four dummy columns: Prestige_4 is left out to serve as the baseline category and avoid perfect collinearity.
columns1 = ['Admit', 'GRE', 'GPA']
data1 = df[columns1]
columns2 = ['Prestige_1', 'Prestige_2', 'Prestige_3']  # Prestige_4 is the baseline
data2 = dummy_ranks[columns2]
data = pd.merge(data1, data2, how="outer", left_index=True, right_index=True)
data
Logistic Regression.ipynb
rayjustinhuang/DataAnalysisandMachineLearning
mit
Logistic Regression We will use logistic regression to predict the probability that a particular student gets into grad school.
# Separate independent and dependent variables
# (DataFrame.ix was removed from pandas; use iloc for positional indexing)
X = data.iloc[:, 1:]
y = data['Admit']

# Create a logistic regression model
initial = LogisticRegression(C=1000, random_state=0)
initial.fit(X, y)

# Check model accuracy
print("Accuracy Score:", initial.score(X, y))

# What percentage of students actually got into grad school
print("Actual probability of admission:", y.mean())
Logistic Regression.ipynb
rayjustinhuang/DataAnalysisandMachineLearning
mit
If you always guessed "no," you would be right around 68.25% of the time, so our model is only about 2.5 percentage points more accurate than that naive baseline. It is, however, significantly better than random guessing: about 20.75 percentage points better than a 50/50 guess.
# View coefficients
column_names = list(X.columns)
coefficients = np.transpose(initial.coef_)
intercept = initial.intercept_
Coeffs = pd.DataFrame(coefficients, column_names, columns=['Coefficients'])
# DataFrame.append was removed in pandas 2.0; use pd.concat instead
pd.concat([Coeffs, pd.DataFrame(intercept, ['Intercept'], columns=['Coefficients'])])
Logistic Regression.ipynb
rayjustinhuang/DataAnalysisandMachineLearning
mit
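The baseline comparison above is simple arithmetic. A sketch using the figures quoted in the text (these are the reported values, not recomputed from the data; the exact accuracy depends on the fitted model):

```python
p_admit = 0.3175        # y.mean() for this dataset, as discussed above
baseline = 1 - p_admit  # accuracy of always guessing "no"
model_acc = 0.7075      # accuracy ~2.5 points above the baseline, per the text

print(f"baseline accuracy:   {baseline:.4f}")              # 0.6825
print(f"gain over baseline:  {model_acc - baseline:.4f}")  # 0.0250
print(f"gain over 50/50:     {model_acc - 0.5:.4f}")       # 0.2075
```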
The coefficients above show how each predictor contributes to the model. Every additional point in a candidate's GRE score increases the log odds of admission by 0.002; every unit increase in GPA increases the log odds by 0.803. The prestige coefficients indicate that coming from a rank-1 school increases the log odds of admission by 1.509 relative to the rank-4 baseline. Differences between ranks can be found by subtracting one prestige coefficient from another, e.g., coming from a rank-1 school increases the log odds of admission by around 0.666 (1.508653 - 0.842366) versus a student from a rank-2 school. It is important to note that these log-odds interpretations are contextual to our model. Modeling Using a Training and a Test Set In the real world, we will likely need a model that can take any set of predictor values and output a probability of admission, which means we won't have the privilege of fitting a logit model on an entirely known dataset. We will now create a logistic regression model using a 70/30 train/test split, so that we can construct the model on one portion of the data and test its accuracy on data that was not used to create it.
# Split data into training and test sets, using 30% of the data as the test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit the logistic regression with lambda = 10^-3 (C = 1/lambda = 1000)
lr = LogisticRegression(C=1000, random_state=0)
lr.fit(X_train, y_train)

# View predictions
predicted = lr.predict(X_test)
print(predicted)

# View class probabilities
probabilities = lr.predict_proba(X_test)
print(probabilities)
Logistic Regression.ipynb
rayjustinhuang/DataAnalysisandMachineLearning
mit
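Logistic regression coefficients are changes in log odds, so exponentiating them gives odds ratios, which are often easier to interpret. A sketch using the coefficients quoted in the discussion above (assumed values; a refit model may differ slightly):

```python
import numpy as np

# Coefficients taken from the discussion above (log-odds scale)
coefs = {"GRE": 0.002, "GPA": 0.803, "Prestige_1": 1.509}

for name, beta in coefs.items():
    print(f"{name}: odds ratio = {np.exp(beta):.3f}")
# e.g. a one-unit GPA increase multiplies the odds of admission by ~2.23
```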
Model Evaluation We now evaluate our logistic regression using some common metrics for assessing model quality.
# Check accuracy
print("Accuracy Score:", metrics.accuracy_score(y_test, predicted))
Logistic Regression.ipynb
rayjustinhuang/DataAnalysisandMachineLearning
mit
The accuracy score here is slightly (around 0.083%) better than that of the logistic regression fit without the training/test split. Even using only a random 70% of the data for fitting, we obtained a model whose test accuracy actually exceeded that of the model created using all of the data. Performance Visualization We use a confusion matrix, classification report, and ROC curve to get a better view of the performance of our model.
# Print confusion matrix and classification report
print("Confusion Matrix:\n", metrics.confusion_matrix(y_test, predicted))
print("\nClassification Report:\n", metrics.classification_report(y_test, predicted))
Logistic Regression.ipynb
rayjustinhuang/DataAnalysisandMachineLearning
mit
The confusion matrix shows that out of 82 non-admits, our model got 77 right, while 5 were false positives. This very good hit rate for 0's is reflected in the high recall of 0.94 for 0's in the classification report. The model is much weaker at predicting admits, however, with only 8 out of 38 admissions correctly predicted, which is reflected in the low recall of 0.21 for 1's. Looking at precision, 72% of predicted 0's are actual 0's, and 62% of predicted 1's are actual 1's. In total, 85 out of 120 results were correctly predicted by the model. Plotting the ROC Curve To better visualize the effectiveness of our model and support the analysis above, we use an ROC curve. While scikit-learn already selects a certain balance (0.5 by default for binary classifiers; this can be adjusted via the class_weight argument in LogisticRegression) of performance metrics (precision, recall, etc.), it is still good to see the performance tradeoffs inherent in our model, as well as to gain insight for potential model tuning in the future.
fpr, tpr, thresholds = metrics.roc_curve(y_test, probabilities[:, 1])
results = pd.DataFrame({'False Positive Rate': fpr, 'True Positive Rate': tpr})

plt.plot(fpr, tpr, color='g', label="Model")
plt.plot([0, 1], [0, 1], color='gray', linestyle='--', label="Baseline (Random Guessing)")
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
plt.legend()
plt.show()

print("Area Under the Curve:", metrics.roc_auc_score(y_test, probabilities[:, 1]))
Logistic Regression.ipynb
rayjustinhuang/DataAnalysisandMachineLearning
mit
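The precision and recall figures above follow directly from the confusion matrix. A quick check using the counts quoted in the text:

```python
# Confusion matrix quoted above: rows = actual class, columns = predicted class
#            pred 0  pred 1
# actual 0:     77       5
# actual 1:     30       8
tn, fp, fn, tp = 77, 5, 30, 8

recall_0 = tn / (tn + fp)     # 77/82  ~ 0.94
recall_1 = tp / (fn + tp)     # 8/38   ~ 0.21
precision_0 = tn / (tn + fn)  # 77/107 ~ 0.72
precision_1 = tp / (fp + tp)  # 8/13   ~ 0.62
accuracy = (tn + tp) / (tn + fp + fn + tp)  # 85/120

print(recall_0, recall_1, precision_0, precision_1, accuracy)
```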
As the plot above shows, while our logistic regression model is not especially strong (the area under the curve is 0.6784), it still does better than random guessing, in accordance with the earlier results. Also note that, in alignment with the default 0.5 threshold used by scikit-learn, our true positive rate (recall) of 0.71 matches up with the true positive rate in the graph when the false positive rate is 0.5. Checking Model Prediction Performance We assess the quality of our model above, specifically how well it is likely to hold up when exposed to unseen data, by using cross-validation.
fivefold = cross_val_score(lr, X, y, scoring='accuracy', cv=5)
print("Score per fold:", fivefold)
print("Mean score:", fivefold.mean())
print("Standard deviation:", fivefold.std())
Logistic Regression.ipynb
rayjustinhuang/DataAnalysisandMachineLearning
mit
Using five-fold cross-validation on our current model results in a similar accuracy score as the one previously derived, which shows that the model we arrived at earlier is not biased toward the training set and will likely generalize well to new data. Improving the Model We will attempt to improve our model by using a variety of techniques, including feature scaling, class weighting, and tuning our hyperparameter C. Our current model will be treated as a baseline. Feature Scaling One aspect of our data is that GRE and GPA scores vary significantly in magnitude (GRE varies from 220 to 800 while GPA varies from 2.26 to 4.0, though both appear to be shaped like normal distributions). Scaling these features may improve the accuracy of our machine learning model.
from sklearn import preprocessing

# Isolate columns to scale
toscale = X[['GRE', 'GPA']].astype(float)
scaledX = preprocessing.scale(toscale)
scaleddata = pd.DataFrame(scaledX, columns=['GRE', 'GPA'])

# Join scaled data with categorical rank columns
scaledX = scaleddata.join(data2)
scaledX.head()

improve1 = cross_val_score(lr, scaledX, y, scoring='accuracy', cv=5)
print("Score per fold:", improve1)
print("Mean score:", improve1.mean())
print("Standard deviation:", improve1.std())
Logistic Regression.ipynb
rayjustinhuang/DataAnalysisandMachineLearning
mit
The accuracy of our model does not change, but the standard deviation improves a little, so the improved model should provide slightly better, or at least more consistent, performance. Correcting for Class Imbalance Based on our confusion matrix, the model makes most of its mistakes on the minority class of actual admits, while overall accuracy is dominated by the majority class of 0's. By modifying the weighting of the classes ever so slightly, from the default weight of 1 each to slightly more weight on 0's, mistakes on the majority class are penalized a bit more; since the data most certainly shows that a 0 is more likely than a 1, this nudge can improve overall accuracy.
lrweighted = LogisticRegression(C=1000, random_state=0, class_weight={0: 0.505, 1: 0.495})
improve2 = cross_val_score(lrweighted, scaledX, y, scoring='accuracy', cv=5)
print("Score per fold:", improve2)
print("Mean score:", improve2.mean())
print("Standard deviation:", improve2.std())
Logistic Regression.ipynb
rayjustinhuang/DataAnalysisandMachineLearning
mit
Our mean score shows a slight improvement. Our standard deviation is slightly higher than that of the previous, feature-scaled model, but it is still lower than our original model. Hyperparameter Tuning We will check if results change based on the $\lambda$ parameter used in regularization. Note that in scikit-learn's logistic regression, $C = \frac{1}{\lambda}$.
tens = [10**i for i in range(-5, 6)]
for i in tens:
    if i == 1000:
        continue
    testlr = LogisticRegression(C=i, random_state=0, class_weight={0: 0.505, 1: 0.495})
    testcrossval = cross_val_score(testlr, scaledX, y, scoring='accuracy', cv=5)
    print('For C = {}:'.format(i))
    print('  Score per fold:', testcrossval)
    print('  Mean score:', testcrossval.mean())
    print('  Standard deviation:', testcrossval.std())
Logistic Regression.ipynb
rayjustinhuang/DataAnalysisandMachineLearning
mit
Given that $C = \frac{1}{\lambda}$, it makes sense that, after a certain value of $C$ (in this case, 100), the model no longer improves because the penalty to the logistic regression objective function is minimal. As such, we will not need to change our current $C$ value. Testing the Revised Model We now check our new model using a training and test set.
# Create new train and test sets and fit our revised model to them
X_train2, X_test2, y_train2, y_test2 = train_test_split(scaledX, y, test_size=0.3, random_state=0)
newlr = LogisticRegression(C=1000, random_state=0, class_weight={0: 0.505, 1: 0.495})
newlr.fit(X_train2, y_train2)

# Check metrics on the new predictions
newpredictions = newlr.predict(X_test2)
newprobabilities = newlr.predict_proba(X_test2)
print("Accuracy Score:", newlr.score(X_test2, y_test2), "\n")
print("Confusion Matrix:\n", metrics.confusion_matrix(y_test2, newpredictions))
print("\nClassification Report:\n", metrics.classification_report(y_test2, newpredictions))
Logistic Regression.ipynb
rayjustinhuang/DataAnalysisandMachineLearning
mit
In alignment with our expectations from the model tuning, all metrics have improved over the original model: the predictions for non-admits are unchanged, and two more admits are now correctly classified, a clear improvement.
# Plot a new ROC curve for the revised model
fpr2, tpr2, thresholds2 = metrics.roc_curve(y_test2, newprobabilities[:, 1])
results2 = pd.DataFrame({'False Positive Rate': fpr2, 'True Positive Rate': tpr2})

plt.plot(fpr, tpr, color='darkgray', label="Original Model")
plt.plot(fpr2, tpr2, color='g', label="Revised Model")
plt.plot([0, 1], [0, 1], color='gray', linestyle='--', label="Baseline (Random Guessing)")
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve for Revised Model')
plt.legend()
plt.show()

print("Area Under the Curve:", metrics.roc_auc_score(y_test2, newprobabilities[:, 1]))
Logistic Regression.ipynb
rayjustinhuang/DataAnalysisandMachineLearning
mit
2D transformations
def plot2d(x, fmt="ok"):
    plt.axis('equal')
    plt.axis([-5, 5, -5, 5])
    plt.xticks(np.arange(-5, 5, 1))
    plt.yticks(np.arange(-5, 5, 1))
    plt.axhline(y=0, color='k')
    plt.axvline(x=0, color='k')
    plt.plot(x[:, 0], x[:, 1], fmt)
    plt.grid()
nb_sci_maths/maths_euclidean_geometry_en.ipynb
jdhp-docs/python_notebooks
mit
Rotation around the origin

$$R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$

<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/6/64/Mog_rotacion_vector.jpg/263px-Mog_rotacion_vector.jpg"></img>

$$\begin{pmatrix} A'_{x} \\ A'_{y} \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} A_{x} \\ A_{y} \end{pmatrix}$$

See: https://fr.wikipedia.org/wiki/Matrice_de_rotation
# Define initial points
A = np.array([[0., 0.],
              [1., 0.],
              [1., 1.],
              [0., 0.]])

# Define the rotation angle
theta = np.radians(30)

# Define the rotation matrix
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Rotate points
Aprime = np.dot(R, A.T).T

# Print and plot
print(A)
print(Aprime)
plot2d(A, fmt="-ok")
plot2d(Aprime, fmt="-or")
nb_sci_maths/maths_euclidean_geometry_en.ipynb
jdhp-docs/python_notebooks
mit
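A rotation matrix is orthogonal with determinant 1, and rotating (1, 0) by 90° should give (0, 1). A quick sanity check of the matrix defined above:

```python
import numpy as np

theta = np.radians(90)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(R @ np.array([1., 0.]))             # ~[0, 1]
assert np.allclose(R @ R.T, np.eye(2))    # orthogonal: R R^T = I
assert np.isclose(np.linalg.det(R), 1.0)  # proper rotation, no reflection
```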
3D transformations
def plot3d(x, axis=None, fmt="ok"):
    if axis is None:
        fig = plt.figure()
        axis = axes3d.Axes3D(fig)
    axis.scatter(x[:, 0], x[:, 1], x[:, 2], fmt)
    axis.plot(x[:, 0], x[:, 1], x[:, 2], fmt)
nb_sci_maths/maths_euclidean_geometry_en.ipynb
jdhp-docs/python_notebooks
mit
Rotation around the x axis

$$R_{\mathbf{x}}(\theta) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix}$$

Rotation around the y axis

$$R_{\mathbf{y}}(\theta) = \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix}$$

Rotation around the z axis

$$R_{\mathbf{z}}(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

See: https://fr.wikipedia.org/wiki/Matrice_de_rotation

Example
# Define initial points
A = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [0., 0., 0.],
              [0., 1., 0.],
              [0., 0., 0.],
              [0., 0., 1.]])

# Define the rotation angle
theta = np.radians(90)

# Define the rotation matrices
Rx = np.array([[1., 0.,            0.           ],
               [0., np.cos(theta), -np.sin(theta)],
               [0., np.sin(theta),  np.cos(theta)]])

Ry = np.array([[ np.cos(theta), 0., np.sin(theta)],
               [ 0.,            1., 0.           ],
               [-np.sin(theta), 0., np.cos(theta)]])

Rz = np.array([[np.cos(theta), -np.sin(theta), 0.],
               [np.sin(theta),  np.cos(theta), 0.],
               [0.,             0.,            1.]])

# Rotate points
Ax = np.dot(Rx, A.T).T
Ay = np.dot(Ry, A.T).T
Az = np.dot(Rz, A.T).T

# Plot
fig = plt.figure()
ax = axes3d.Axes3D(fig)
plot3d(A, axis=ax, fmt="-ok")
plot3d(Ax, axis=ax, fmt=":or")
plot3d(Ay, axis=ax, fmt=":og")
plot3d(Az, axis=ax, fmt=":ob")
ax.text(1, 0, 0, "x", color="r")
ax.text(0, 1, 0, "y", color="g")
ax.text(0, 0, 1, "z", color="b")
nb_sci_maths/maths_euclidean_geometry_en.ipynb
jdhp-docs/python_notebooks
mit
Rotation around a given axis Rotation around the axis defined by the unit vector $\overrightarrow{u} = \begin{pmatrix} u_{x} \\ u_{y} \\ u_{z} \end{pmatrix}$ (i.e. with $u_{x}^{2}+u_{y}^{2}+u_{z}^{2}=1$) by an angle $\theta$:

$$R = \begin{pmatrix} u_{x}^{2}(1-c)+c & u_{x}u_{y}(1-c)-u_{z}s & u_{x}u_{z}(1-c)+u_{y}s \\ u_{x}u_{y}(1-c)+u_{z}s & u_{y}^{2}(1-c)+c & u_{y}u_{z}(1-c)-u_{x}s \\ u_{x}u_{z}(1-c)-u_{y}s & u_{y}u_{z}(1-c)+u_{x}s & u_{z}^{2}(1-c)+c \end{pmatrix}$$

where $c = \cos(\theta)$ and $s = \sin(\theta)$. See https://fr.wikipedia.org/wiki/Matrice_de_rotation
# Define initial points
A = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [0., 0., 0.],
              [0., 1., 0.],
              [0., 0., 0.],
              [0., 0., 1.]])

# Define the rotation angle and the rotation axis
theta = np.radians(10)
u = np.array([1., 1., 0.])
u = u / np.linalg.norm(u)  # the formula requires a unit vector
ux, uy, uz = u
c = np.cos(theta)
s = np.sin(theta)

# Define the rotation matrix (note uy**2, not ux**2, in the central term)
R = np.array([[ux**2 * (1-c) + c,    ux*uy * (1-c) - uz*s, ux*uz * (1-c) + uy*s],
              [ux*uy * (1-c) + uz*s, uy**2 * (1-c) + c,    uy*uz * (1-c) - ux*s],
              [ux*uz * (1-c) - uy*s, uy*uz * (1-c) + ux*s, uz**2 * (1-c) + c   ]])

# Rotate points
Ar = np.dot(R, A.T).T

# Plot
fig = plt.figure()
ax = axes3d.Axes3D(fig)
plot3d(A, axis=ax, fmt="-ok")
plot3d(np.array([np.zeros(3), u]), axis=ax, fmt="--ok")
plot3d(Ar, axis=ax, fmt=":or")
ax.text(1, 0, 0, "x", color="k")
ax.text(0, 1, 0, "y", color="k")
ax.text(0, 0, 1, "z", color="k")
nb_sci_maths/maths_euclidean_geometry_en.ipynb
jdhp-docs/python_notebooks
mit
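As a quick check on the construction above, a matrix built from this formula should be orthogonal, have determinant 1, and leave its axis fixed. A minimal NumPy sketch (the helper name `rotation_matrix` is mine, and the axis is normalized first, since the formula assumes a unit vector):

```python
import numpy as np

def rotation_matrix(u, theta):
    """Rodrigues rotation matrix for axis u (normalized here) and angle theta."""
    u = np.asarray(u, dtype=float)
    u = u / np.linalg.norm(u)  # the formula assumes a unit vector
    ux, uy, uz = u
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ux**2*(1-c) + c,    ux*uy*(1-c) - uz*s, ux*uz*(1-c) + uy*s],
                     [ux*uy*(1-c) + uz*s, uy**2*(1-c) + c,    uy*uz*(1-c) - ux*s],
                     [ux*uz*(1-c) - uy*s, uy*uz*(1-c) + ux*s, uz**2*(1-c) + c   ]])

R = rotation_matrix([1., 1., 0.], np.radians(10))
u = np.array([1., 1., 0.]) / np.sqrt(2.)

assert np.allclose(R @ R.T, np.eye(3))   # orthogonal
assert np.isclose(np.linalg.det(R), 1.)  # proper rotation (no reflection)
assert np.allclose(R @ u, u)             # the rotation axis is left unchanged
```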
Projections Project 2D points on a line Line equation
@interact(a=(-5., 5., 0.1), b=(-5., 5., 0.1), c=(-5., 5., 0.1)) def plot(a, b, c): plt.axis('equal') plt.axis([-5, 5, -5, 5]) plt.xticks(np.arange(-5,5,1)) plt.yticks(np.arange(-5,5,1)) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') x = np.array([-10., 10.]) f = lambda x: a/(-b) * x + c/(-b) try: plt.plot(x, f(x)) except ZeroDivisionError: print("b should not be equal to 0") plt.grid()
nb_sci_maths/maths_euclidean_geometry_en.ipynb
jdhp-docs/python_notebooks
mit
Distance from a point to a line Line defined by an equation In the case of a line in the plane given by the equation $ax + by + c = 0$, where $a$, $b$ and $c$ are real constants with $a$ and $b$ not both zero, the distance from the line to a point $(x_0,y_0)$ is $$\operatorname{distance}(ax+by+c=0, (x_0, y_0)) = \frac{|ax_0+by_0+c|}{\sqrt{a^2+b^2}}.$$ The point on this line which is closest to $(x_0,y_0)$ has coordinates: $$x = \frac{b(bx_0 - ay_0)-ac}{a^2 + b^2}$$ and $$y = \frac{a(-bx_0 + ay_0) - bc}{a^2+b^2}.$$ For more information, see https://en.wikipedia.org/wiki/Distance_from_a_point_to_a_line.
# Setup the plot def plot(a, b, c, p, p2): plt.axis('equal') plt.axis([-5, 5, -5, 5]) plt.xticks(np.arange(-5,5,1)) plt.yticks(np.arange(-5,5,1)) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') x = np.array([-10., 10.]) f = lambda x: a/(-b) * x + c/(-b) plt.plot(x, f(x)) plt.scatter(*p) # Plot the projection point plt.scatter(*p2) plt.plot((p2[0], p[0]), (p2[1], p[1])) #plt.arrow(*p2, *p) # TODO: doesn't work... plt.grid() # Define the distance and projection functions def distance(a, b, c, p): d1 = (a*p[0] + b*p[1] + c) d2 = math.sqrt(math.pow(a, 2.) + math.pow(b, 2.)) d = abs(d1)/d2 return d def projection(a, b, c, p): p2 = ((b*(b*p[0] - a*p[1]) - a*c)/(math.pow(a,2.)+math.pow(b,2.)), (a*(-b*p[0] + a*p[1]) - b*c)/(math.pow(a,2.)+math.pow(b,2.))) return p2 # Define the line and the point a = 2. b = 1. c = -2. p = (-4., 2.) # Compute the distance and the projection point on the line d = distance(a, b, c, p) p2 = projection(a, b, c, p) print("Distance:", d) print("Projection point:", p2) # Plot the line and the point plot(a, b, c, p, p2)
nb_sci_maths/maths_euclidean_geometry_en.ipynb
jdhp-docs/python_notebooks
mit
Line defined by two points If the line passes through two points $P_1 = (x_1, y_1)$ and $P_2 = (x_2, y_2)$ then the distance of $(x_0, y_0)$ from the line is: $$\operatorname{distance}(P_1, P_2, (x_0, y_0)) = \frac{|(y_2-y_1)x_0-(x_2-x_1)y_0+x_2 y_1-y_2 x_1|}{\sqrt{(y_2-y_1)^2+(x_2-x_1)^2}}.$$ The denominator of this expression is the distance between $P_1$ and $P_2$. The numerator is twice the area of the triangle with its vertices at the three points, $(x_0, y_0)$, $P_1$ and $P_2$. The expression is equivalent to $h=\frac{2A}{b}$, which can be obtained by rearranging the standard formula for the area of a triangle: $A = \frac{1}{2} bh$, where $b$ is the length of a side, and $h$ is the perpendicular height from the opposite vertex.
# TODO...
nb_sci_maths/maths_euclidean_geometry_en.ipynb
jdhp-docs/python_notebooks
mit
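The formula above translates directly to code; a minimal sketch (the function name and test points are mine, not from the notebook):

```python
import math

def distance_two_points(p1, p2, p0):
    """Distance from p0 = (x0, y0) to the line through p1 and p2."""
    (x1, y1), (x2, y2), (x0, y0) = p1, p2, p0
    numerator = abs((y2 - y1)*x0 - (x2 - x1)*y0 + x2*y1 - y2*x1)
    denominator = math.hypot(y2 - y1, x2 - x1)  # distance between P1 and P2
    return numerator / denominator

# The horizontal line through (0, 1) and (1, 1) is at distance 1 from the origin
print(distance_two_points((0., 1.), (1., 1.), (0., 0.)))  # 1.0
```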
Line defined by a point and an angle
def angle_point_to_equation(angle_degree, p): angle_radian = math.radians(angle_degree) a = math.tan(angle_radian) b = -1 c = -math.tan(angle_radian) * p[0] + p[1] return a, b, c angle_degree = 30 p0 = (3, 2) a, b, c = angle_point_to_equation(angle_degree, p0) p = (-4., 2.) # Compute the distance and the projection point on the line d = distance(a, b, c, p) p2 = projection(a, b, c, p) print("Distance:", d) print("Projection point:", p2) # Plot the line and the point plot(a, b, c, p, p2) plt.scatter(*p0)
nb_sci_maths/maths_euclidean_geometry_en.ipynb
jdhp-docs/python_notebooks
mit
Project 3D points on a plane without perspective TODO... Project 3D points on a plane with perspective The following variables are defined to describe this transformation: $\mathbf{a}_{x,y,z}$: the 3D position of a point A that is to be projected. $\mathbf{c}_{x,y,z}$: the 3D position of a point C representing the camera. $\mathbf{\theta}_{x,y,z}$: the orientation of the camera (represented by Tait–Bryan angles). $\mathbf{e}_{x,y,z}$: the viewer's position relative to the display surface which goes through point C representing the camera (i.e. $\mathbf{e}$ doesn't change with the position of the camera C and typically $\mathbf{e} = (0, 0, -1)^T$ for a view angle of $2\cos^{-1}(1/\sqrt{2})$ radians when the projection is made on the x/y plane). Which results in: $\mathbf{b}_{x,y}$: the 2D projection of $\mathbf{a}$ $$ \begin{bmatrix} \mathbf{d}_{x} \ \mathbf{d}_{y} \ \mathbf{d}_{z} \ \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \ 0 & \cos(\mathbf{\theta}_{x}) & \sin(\mathbf{\theta}_{x}) \ 0 & -\sin(\mathbf{\theta}_{x}) & \cos(\mathbf{\theta}_{x}) \ \end{bmatrix} ~ \begin{bmatrix} \cos(\mathbf{\theta}_{y}) & 0 & -\sin(\mathbf{\theta}_{y}) \ 0 & 1 & 0 \ \sin(\mathbf{\theta}_{y}) & 0 & \cos(\mathbf{\theta}_{y}) \ \end{bmatrix} ~ \begin{bmatrix} \cos(\mathbf{\theta}_{z}) & \sin(\mathbf{\theta}_{z}) & 0 \ -\sin(\mathbf{\theta}_{z}) & \cos(\mathbf{\theta}_{z}) & 0 \ 0 & 0 & 1 \ \end{bmatrix} ~ \left( \begin{bmatrix} \mathbf{a}_{x} \ \mathbf{a}_{y} \ \mathbf{a}_{z} \ \end{bmatrix} - \begin{bmatrix} \mathbf{c}_{x} \ \mathbf{c}_{y} \ \mathbf{c}_{z} \ \end{bmatrix} \right) $$ The transformed point $\boldsymbol d$ of $\boldsymbol a$ can then be projected onto the 2D plane using the formula (here, x/y is used as the projection plane; the literature may also use x/z): $$ \begin{array}{lcl} \mathbf{b}_x & = & \frac{\mathbf{e}_z}{\mathbf{d}_z} \mathbf{d}_x - \mathbf{e}_x \ \mathbf{b}_y & = & \frac{\mathbf{e}_z}{\mathbf{d}_z} \mathbf{d}_y - \mathbf{e}_y \ \end{array} $$ Or, in matrix form using <a href="https://en.wikipedia.org/wiki/Homogeneous_coordinates">homogeneous coordinates</a>, the system $$ \begin{bmatrix} \mathbf{f}_x \ \mathbf{f}_y \ \mathbf{f}_z \ \mathbf{f}_w \end{bmatrix} = \begin{bmatrix} 1 & 0 & -\frac{\mathbf{e}_x}{\mathbf{e}_z} & 0 \ 0 & 1 & -\frac{\mathbf{e}_y}{\mathbf{e}_z} & 0 \ 0 & 0 & 1 & 0 \ 0 & 0 & \frac{1}{\mathbf{e}_z} & 0 \end{bmatrix} \begin{bmatrix} \mathbf{d}_x \ \mathbf{d}_y \ \mathbf{d}_z \ 1 \end{bmatrix} $$ in conjunction with an argument using similar triangles, leads to division by the homogeneous coordinate, giving $$ \begin{array}{lcl} \mathbf{b}_x & = & \mathbf{f}_x / \mathbf{f}_w \ \mathbf{b}_y & = & \mathbf{f}_y / \mathbf{f}_w \ \end{array} $$ The distance of the viewer from the display surface, $\mathbf{e}_{z}$, directly relates to the field of view, where $\alpha = 2 \cdot \tan^{-1}(\frac{1}{\mathbf{e}_{z}})$ is the viewed angle. Note: here we assume that you map the points $(-1,-1)$ and $(1,1)$ to the corners of your viewing surface. See: https://en.wikipedia.org/wiki/3D_projection#Perspective_projection
# Define initial points to project a = np.array([0., 1., 2.]) # Define camera's position c = np.array([0., 0., 0.]) # Define viewer's position e = np.array([0., 0., -1.]) # Define the orientation of the camera theta = np.array([np.radians(0), np.radians(0), np.radians(0)]) theta_x, theta_y, theta_z = theta[0], theta[1], theta[2] # Define the rotation matrices Rx = np.array([[1., 0., 0.], [0., np.cos(theta_x), np.sin(theta_x)], [0., -np.sin(theta_x), np.cos(theta_x)]]) Ry = np.array([[np.cos(theta_y), 0., -np.sin(theta_y)], [0., 1., 0. ], [np.sin(theta_y), 0., np.cos(theta_y)]]) Rz = np.array([[np.cos(theta_z), np.sin(theta_z), 0.], [-np.sin(theta_z), np.cos(theta_z), 0.], [0., 0., 1.]]) d = np.dot(Rx, Ry) d = np.dot(d, Rz) d = np.dot(d, a-c) ## TODO: which version is correct ? The one above or the one below ? #d = a - c #d = np.dot(Rz, d) #d = np.dot(Ry, d) #d = np.dot(Rx, d) print("d:", d) b = np.array([e[2]/d[2] * d[0] - e[0], e[2]/d[2] * d[1] - e[1]]) print("b:", b) # Alternative to compute b Rf = np.array([[1., 0., -e[0]/e[2], 0.], [0., 1., -e[1]/e[2], 0.], [0., 0., 1., 0.], [0., 0., 1./e[2], 0.]]) f = np.dot(Rf, np.concatenate([d, np.ones(1)])) b = np.array([f[0]/f[3], f[1]/f[3]]) print("b:", b) plot2d(np.array([b, b]), "ok")
nb_sci_maths/maths_euclidean_geometry_en.ipynb
jdhp-docs/python_notebooks
mit
Multiple points version
@interact(theta_x=(-90., 90., 1.), theta_y=(-90., 90., 1.), theta_z=(-90., 90., 1.)) def projection(theta_x, theta_y, theta_z): # Define initial points to project A = np.array([[-1., 0., 1.], [ 1., 0., 1.], [-1., 0., 2.], [ 1., 0., 2.], [-1., 0., 5.], [ 1., 0., 5.], [-1., 0., 15.], [ 1., 0., 15.]]) # Define camera's position c = np.array([0., -2., 0.]) C = np.tile(c, (A.shape[0], 1)) # Define viewer's position e = np.array([0., 0., -1.]) # Define the orientation of the camera theta = np.radians(np.array([theta_x, theta_y, theta_z])) theta_x, theta_y, theta_z = theta[0], theta[1], theta[2] # Define the rotation matrices Rx = np.array([[1., 0., 0.], [0., np.cos(theta_x), np.sin(theta_x)], [0., -np.sin(theta_x), np.cos(theta_x)]]) Ry = np.array([[np.cos(theta_y), 0., -np.sin(theta_y)], [0., 1., 0. ], [np.sin(theta_y), 0., np.cos(theta_y)]]) Rz = np.array([[np.cos(theta_z), np.sin(theta_z), 0.], [-np.sin(theta_z), np.cos(theta_z), 0.], [0., 0., 1.]]) d = np.dot(Rx, Ry) d = np.dot(d, Rz) d = np.dot(d, (A-C).T) ## TODO: which version is correct ? The one above or the one below ? #d = a - c #d = np.dot(Rz, d) #d = np.dot(Ry, d) #d = np.dot(Rx, d) print("d:", d) b = np.array([e[2]/d[2] * d[0] - e[0], e[2]/d[2] * d[1] - e[1]]) print("b:", b) # Alternative to compute b Rf = np.array([[1., 0., -e[0]/e[2], 0.], [0., 1., -e[1]/e[2], 0.], [0., 0., 1., 0.], [0., 0., 1./e[2], 0.]]) # Add a line of ones d = np.vstack([d, np.ones(d.shape[1])]) f = np.dot(Rf, d) b = np.array([f[0]/f[3], f[1]/f[3]]) print("b:", b) plot2d(b.T, "ok") plot2d(b.T, "-k")
nb_sci_maths/maths_euclidean_geometry_en.ipynb
jdhp-docs/python_notebooks
mit
Resources The Spark Programming Guide Accumulators Spark functions such as map can use variables defined in the driver program, but they make local copies of the variable that are not passed back to the driver program. Accumulators are shared variables that allow the aggregation of results from workers back to the driver program, for example, as an event counter. Suppose we want to count the number of rows of data with missing information. The most efficient way is to use an accumulator.
ulysses = sc.textFile('data/Ulysses.txt') ulysses.take(10)
notebook/18C_Efficiency_In_Spark.ipynb
cliburn/sta-663-2017
mit
Event counting Notice that we have some empty lines. We want to count the number of non-empty lines.
num_lines = sc.accumulator(0) def tokenize(line): table = dict.fromkeys(map(ord, string.punctuation)) return line.translate(table).lower().strip().split() def tokenize_count(line): global num_lines if line: num_lines += 1 return tokenize(line) counter = ulysses.flatMap(lambda line: tokenize_count(line)).countByValue() counter['circle'] num_lines.value
notebook/18C_Efficiency_In_Spark.ipynb
cliburn/sta-663-2017
mit
Broadcast Variables Sometimes we need to send a large read-only variable to all workers. For example, we might want to share a large feature matrix to all workers as a part of a machine learning application. This same variable will be sent separately for each parallel operation unless you use a broadcast variable. Also, the default variable passing mechanism is optimized for small variables and can be slow when the variable is large.
from itertools import count table = dict(zip(string.ascii_letters, count())) def weight_first(line, table): words = tokenize(line) return sum(table.get(word[0], 0) for word in words if word.isalpha()) def weight_last(line, table): words = tokenize(line) return sum(table.get(word[-1], 0) for word in words if word.isalpha())
notebook/18C_Efficiency_In_Spark.ipynb
cliburn/sta-663-2017
mit
The dictionary table is sent out twice to worker nodes, once for each call
ulysses.map(lambda line: weight_first(line, table)).sum() ulysses.map(lambda line: weight_last(line, table)).sum()
notebook/18C_Efficiency_In_Spark.ipynb
cliburn/sta-663-2017
mit
Converting to use broadcast variables is simple and more efficient Use SparkContext.broadcast() to create a broadcast variable Where you would use var, use var.value The broadcast variable is sent once to each node and can be re-used
table_bc = sc.broadcast(table) def weight_first_bc(line, table): words = tokenize(line) return sum(table.value.get(word[0], 0) for word in words if word.isalpha()) def weight_last_bc(line, table): words = tokenize(line) return sum(table.value.get(word[-1], 0) for word in words if word.isalpha())
notebook/18C_Efficiency_In_Spark.ipynb
cliburn/sta-663-2017
mit
table_bc is sent to nodes only once. Although it looks like table_bc is being passed to each function, all that is passed is a path to the table. The worker checks if the path has been cached and uses the cache instead of loading from the path.
ulysses.map(lambda line: weight_first_bc(line, table_bc)).sum() ulysses.map(lambda line: weight_last_bc(line, table_bc)).sum()
notebook/18C_Efficiency_In_Spark.ipynb
cliburn/sta-663-2017
mit
The Spark Shuffle and Partitioning Some events trigger the redistribution of data across partitions, which involves the (expensive) copying of data across executors and machines. This is known as the shuffle. For example, if we do a reduceByKey operation on a key-value pair RDD, Spark needs to collect all pairs with the same key in the same partition to do the reduction. For key-value RDDs, you have some control over the partitioning of the RDDs. In particular, you can ask Spark to partition a set of keys so that they are guaranteed to appear together on some node. This can minimize a lot of data transfer. For example, suppose you have a large key-value RDD consisting of user_name: comments from a web user community. Every night, you want to update with new user comments with a join operation.
def fake_data(n, val): users = list(map(''.join, np.random.choice(list(string.ascii_lowercase), (n,2)))) comments = [val]*n return tuple(zip(users, comments)) data = fake_data(10000, 'a') list(data)[:10] rdd = sc.parallelize(data).reduceByKey(lambda x, y: x+y) new_data = fake_data(1000, 'b') list(new_data)[:10] rdd_new = sc.parallelize(new_data).reduceByKey(lambda x, y: x+y).cache() rdd_updated = rdd.join(rdd_new) rdd_updated.take(10)
notebook/18C_Efficiency_In_Spark.ipynb
cliburn/sta-663-2017
mit
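The co-location requirement can be illustrated without Spark: under hash partitioning, every occurrence of a key lands in the same partition, which is what the shuffle has to arrange before a reduceByKey or join can run. A toy sketch (plain Python, not Spark's actual partitioner):

```python
def partition_for(key, num_partitions):
    # hash-modulo rule: equal keys always map to the same partition
    return hash(key) % num_partitions

pairs = [("ann", 1), ("bob", 2), ("ann", 3), ("cat", 4), ("bob", 5)]
num_partitions = 4

partitions = {i: [] for i in range(num_partitions)}
for key, value in pairs:
    partitions[partition_for(key, num_partitions)].append((key, value))

# every occurrence of a given key ends up in exactly one partition
for i, items in partitions.items():
    print(i, items)
```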
Using partitionBy The join operation will hash all the keys of both rdd and rdd_new, sending keys with the same hashes to the same node for the actual join operation. There is a lot of unnecessary data transfer. Since rdd is a much larger data set than rdd_new, we can instead fix the partitioning of rdd and just transfer the keys of rdd_new. This is done by rdd.partitionBy(numPartitions) where numPartitions should be at least twice the number of cores.
rdd2 = sc.parallelize(data).reduceByKey(lambda x, y: x+y) rdd2 = rdd2.partitionBy(10).cache() rdd2_updated = rdd2.join(rdd_new) rdd2_updated.take(10)
notebook/18C_Efficiency_In_Spark.ipynb
cliburn/sta-663-2017
mit
Piping to External Programs Suppose it is more convenient or efficient to write a function in some other language to process data. We can pipe data from Spark to the external program (script) that performs the calculation via standard input and output. The example below shows using a C++ program to calculate the sum of squares for collections of numbers.
%%file foo.cpp #include <iostream> #include <sstream> #include <string> #include <numeric> #include <vector> using namespace std; double sum_squares(double x, double y) { return x + y*y; }; int main() { string s; while (cin) { getline(cin, s); stringstream stream(s); vector<double> v; while(1) { double u; stream >> u; if(!stream) break; v.push_back(u); } if (v.size()) { double x = accumulate(v.begin(), v.end(), 0.0, sum_squares); cout << x << endl; } } } ! g++ foo.cpp -o foo xs = np.random.random((10, 3)) np.savetxt('numbers.txt', xs)
notebook/18C_Efficiency_In_Spark.ipynb
cliburn/sta-663-2017
mit
Feed data via re-direction
%%bash ./foo < numbers.txt
notebook/18C_Efficiency_In_Spark.ipynb
cliburn/sta-663-2017
mit
Feed data via piping
%%bash cat numbers.txt | ./foo !head numbers.txt rdd = sc.textFile('numbers.txt') from pyspark import SparkFiles def prepare(line): """Each line contains numbers separated by a space.""" return ' '.join(line.split()) + '\n' # pipe data to external function func = './foo' sc.addFile(func) ss = rdd.map(lambda s: prepare(s)).pipe(SparkFiles.get(func)) np.array(ss.collect(), dtype='float') np.sum(xs**2, 1)
notebook/18C_Efficiency_In_Spark.ipynb
cliburn/sta-663-2017
mit
Version
%load_ext version_information %version_information pyspark, numpy
notebook/18C_Efficiency_In_Spark.ipynb
cliburn/sta-663-2017
mit
The iris data set is labeled into three classes; there are four measurements that were taken for each iris. While we will not be using the labeled data for clustering, it does provide us a convenient comparison and visualization of the data as it was provided. In this example I use Seaborn for the visualization of the data. <sub><sup>(As an aside, the approach taken here of using all the data for visualization does not scale to large datasets. One approach to dealing with large data sets is to sample the data in H2O and then transfer the sample of data to the Python environment for plotting).</sup></sub>
try: imp.find_module('pandas') can_pandas = True import pandas as pd except: can_pandas = False try: imp.find_module('seaborn') can_seaborn = True import seaborn as sns except: can_seaborn = False %matplotlib inline if can_seaborn: sns.set() if can_seaborn: sns.set_context("notebook") sns.pairplot(iris.as_data_frame(), vars=["sepal_len", "sepal_wid", "petal_len", "petal_wid"], hue="class");
h2o-py/demos/kmeans_aic_bic_diagnostics.ipynb
pchmieli/h2o-3
apache-2.0
Finally, I will plot the predicted cluster membership using the same layout as on the original data earlier in the notebook.
if can_seaborn: sns.pairplot(iris.as_data_frame(), vars=["sepal_len", "sepal_wid", "petal_len", "petal_wid"], hue="Predicted");
h2o-py/demos/kmeans_aic_bic_diagnostics.ipynb
pchmieli/h2o-3
apache-2.0
Data
from quantopian.pipeline.data.builtin import USEquityPricing
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/06-Pipelines.ipynb
arcyfelix/Courses
apache-2.0
Factors Remember, Factors take in an asset and a timestamp and return some numerical value.
from quantopian.pipeline.factors import BollingerBands,SimpleMovingAverage,EWMA SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 30) def make_pipeline(): mean_close_30 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 30) return Pipeline(columns = { '30 Day Mean Close':mean_close_30 }) results = run_pipeline(make_pipeline(), '2017-01-01', '2017-01-01') results.head(20) def make_pipeline(): mean_close_30 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 30) latest_close = USEquityPricing.close.latest return Pipeline(columns = { '30 Day Mean Close':mean_close_30, 'Latest Close':latest_close }) results = run_pipeline(make_pipeline(), '2017-01-01', '2017-01-01') results.head(10)
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/06-Pipelines.ipynb
arcyfelix/Courses
apache-2.0
Combining Factors
def make_pipeline(): mean_close_10 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 10) mean_close_30 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 30) latest_close = USEquityPricing.close.latest percent_difference = (mean_close_10-mean_close_30) / mean_close_30 return Pipeline(columns = { 'Percent Difference':percent_difference, '30 Day Mean Close':mean_close_30, 'Latest Close':latest_close }) results = run_pipeline(make_pipeline(), '2017-01-01', '2017-01-01') results.head()
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/06-Pipelines.ipynb
arcyfelix/Courses
apache-2.0
Filters and Screens Filters take in an asset and a timestamp and return a boolean
last_close_price = USEquityPricing.close.latest close_price_filter = last_close_price > 20 close_price_filter def make_pipeline(): mean_close_10 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 10) mean_close_30 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 30) latest_close = USEquityPricing.close.latest percent_difference = (mean_close_10-mean_close_30) / mean_close_30 perc_diff_check = percent_difference > 0 return Pipeline(columns = { 'Percent Difference':percent_difference, '30 Day Mean Close':mean_close_30, 'Latest Close':latest_close, 'Positive Percent Diff': perc_diff_check }) results = run_pipeline(make_pipeline(), '2017-01-01', '2017-01-01') results.head()
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/06-Pipelines.ipynb
arcyfelix/Courses
apache-2.0
Screens
def make_pipeline(): mean_close_10 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 10) mean_close_30 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 30) latest_close = USEquityPricing.close.latest percent_difference = (mean_close_10-mean_close_30) / mean_close_30 perc_diff_check = percent_difference > 0 return Pipeline(columns = { 'Percent Difference':percent_difference, '30 Day Mean Close':mean_close_30, 'Latest Close':latest_close, 'Positive Percent Diff': perc_diff_check}, screen=perc_diff_check) results = run_pipeline(make_pipeline(), '2017-01-01', '2017-01-01') results.head()
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/06-Pipelines.ipynb
arcyfelix/Courses
apache-2.0
Reverse a screen
def make_pipeline(): mean_close_10 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 10) mean_close_30 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 30) latest_close = USEquityPricing.close.latest percent_difference = (mean_close_10-mean_close_30) / mean_close_30 perc_diff_check = percent_difference > 0 return Pipeline(columns = { 'Percent Difference':percent_difference, '30 Day Mean Close':mean_close_30, 'Latest Close':latest_close, 'Positive Percent Diff': perc_diff_check}, screen = ~perc_diff_check) results = run_pipeline(make_pipeline(), '2017-01-01', '2017-01-01') results.head()
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/06-Pipelines.ipynb
arcyfelix/Courses
apache-2.0
Combine Filters
def make_pipeline(): mean_close_10 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 10) mean_close_30 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 30) latest_close = USEquityPricing.close.latest percent_difference = (mean_close_10-mean_close_30) / mean_close_30 perc_diff_check = percent_difference > 0 small_price = latest_close < 5 final_filter = perc_diff_check & small_price return Pipeline(columns = { 'Percent Difference':percent_difference, '30 Day Mean Close':mean_close_30, 'Latest Close':latest_close, 'Positive Percent Diff': perc_diff_check}, screen = final_filter) results = run_pipeline(make_pipeline(), '2017-01-01', '2017-01-01') results.head()
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/06-Pipelines.ipynb
arcyfelix/Courses
apache-2.0
Masking Sometimes we want to ignore certain assets when computing pipeline expressions. There are two common cases where ignoring assets is useful: * We want to compute an expression that's computationally expensive, and we know we only care about results for certain assets. * We want to compute an expression that performs comparisons between assets, but we only want those comparisons to be performed against a subset of all assets.
def make_pipeline(): # Create Filters for Masks First latest_close = USEquityPricing.close.latest small_price = latest_close < 5 # Pass in the mask mean_close_10 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 10, mask = small_price) mean_close_30 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 30, mask = small_price) percent_difference = (mean_close_10-mean_close_30) / mean_close_30 perc_diff_check = percent_difference > 0 final_filter = perc_diff_check return Pipeline(columns = { 'Percent Difference':percent_difference, '30 Day Mean Close':mean_close_30, 'Latest Close':latest_close, 'Positive Percent Diff': perc_diff_check}, screen = final_filter) results = run_pipeline(make_pipeline(), '2017-01-01', '2017-01-01') results.head() len(results)
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/06-Pipelines.ipynb
arcyfelix/Courses
apache-2.0
Classifiers A classifier is a function from an asset and a moment in time to a categorical output such as a string or integer label.
from quantopian.pipeline.data import morningstar from quantopian.pipeline.classifiers.morningstar import Sector morningstar_sector = Sector() exchange = morningstar.share_class_reference.exchange_id.latest exchange
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/06-Pipelines.ipynb
arcyfelix/Courses
apache-2.0
Classifier Methods eq (equals) isnull startswith
nyse_filter = exchange.eq('NYS') def make_pipeline(): # Create Filters for Masks First latest_close = USEquityPricing.close.latest small_price = latest_close < 5 # Classifier nyse_filter = exchange.eq('NYS') # Pass in the mask mean_close_10 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 10, mask = small_price) mean_close_30 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 30, mask = small_price) percent_difference = (mean_close_10-mean_close_30) / mean_close_30 perc_diff_check = percent_difference > 0 final_filter = perc_diff_check & nyse_filter return Pipeline(columns = { 'Percent Difference':percent_difference, '30 Day Mean Close':mean_close_30, 'Latest Close':latest_close, 'Positive Percent Diff': perc_diff_check}, screen=final_filter) results = run_pipeline(make_pipeline(), '2017-01-01', '2017-01-01') results.head() len(results)
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/06-Pipelines.ipynb
arcyfelix/Courses
apache-2.0
Pipelines in Quantopian IDE
from quantopian.pipeline import Pipeline from quantopian.algorithm import attach_pipeline, pipeline_output def initialize(context): my_pipe = make_pipeline() attach_pipeline(my_pipe, 'my_pipeline') def make_pipeline(): return Pipeline() def before_trading_start(context, data): # Store our pipeline output DataFrame in context. context.output = pipeline_output('my_pipeline')
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/06-Pipelines.ipynb
arcyfelix/Courses
apache-2.0
function declarations
def get_geodataframe(queryset, modification=None, crs={'+init':'epsg:31254'}): query = queryset.query.sql_with_params() if modification: query = (modification, query[1]) return geopandas.read_postgis(query[0], connection, geom_col='geometry', params=query[1], index_col='id', crs=crs)
Compute location distance error.ipynb
jacobdein/alpine-soundscapes
mit
load locations from database
generated = get_geodataframe(SampleLocation.objects.all()) actual = get_geodataframe(Site.objects.filter(id__lte=30))
Compute location distance error.ipynb
jacobdein/alpine-soundscapes
mit
loop through locations and compute distance
distance_array = numpy.zeros(30) distances = pandas.DataFrame({'id': generated.index, 'name': actual.sort_index().name, 'distance': distance_array}).set_index('id') for i in range(1, 31): x1 = generated[generated.index == i].geometry.as_matrix()[0].coords.xy[0][0] x2 = actual[actual.index == i].geometry.as_matrix()[0].coords.xy[0][0] y1 = generated[generated.index == i].geometry.as_matrix()[0].coords.xy[1][0] y2 = actual[actual.index == i].geometry.as_matrix()[0].coords.xy[1][0] distance_array[i - 1] = numpy.sqrt((x2 - x1)**2 + (y2 - y1)**2) distances['distance'] = distance_array distances
Compute location distance error.ipynb
jacobdein/alpine-soundscapes
mit
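The per-point loop above can also be vectorized with NumPy; a sketch on made-up coordinate arrays (the real coordinates come from the GeoDataFrames, which aren't reproduced here):

```python
import numpy as np

# hypothetical (x, y) coordinates for the generated and actual locations
generated_xy = np.array([[0., 0.], [3., 4.], [1., 1.]])
actual_xy    = np.array([[0., 0.], [0., 0.], [1., 2.]])

# Euclidean distances for all rows at once
distances = np.sqrt(((generated_xy - actual_xy) ** 2).sum(axis=1))
print(distances)  # [0. 5. 1.]
```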
compute the distance mean and standard deviation
distances.distance.mean().round(0) distances.distance.std().round(0)
Compute location distance error.ipynb
jacobdein/alpine-soundscapes
mit
Examine a single patient
patientunitstayid = 242040 query = query_schema + """ select * from treatment where patientunitstayid = {} order by treatmentoffset """.format(patientunitstayid) df = pd.read_sql_query(query, con) df.head() plt.figure() treatments = df['treatmentstring'].unique() for i, t in enumerate(treatments): idx = df['treatmentstring'] == t df_plt = df.loc[idx, :] plt.plot( df_plt['treatmentoffset'], i*np.ones(df_plt.shape[0]), 'o', label=t) plt.xlabel('Minutes since ICU admission') plt.ylabel('Treatments provided') plt.yticks(np.arange(len(treatments)), treatments) plt.show()
notebooks/treatment.ipynb
mit-eicu/eicu-code
mit
Above we can see that most of these treatments were documented as being given twice (or perhaps continuously given over the duration - the interpretation is not clear). Only milrinone and cardioversion were given once around 620 minutes after ICU admission. Hospitals with data available
query = query_schema + """ with t as ( select distinct patientunitstayid from treatment ) select pt.hospitalid , count(distinct pt.patientunitstayid) as number_of_patients , count(distinct t.patientunitstayid) as number_of_patients_with_tbl from patient pt left join t on pt.patientunitstayid = t.patientunitstayid group by pt.hospitalid """ df = pd.read_sql_query(query, con) df['data completion'] = df['number_of_patients_with_tbl'] / df['number_of_patients'] * 100.0 df.sort_values('number_of_patients_with_tbl', ascending=False, inplace=True) df.head(n=10) df[['data completion']].vgplot.hist(bins=10, var_name='Number of hospitals', value_name='Percent of patients with data')
notebooks/treatment.ipynb
mit-eicu/eicu-code
mit
Assuming that the Neurokernel source has been cloned to ~/neurokernel, we first consider the two csv files that specify neurons and synapses in the lamina model.
%cd -q ~/neurokernel/examples/vision/data
notebooks/vision.ipynb
neurokernel/vision
bsd-3-clause
The neuron model types and their parameter values are specified in the file neuron_types_lamina.csv. The entire list of neurons is shown below. Note that for convenience, the LPU contains neurons from both the retina and lamina. Descriptions of each of the columns follow: name - Name of neuron morphological type. model - Specifies the name of the class implementing the neuron model. columnar - A value of 1 in this column indicates that the neuron/element is a columnar neuron/element; a number greater than 1 indicates the total number of non-columnar neurons of the specified name in the lamina. dummy - Used to indicate alpha-profiles. input - Indicates whether the neuron is an input axon from another LPU. output - Indicates whether the neuron is an output neuron. extern - Specifies if the neuron receives external input, e.g., a video signal. public - Specifies if the neuron may communicate with other LPUs via Neurokernel's API. The remaining columns in the table are neuron model parameters.
import pandas as pd pd.set_option('display.max_columns', 20) mpl.rcParams['figure.figsize'] = (17, 7) neuron_data = pd.read_csv("./neuron_types_lamina.csv") neuron_data = neuron_data.dropna(axis=1) neuron_data.head(n=100)
notebooks/vision.ipynb
neurokernel/vision
bsd-3-clause