Trigonometric functions Q1. Calculate sine, cosine, and tangent of x, element-wise.
x = np.array([0., 1., 30, 90])
print("sine:", np.sin(x))
print("cosine:", np.cos(x))
print("tangent:", np.tan(x))
numpy/numpy_exercises_from_kyubyong/Mathematical_functions_solutions.ipynb
mohanprasath/Course-Work
gpl-3.0
Q2. Calculate inverse sine, inverse cosine, and inverse tangent of x, element-wise.
x = np.array([-1., 0, 1.])
print("inverse sine:", np.arcsin(x))
print("inverse cosine:", np.arccos(x))
print("inverse tangent:", np.arctan(x))
Q3. Convert angles from radians to degrees.
x = np.array([-np.pi, -np.pi/2, np.pi/2, np.pi])
out1 = np.degrees(x)
out2 = np.rad2deg(x)
assert np.array_equiv(out1, out2)
print(out1)
Q4. Convert angles from degrees to radians.
x = np.array([-180., -90., 90., 180.])
out1 = np.radians(x)
out2 = np.deg2rad(x)
assert np.array_equiv(out1, out2)
print(out1)
Hyperbolic functions Q5. Calculate hyperbolic sine, hyperbolic cosine, and hyperbolic tangent of x, element-wise.
x = np.array([-1., 0, 1.])
print(np.sinh(x))
print(np.cosh(x))
print(np.tanh(x))
Rounding Q6. Predict the results of these, paying attention to the differences among this family of functions.
x = np.array([2.1, 1.5, 2.5, 2.9, -2.1, -2.5, -2.9])
out1 = np.around(x)
out2 = np.floor(x)
out3 = np.ceil(x)
out4 = np.trunc(x)
out5 = [round(elem) for elem in x]
print(out1)
print(out2)
print(out3)
print(out4)
print(out5)
Q7. Implement out5 in the above question using numpy.
print(np.floor(np.abs(x) + 0.5) * np.sign(x))
# Read http://numpy-discussion.10968.n7.nabble.com/why-numpy-round-get-a-different-result-from-python-round-function-td19098.html
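The difference Q7 alludes to can be seen directly: np.around rounds halves to the nearest even value ("banker's rounding"), while the expression above rounds half away from zero, matching Python 2's round(). A minimal check:

```python
import numpy as np

x = np.array([2.5, -2.5, 1.5])

# np.around rounds halves to the nearest even value
banker = np.around(x)                            # [ 2., -2.,  2.]

# Rounding half away from zero, as in the exercise
away = np.floor(np.abs(x) + 0.5) * np.sign(x)    # [ 3., -3.,  2.]
```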
Sums, products, differences Q8. Predict the results of these.
x = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])
outs = [np.sum(x), np.sum(x, axis=0), np.sum(x, axis=1, keepdims=True), "",
        np.prod(x), np.prod(x, axis=0), np.prod(x, axis=1, keepdims=True), "",
        np.cumsum(x), np.cumsum(x, axis=0), np....
Q9. Calculate the difference between neighboring elements, element-wise.
x = np.array([1, 2, 4, 7, 0])
print(np.diff(x))
Q10. Calculate the difference between neighboring elements, element-wise, and prepend [0, 0] and append [100] to it.
x = np.array([1, 2, 4, 7, 0])
out1 = np.ediff1d(x, to_begin=[0, 0], to_end=[100])
out2 = np.insert(np.append(np.diff(x), 100), 0, [0, 0])
assert np.array_equiv(out1, out2)
print(out2)
Q11. Return the cross product of x and y.
x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
print(np.cross(x, y))
Exponents and logarithms Q12. Compute $e^x$, element-wise.
x = np.array([1., 2., 3.], np.float32)
out = np.exp(x)
print(out)
Q13. Calculate exp(x) - 1 for all elements in x.
x = np.array([1., 2., 3.], np.float32)
out1 = np.expm1(x)
out2 = np.exp(x) - 1.
assert np.allclose(out1, out2)
print(out1)
Q14. Calculate $2^p$ for all p in x.
x = np.array([1., 2., 3.], np.float32)
out1 = np.exp2(x)
out2 = 2 ** x
assert np.allclose(out1, out2)
print(out1)
Q15. Compute natural, base 10, and base 2 logarithms of x element-wise.
x = np.array([1, np.e, np.e**2])
print("natural log =", np.log(x))
print("common log =", np.log10(x))
print("base 2 log =", np.log2(x))
Q16. Compute the natural logarithm of one plus each element in x in floating-point accuracy.
x = np.array([1e-99, 1e-100])
print(np.log1p(x))
# Compare it with np.log(1 + x)
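Why log1p matters here: for x near zero, 1 + x rounds to exactly 1.0 in float64, so the naive form loses all information while np.log1p keeps it. A minimal comparison:

```python
import numpy as np

tiny = np.array([1e-99, 1e-100])

# log1p evaluates log(1 + x) without forming 1 + x, so tiny values survive
print(np.log1p(tiny))     # ~[1e-99, 1e-100]

# The naive form collapses: 1 + 1e-99 == 1.0 in float64, so log gives 0
print(np.log(1 + tiny))   # [0., 0.]
```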
Floating point routines Q17. Return element-wise True where signbit is set.
x = np.array([-3, -2, -1, 0, 1, 2, 3])
out1 = np.signbit(x)
out2 = x < 0
assert np.array_equiv(out1, out2)
print(out1)
Q18. Change the sign of x to that of y, element-wise.
x = np.array([-1, 0, 1])
y = -1.1
print(np.copysign(x, y))
Arithmetic operations Q19. Add x and y element-wise.
x = np.array([1, 2, 3])
y = np.array([-1, -2, -3])
out1 = np.add(x, y)
out2 = x + y
assert np.array_equal(out1, out2)
print(out1)
Q20. Subtract y from x element-wise.
x = np.array([3, 4, 5])
y = np.array(3)
out1 = np.subtract(x, y)
out2 = x - y
assert np.array_equal(out1, out2)
print(out1)
Q21. Multiply x by y element-wise.
x = np.array([3, 4, 5])
y = np.array([1, 0, -1])
out1 = np.multiply(x, y)
out2 = x * y
assert np.array_equal(out1, out2)
print(out1)
Q22. Divide x by y element-wise in two different ways.
x = np.array([3., 4., 5.])
y = np.array([1., 2., 3.])
out1 = np.true_divide(x, y)
out2 = x / y
assert np.array_equal(out1, out2)
print(out1)
out3 = np.floor_divide(x, y)
out4 = x // y
assert np.array_equal(out3, out4)
print(out3)
# Note that in Python 2 and 3, the handling of `divide` differs.
# See https://docs.scip...
Q23. Compute numerical negative value of x, element-wise.
x = np.array([1, -1])
out1 = np.negative(x)
out2 = -x
assert np.array_equal(out1, out2)
print(out1)
Q24. Compute the reciprocal of x, element-wise.
x = np.array([1., 2., .2])
out1 = np.reciprocal(x)
out2 = 1/x
assert np.array_equal(out1, out2)
print(out1)
Q25. Compute $x^y$, element-wise.
x = np.array([[1, 2], [3, 4]])
y = np.array([[1, 2], [1, 2]])
out = np.power(x, y)
print(out)
Q26. Compute the remainder of x / y element-wise in two different ways.
x = np.array([-3, -2, -1, 1, 2, 3])
y = 2
out1 = np.mod(x, y)
out2 = x % y
assert np.array_equal(out1, out2)
print(out1)
out3 = np.fmod(x, y)
print(out3)
Miscellaneous Q27. If an element of x is smaller than 3, replace it with 3; if an element of x is bigger than 7, replace it with 7.
x = np.arange(10)
out1 = np.clip(x, 3, 7)
out2 = np.copy(x)
out2[out2 < 3] = 3
out2[out2 > 7] = 7
assert np.array_equiv(out1, out2)
print(out1)
Q28. Compute the square of x, element-wise.
x = np.array([1, 2, -1])
out1 = np.square(x)
out2 = x * x
assert np.array_equal(out1, out2)
print(out1)
Q29. Compute square root of x element-wise.
x = np.array([1., 4., 9.])
out = np.sqrt(x)
print(out)
Q30. Compute the absolute value of x.
x = np.array([[1, -1], [3, -3]])
out = np.abs(x)
print(out)
Q31. Compute an element-wise indication of the sign of x.
x = np.array([1, 3, 0, -1, -3])
out1 = np.sign(x)
out2 = np.copy(x)
out2[out2 > 0] = 1
out2[out2 < 0] = -1
assert np.array_equal(out1, out2)
print(out1)
Use Dictionary Comprehension
# Display all dictionary entries where the key doesn't start with 'Chris'
{key: Officers[key] for key in Officers if not key.startswith('Chris')}
python/iterating_over_dictionary_keys.ipynb
tpin3694/tpin3694.github.io
mit
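A self-contained version of the same filtering pattern, with a hypothetical Officers dict invented here for illustration:

```python
# Hypothetical officers dict for illustration
Officers = {'Chris Pratt': 'Captain', 'Chris Pine': 'Captain',
            'Zoe Saldana': 'Lieutenant', 'Karen Gillan': 'Ensign'}

# Keep only entries whose key doesn't start with 'Chris'
non_chris = {key: Officers[key] for key in Officers
             if not key.startswith('Chris')}
print(non_chris)   # {'Zoe Saldana': 'Lieutenant', 'Karen Gillan': 'Ensign'}
```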
Channel library setup
cl = ChannelLibrary(":memory:")
pl = PipelineManager()
q1 = cl.new_qubit("q1")
aps2_1 = cl.new_APS2("BBNAPSa", address="192.168.2.4", trigger_interval=200e-6)
aps2_2 = cl.new_APS2("BBNAPSb", address="192.168.2.2")
dig_1 = cl.new_Alazar("Alazar_1", address="1", sampling_rate=500e6, record_length=1024)
h1 = cl.new_sou...
doc/examples/Example-Sweeps.ipynb
BBN-Q/Auspex
apache-2.0
Pipeline setup. Note that we use the buffers keyword argument to automatically generate buffers instead of writers. This is sometimes convenient if you don't require data to be written to file; it becomes immediately available in the notebook after running!
pl.create_default_pipeline(buffers=True)
pl["q1"].add(Display(label="blub"))
pl["q1"]["Demodulate"]["Integrate"].add(Display(label="int", plot_dims=1))
pl.show_pipeline()
Adding experiment sweeps Once a QubitExperiment has been created, we can programmatically add sweeps as shown here.
lengths = np.linspace(20e-9, 2020e-9, 31)
exp = QubitExperiment(RabiWidth(q1, lengths), averages=50)
exp.set_fake_data(dig_1, np.exp(-lengths/1e-6)*np.cos(1e7*lengths))
# exp.add_qubit_sweep(q1, "measure", "frequency", np.linspace(6.512e9, 6.522e9, 11))
exp.run_sweeps()
We fetch the data and the data descriptor directly from the buffer. The data is automatically reshaped to match the experiment axes, and the descriptor enumerates all of the values of these axes for convenient plotting and analysis.
data, descriptor = exp.outputs_by_qubit["q1"][0].get_data()
descriptor.axes
data.shape
We even include a convenience extent function conforming to the infinitely forgettable matplotlib format.
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(np.real(data), aspect='auto', extent=descriptor.extent())
plt.xlabel("Delay (µs)")
plt.ylabel("Frequency (GHz)");
Adding Multiple Sweeps An arbitrary number of sweeps can be added. For example:
exp = QubitExperiment(RabiWidth(q1, lengths), averages=50)
exp.add_qubit_sweep(q1, "measure", "frequency", np.linspace(6.512e9, 6.522e9, 5))
exp.add_qubit_sweep(q1, "measure", "amplitude", np.linspace(0.0, 1.0, 21))
If we inspect the internal representation of the "output connector" into which the instrument driver will dump its data, we can see all of the axes it contains.
exp.output_connectors["q1"].descriptor.axes
The Dataset The data from UCLA (found at http://www.ats.ucla.edu/stat/data/binary.csv and originally used in this example: http://www.ats.ucla.edu/stat/r/dae/logit.htm) contains 4 columns: * admit - a binary variable describing if the student was admitted into grad school or not * gre - the student's Graduate Record Ex...
# As we import the data, we rename the "Rank" column to "Prestige"
# to avoid confusion with the rank method of pandas
df = pd.read_csv("binary.csv", header=0, names=["Admit", "GRE", "GPA", "Prestige"])
df.head()
Logistic Regression.ipynb
rayjustinhuang/DataAnalysisandMachineLearning
mit
Initial Exploratory Data Analysis We take a look at basic summary statistics, a cross-tabulation, and a histogram to get a general idea of the contents of the data.
# Basic summary of the data
df.describe()

# Generate a cross-tabulation (frequency table by default) of the factors; here we use prestige
pd.crosstab(df['Admit'], df['Prestige'], rownames=['Admission'])
Based on the cross-tabulation above, it appears that prestige is a significant factor in admission, with those in schools of rank 1 having more admits than not, and those from schools of rank 4 being largely rejected.
# Generate histograms
sns.set_color_codes('muted')
df.hist(color='g')
plt.show()
Preprocessing the Data While the data is already very analysis-friendly, we still have to change the categorical variable (prestige) into binary ones to be able to create a logistic regression model.
# Dummy code the rank variable
dummy_ranks = pd.get_dummies(df['Prestige'], prefix="Prestige")
dummy_ranks.head()
Given that prestige is a categorical value, we perform dummy coding to convert the values into binary variables.
columns1 = ['Admit', 'GRE', 'GPA']
data1 = df[columns1]
columns2 = ['Prestige_1', 'Prestige_2', 'Prestige_3']
data2 = dummy_ranks[columns2]
data = pd.merge(data1, data2, how="outer", left_index=True, right_index=True)
data
Logistic Regression We will use logistic regression to predict the probability that a particular student gets into grad school.
# Separate independent and dependent variables
X = data.iloc[:, 1:]
y = data['Admit']

# Create a logistic regression model
initial = LogisticRegression(C=1000, random_state=0)
initial.fit(X, y)

# Check model accuracy
print("Accuracy Score:", initial.score(X, y))

# What percentage of students actually got into grad scho...
If you were guessing "no," you would be right around 68.25% of the time; our model is about 2.5 percentage points more accurate than that baseline. It is also significantly better than random guessing: about 20.75 points better than a 50/50 guess.
# View coefficients
column_names = list(X.columns)
coefficients = np.transpose(initial.coef_)
intercept = initial.intercept_
Coeffs = pd.DataFrame(coefficients, column_names, columns=['Coefficients'])
pd.concat([Coeffs, pd.DataFrame(intercept, ['Intercept'], columns=['Coefficients'])])
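The baseline comparison can be checked with simple arithmetic; a sketch assuming the class counts implied by the 68.25% always-"no" figure quoted in the text (273 rejections out of the 400 rows):

```python
# Class counts implied by the 68.25% baseline (assumed for illustration)
n_total = 400
n_rejected = 273

# Majority-class baseline accuracy: always predict "not admitted"
baseline = n_rejected / n_total
print(baseline)            # 0.6825

# A model scoring ~70.75% beats this baseline by ~2.5 points
print(0.7075 - baseline)   # ~0.025
```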
The coefficients above are telling of the value of the data in the dataset. Every additional point in a candidate's GRE score raises the log-odds of admission by 0.002; every unit increase in GPA raises them by 0.803. The prestige coefficients are interpreted as showing that being from a school of ...
# Split data into training and test sets, using 30% of the data as the test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit the logistic regression with lambda = 10^-3
lr = LogisticRegression(C=1000, random_state=0)
lr.fit(X_train, y_train)

# View predictions p...
Model Evaluation We now evaluate our logistic regression using some common metrics for assessing model quality.
# Check accuracy
print("Accuracy Score:", metrics.accuracy_score(y_test, predicted))
The accuracy score here is slightly (around 0.083%) better than the optimized logistic regression without the training/test split. Using a well-chosen (completely random) subset of the data, we were able to create a model whose accuracy actually exceeded that of the model created using all of the data. Performance Visu...
# Print confusion matrix and classification report
print("Confusion Matrix:\n", metrics.confusion_matrix(y_test, predicted))
print("\nClassification Report:\n", metrics.classification_report(y_test, predicted))
The confusion matrix shows that out of 82 non-admits, our model got 77 of those right, while 5 of those were false positives. This very good hit rate for 0's is reflected in the high recall of 0.94 for 0's in the classification report. However, the performance of the model is not as good at predicting admits, with onl...
fpr, tpr, thresholds = metrics.roc_curve(y_test, probabilities[:,1])
results = pd.DataFrame({'False Positive Rate': fpr, 'True Positive Rate': tpr})
plt.plot(fpr, tpr, color='g', label="Model")
plt.plot([0, 1], [0, 1], color='gray', linestyle='--', label="Baseline (Random Guessing)")
plt.xlabel('False Positive Rate') ...
As the plot above shows, while our logistic regression model is not really that good -- the area under the curve is calculated to be 0.6784 -- in accordance with the results earlier, it still does better than random guessing. Also note that, in alignment with the 0.5 threshold used by scikit-learn by default, our true...
fivefold = cross_val_score(lr, X, y, scoring='accuracy', cv=5)
print("Score per fold:", fivefold)
print("Mean score:", fivefold.mean())
print("Standard deviation:", fivefold.std())
Using five-fold cross-validation on our current model results in a similar accuracy score as the one previously derived, which shows that the model we arrived at earlier is not biased toward the training set and will likely generalize well to new data. Improving the Model We will attempt to improve our model by using a...
from sklearn import preprocessing

# Isolate columns to scale
toscale = X[['GRE', 'GPA']].astype(float)
scaledX = preprocessing.scale(toscale)
scaleddata = pd.DataFrame(scaledX, columns=['GRE', 'GPA'])

# Join scaled data with categorical rank columns
scaledX = scaleddata.join(data2)
scaledX.head()

improve1 = cross_val_...
The accuracy of our model does not change, but the standard deviation improves a little. This means that our improved model should provide slightly better, or at the very least, more consistent performance. Correcting for Class Imbalance Based on our confusion matrix, the model appeared to be quick to assign values of ...
lrweighted = LogisticRegression(C=1000, random_state=0, class_weight={0: 0.505, 1: 0.495})
improve2 = cross_val_score(lrweighted, scaledX, y, scoring='accuracy', cv=5)
print("Score per fold:", improve2)
print("Mean score:", improve2.mean())
print("Standard deviation:", improve2.std())
Our mean score shows a slight improvement. Our standard deviation is slightly higher than that of the previous, feature-scaled model, but it is still lower than our original model. Hyperparameter Tuning We will check if results change based on the $\lambda$ parameter used in regularization. Note that in scikit-learn's ...
tens = [10**i for i in range(-5, 6)]
for i in tens:
    if i == 1000:
        continue
    testlr = LogisticRegression(C=i, random_state=0, class_weight={0: 0.505, 1: 0.495})
    testcrossval = cross_val_score(testlr, scaledX, y, scoring='accuracy', cv=5)
    print('For C = {}:'.format(i))
    print('    Score per fold:', ...
Given that $C = \frac{1}{\lambda}$, it makes sense that, after a certain value of $C$ (in this case, 100), the model no longer improves because the penalty to the logistic regression objective function is minimal. As such, we will not need to change our current $C$ value. Testing the Revised Model We now check our new ...
# Create new train and test sets and fit our revised model to it
X_train2, X_test2, y_train2, y_test2 = train_test_split(scaledX, y, test_size=0.3, random_state=0)
newlr = LogisticRegression(C=1000, random_state=0, class_weight={0: 0.505, 1: 0.495})
newlr.fit(X_train2, y_train2)

# Check for metrics on the new pred...
In alignment with our expectations based on our model tuning, all metrics have shown an improvement over our original model. In comparison with our original model, the predictions for non-admits are the same, and we now have two more correctly classified admits than in the previous model, which is obviously an improvem...
# Plot a new ROC curve for the revised model
fpr2, tpr2, thresholds2 = metrics.roc_curve(y_test2, newprobabilities[:,1])
results2 = pd.DataFrame({'False Positive Rate': fpr2, 'True Positive Rate': tpr2})
plt.plot(fpr, tpr, color='darkgray', label="Original Model")
plt.plot(fpr2, tpr2, color='g', label="Revised Model")
p...
2D transformations
def plot2d(x, fmt="ok"):
    plt.axis('equal')
    plt.axis([-5, 5, -5, 5])
    plt.xticks(np.arange(-5, 5, 1))
    plt.yticks(np.arange(-5, 5, 1))
    plt.axhline(y=0, color='k')
    plt.axvline(x=0, color='k')
    plt.plot(x[:,0], x[:,1], fmt)
    plt.grid()
nb_sci_maths/maths_euclidean_geometry_en.ipynb
jdhp-docs/python_notebooks
mit
Rotation around the origin $$ R(\theta) = \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix} $$ <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/6/64/Mog_rotacion_vector.jpg/263px-Mog_rotacion_vector.jpg"></img> $$ \begin{pmatrix} A'_{x} \\ A'_{y} \end{pmatri...
# Define initial points
A = np.array([[0., 0.],
              [1., 0.],
              [1., 1.],
              [0., 0.]])

# Define the rotation angle
theta = np.radians(30)

# Define the rotation matrix
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Rotate points
Aprim...
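As a sanity check of the rotation matrix $R(\theta)$ above: rotating the unit x vector by 90° should land exactly on the unit y vector. A minimal numpy sketch:

```python
import numpy as np

theta = np.radians(90)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Rotating (1, 0) by 90 degrees counter-clockwise gives (0, 1)
v = np.array([1., 0.])
print(R @ v)   # ~[0., 1.]
```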
3D transformations
def plot3d(x, axis=None, fmt="ok"):
    if axis is None:
        fig = plt.figure()
        axis = axes3d.Axes3D(fig)
    axis.scatter(x[:,0], x[:,1], x[:,2], fmt)
    axis.plot(x[:,0], x[:,1], x[:,2], fmt)
Rotation around the x axis $$ R_{\mathbf{x}}(\theta) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos \theta & -\sin \theta \\ 0 & \sin \theta & \cos \theta \end{pmatrix} $$ See: https://fr.wikipedia.org/wiki/Matrice_de_rotation Rotation around the y axis $$ R_{\mathbf{y}}(\theta) = \begin{pmatrix} \cos ...
# Define initial points
A = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [0., 0., 0.],
              [0., 1., 0.],
              [0., 0., 0.],
              [0., 0., 1.]])

# Define the rotation angle
theta = np.radians(90)

# Define the rotation matrices
Rx = np.array([[1., 0., 0.],
              ...
Rotation around a given axis Rotation around the axis defined by the unit vector $\overrightarrow{u} \begin{pmatrix} u_{x} \\ u_{y} \\ u_{z} \end{pmatrix}$ (i.e. with $u_{x}^{2}+u_{y}^{2}+u_{z}^{2}=1$) by an angle $\theta$. $$ R = \begin{pmatrix} u_{x}^{2}(1-c)+c & u_{x}u_{y}(1-c)-u_{z}s & u_{x}u_{z}(1-c)+u_{y...
# Define initial points
A = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [0., 0., 0.],
              [0., 1., 0.],
              [0., 0., 0.],
              [0., 0., 1.]])

# Define the rotation angle
theta = np.radians(10)

u = np.array([1., 1., 0.])
ux, uy, uz = u[0], u[1], u[2]
c = np.cos(theta...
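One way to sanity-check the axis-angle matrix above: for $\overrightarrow{u} = (0, 0, 1)$ it must reduce to the 2D rotation $R(\theta)$ in its upper-left block. A sketch (the `axis_rotation` helper is hypothetical, written out from the formula above):

```python
import numpy as np

def axis_rotation(u, theta):
    """Rotation matrix for angle theta around the unit axis u (axis-angle form)."""
    ux, uy, uz = u
    c, s = np.cos(theta), np.sin(theta)
    return np.array([
        [ux*ux*(1-c)+c,    ux*uy*(1-c)-uz*s, ux*uz*(1-c)+uy*s],
        [ux*uy*(1-c)+uz*s, uy*uy*(1-c)+c,    uy*uz*(1-c)-ux*s],
        [ux*uz*(1-c)-uy*s, uy*uz*(1-c)+ux*s, uz*uz*(1-c)+c   ]])

theta = np.radians(30)

# Around the z axis, the 3D matrix reduces to the 2D rotation
Rz = axis_rotation(np.array([0., 0., 1.]), theta)
R2d = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
assert np.allclose(Rz[:2, :2], R2d)
```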
Projections Project 2D points on a line Line equation
@interact(a=(-5., 5., 0.1), b=(-5., 5., 0.1), c=(-5., 5., 0.1))
def plot(a, b, c):
    plt.axis('equal')
    plt.axis([-5, 5, -5, 5])
    plt.xticks(np.arange(-5, 5, 1))
    plt.yticks(np.arange(-5, 5, 1))
    plt.axhline(y=0, color='k')
    plt.axvline(x=0, color='k')
    x = np.array([-10., 10.])
    f = lambda x: a/(-b...
Distance from a point to a line Line defined by an equation In the case of a line in the plane given by the equation $ax + by + c = 0$, where $a$, $b$ and $c$ are real constants with $a$ and $b$ not both zero, the distance from the line to a point $(x_0,y_0)$ is $$\operatorname{distance}(ax+by+c=0, (x_0, y_0)) = \frac{...
# Setup the plot
def plot(a, b, c, p, p2):
    plt.axis('equal')
    plt.axis([-5, 5, -5, 5])
    plt.xticks(np.arange(-5, 5, 1))
    plt.yticks(np.arange(-5, 5, 1))
    plt.axhline(y=0, color='k')
    plt.axvline(x=0, color='k')
    x = np.array([-10., 10.])
    f = lambda x: a/(-b) * x + c/(-b)
    plt.plot(x, f(x...
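The distance formula for a line given in equation form can be sketched directly (the `point_line_distance` helper is hypothetical, written from the expression above):

```python
import math

def point_line_distance(a, b, c, x0, y0):
    """Distance from the point (x0, y0) to the line ax + by + c = 0."""
    return abs(a*x0 + b*y0 + c) / math.hypot(a, b)

# Distance from the origin to the line x + y - 2 = 0 is sqrt(2)
print(point_line_distance(1, 1, -2, 0, 0))   # ~1.4142
```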
Line defined by two points If the line passes through two points $P_1 = (x_1, y_1)$ and $P_2 = (x_2, y_2)$ then the distance of $(x_0, y_0)$ from the line is: $$\operatorname{distance}(P_1, P_2, (x_0, y_0)) = \frac{|(y_2-y_1)x_0-(x_2-x_1)y_0+x_2 y_1-y_2 x_1|}{\sqrt{(y_2-y_1)^2+(x_2-x_1)^2}}.$$ The denominator of this e...
# TODO...
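The two-point formula can likewise be sketched directly from the expression above (the `distance_two_points` helper is hypothetical):

```python
import math

def distance_two_points(p1, p2, p0):
    """Distance from p0 to the line through p1 and p2."""
    (x1, y1), (x2, y2), (x0, y0) = p1, p2, p0
    num = abs((y2 - y1)*x0 - (x2 - x1)*y0 + x2*y1 - y2*x1)
    den = math.hypot(y2 - y1, x2 - x1)   # length of the segment P1-P2
    return num / den

# The point (1, 0) lies sqrt(2)/2 away from the line through (0, 0) and (1, 1)
print(distance_two_points((0, 0), (1, 1), (1, 0)))   # ~0.7071
```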
Line defined by a point and an angle
def angle_point_to_equation(angle_degree, p):
    angle_radian = math.radians(angle_degree)
    a = math.tan(angle_radian)
    b = -1
    c = -math.tan(angle_radian) * p[0] + p[1]
    return a, b, c

angle_degree = 30
p0 = (3, 2)
a, b, c = angle_point_to_equation(angle_degree, p0)
p = (-4., 2.)

# Compute t...
Project 3D points on a plane without perspective TODO... Project 3D points on a plane with perspective The following variables are defined to describe this transformation: $\mathbf{a}_{x,y,z}$: the 3D position of a point A that is to be projected. $\mathbf{c}_{x,y,z}$: the 3D position of a point C representing the cam...
# Define initial points to project
a = np.array([0., 1., 2.])

# Define camera's position
c = np.array([0., 0., 0.])

# Define viewer's position
e = np.array([0., 0., -1.])

# Define the orientation of the camera
theta = np.array([np.radians(0), np.radians(0), np.radians(0)])

theta_...
Multiple points version
@interact(theta_x=(-90., 90., 1.), theta_y=(-90., 90., 1.), theta_z=(-90., 90., 1.))
def projection(theta_x, theta_y, theta_z):
    # Define initial points to project
    A = np.array([[-1., 0., 1.],
                  [ 1., 0., 1.],
                  [-1., 0., 2.],
                  [ 1., 0., 2.],
                  ...
Resources The Spark Programming Guide Accumulators Spark functions such as map can use variables defined in the driver program, but they make local copies of the variable that are not passed back to the driver program. Accumulators are shared variables that allow the aggregation of results from workers back to the driv...
ulysses = sc.textFile('data/Ulysses.txt')
ulysses.take(10)
notebook/18C_Efficiency_In_Spark.ipynb
cliburn/sta-663-2017
mit
Event counting Notice that we have some empty lines. We want to count the number of non-empty lines.
num_lines = sc.accumulator(0)

def tokenize(line):
    table = dict.fromkeys(map(ord, string.punctuation))
    return line.translate(table).lower().strip().split()

def tokenize_count(line):
    global num_lines
    if line:
        num_lines += 1
    return tokenize(line)

counter = ulysses.flatMap(lambda line: ...
Broadcast Variables Sometimes we need to send a large read only variable to all workers. For example, we might want to share a large feature matrix to all workers as a part of a machine learning application. This same variable will be sent separately for each parallel operation unless you use a broadcast variable. Also...
from itertools import count

table = dict(zip(string.ascii_letters, count()))

def weight_first(line, table):
    words = tokenize(line)
    return sum(table.get(word[0], 0) for word in words if word.isalpha())

def weight_last(line, table):
    words = tokenize(line)
    return sum(table.get(word[-1], 0) for word in w...
The dictionary table is sent out twice to worker nodes, once for each call.
ulysses.map(lambda line: weight_first(line, table)).sum()
ulysses.map(lambda line: weight_last(line, table)).sum()
Converting to use broadcast variables is simple and more efficient:
- Use SparkContext.broadcast() to create a broadcast variable
- Where you would use var, use var.value
- The broadcast variable is sent once to each node and can be re-used
table_bc = sc.broadcast(table)

def weight_first_bc(line, table):
    words = tokenize(line)
    return sum(table.value.get(word[0], 0) for word in words if word.isalpha())

def weight_last_bc(line, table):
    words = tokenize(line)
    return sum(table.value.get(word[-1], 0) for word in words if word.isalpha())
table_bc is sent to nodes only once. Although it looks like table_bc is being passed to each function, all that is passed is a path to the table. The worker checks if the path has been cached and uses the cache instead of loading from the path.
ulysses.map(lambda line: weight_first_bc(line, table_bc)).sum()
ulysses.map(lambda line: weight_last_bc(line, table_bc)).sum()
The Spark Shuffle and Partitioning Some events trigger the redistribution of data across partitions, which involves the (expensive) copying of data across executors and machines. This is known as the shuffle. For example, if we do a reduceByKey operation on a key-value pair RDD, Spark needs to collect all pairs with the sa...
def fake_data(n, val):
    users = list(map(''.join, np.random.choice(list(string.ascii_lowercase), (n, 2))))
    comments = [val]*n
    return tuple(zip(users, comments))

data = fake_data(10000, 'a')
list(data)[:10]

rdd = sc.parallelize(data).reduceByKey(lambda x, y: x+y)

new_data = fake_data(1000, 'b')
list(new_da...
Using partitionBy The join operation will hash all the keys of both rdd and rdd_new, sending keys with the same hashes to the same node for the actual join operation. There is a lot of unnecessary data transfer. Since rdd is a much larger data set than rdd_new, we can instead fix the partitioning of rdd and just trans...
rdd2 = sc.parallelize(data).reduceByKey(lambda x, y: x+y)
rdd2 = rdd2.partitionBy(10).cache()
rdd2_updated = rdd2.join(rdd_new)
rdd2_updated.take(10)
Piping to External Programs Suppose it is more convenient or efficient to write a function in some other language to process data. We can pipe data from Spark to the external program (script) that performs the calculation via standard input and output. The example below shows using a C++ program to calculate the sum of...
%%file foo.cpp #include <iostream> #include <sstream> #include <string> #include <numeric> #include <vector> using namespace std; double sum_squares(double x, double y) { return x + y*y; }; int main() { string s; while (cin) { getline(cin, s); stringstream stream(s); vector<doubl...
notebook/18C_Efficiency_In_Spark.ipynb
cliburn/sta-663-2017
mit
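Judging from the visible fragment, the C++ program folds each line's numbers with sum_squares via std::accumulate; a hedged Python equivalent of that reduction:

```python
from functools import reduce

def sum_squares(acc, y):
    """Mirror of the C++ reducer: accumulator plus the square of the next value."""
    return acc + y * y

def line_sum_of_squares(line):
    # Each input line holds whitespace-separated numbers, as in the C++ parsing loop.
    values = [float(tok) for tok in line.split()]
    return reduce(sum_squares, values, 0.0)

print(line_sum_of_squares("1 2 3"))  # 14.0
```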
Feed data via re-direction
%%bash ./foo < numbers.txt
notebook/18C_Efficiency_In_Spark.ipynb
cliburn/sta-663-2017
mit
Feed data via piping
%%bash cat numbers.txt | ./foo !head numbers.txt rdd = sc.textFile('numbers.txt') from pyspark import SparkFiles def prepare(line): """Each line contains numbers separated by a space.""" return ' '.join(line.split()) + '\n' # pipe data to external function func = './foo' sc.addFile(func) ss = rdd.map(lamb...
notebook/18C_Efficiency_In_Spark.ipynb
cliburn/sta-663-2017
mit
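The same stdin/stdout contract that sc.addFile and rdd.pipe rely on can be sketched with the standard library alone; here a small Python child process stands in for ./foo (a portable simulation, not the Spark pipe itself):

```python
import subprocess
import sys

# A stand-in external "script" that sums the squares on each input line
# (the role foo plays above); using the Python interpreter keeps it portable.
child = (
    "import sys\n"
    "for line in sys.stdin:\n"
    "    vals = [float(t) for t in line.split()]\n"
    "    print(sum(v * v for v in vals))\n"
)

proc = subprocess.run(
    [sys.executable, "-c", child],
    input="1 2 3\n4 5\n",        # data fed over stdin, like `cat numbers.txt | ./foo`
    capture_output=True, text=True,
)
print(proc.stdout.split())  # ['14.0', '41.0']
```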
Version
%load_ext version_information %version_information pyspark, numpy
notebook/18C_Efficiency_In_Spark.ipynb
cliburn/sta-663-2017
mit
The iris data set is labeled into three classes; there are four measurements that were taken for each iris. While we will not be using the labeled data for clustering, it does provide us a convenient comparison and visualization of the data as it was provided. In this example I use Seaborn for the visualization of th...
import imp try: imp.find_module('pandas') can_pandas = True import pandas as pd except: can_pandas = False try: imp.find_module('seaborn') can_seaborn = True import seaborn as sns except: can_seaborn = False %matplotlib inline if can_seaborn: sns.set() if can_seaborn: sns.set_conte...
h2o-py/demos/kmeans_aic_bic_diagnostics.ipynb
pchmieli/h2o-3
apache-2.0
Finally, I will plot the predicted cluster membership using the same layout as on the original data earlier in the notebook.
if can_seaborn: sns.pairplot(iris.as_data_frame(), vars=["sepal_len", "sepal_wid", "petal_len", "petal_wid"], hue="Predicted");
h2o-py/demos/kmeans_aic_bic_diagnostics.ipynb
pchmieli/h2o-3
apache-2.0
Data
from quantopian.pipeline.data.builtin import USEquityPricing
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/06-Pipelines.ipynb
arcyfelix/Courses
apache-2.0
Factors Remember, Factors take in an asset and a timestamp and return some numerical value.
from quantopian.pipeline.factors import BollingerBands,SimpleMovingAverage,EWMA SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 30) def make_pipeline(): mean_close_30 = SimpleMovingAverage(inputs = [USEquityPricing.close], wind...
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/06-Pipelines.ipynb
arcyfelix/Courses
apache-2.0
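Conceptually, a SimpleMovingAverage factor with window_length = 30 just averages the trailing 30 closes for each asset; a minimal plain-Python sketch (not the Quantopian API):

```python
def simple_moving_average(closes, window_length):
    """Average of the trailing `window_length` values, per the factor's definition."""
    if len(closes) < window_length:
        raise ValueError("not enough history for the requested window")
    return sum(closes[-window_length:]) / window_length

prices = [10.0, 11.0, 12.0, 13.0]  # hypothetical closing prices
print(simple_moving_average(prices, 3))  # 12.0
```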
Combining Factors
def make_pipeline(): mean_close_10 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 10) mean_close_30 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 30) latest_close = USEquit...
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/06-Pipelines.ipynb
arcyfelix/Courses
apache-2.0
Filters and Screens Filters take in an asset and a timestamp and return a boolean value.
last_close_price = USEquityPricing.close.latest close_price_filter = last_close_price > 20 close_price_filter def make_pipeline(): mean_close_10 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 10) mean_close_30 = SimpleMovingAverage(inputs =...
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/06-Pipelines.ipynb
arcyfelix/Courses
apache-2.0
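A filter like last_close_price > 20 is simply an element-wise boolean test, and a screen keeps only the assets where it is True; sketched over a plain dict of hypothetical closes:

```python
latest_close = {"AAPL": 150.0, "PENNY": 3.5, "XYZ": 21.0}  # hypothetical values

# Element-wise boolean filter, analogous to `last_close_price > 20`
close_price_filter = {asset: price > 20 for asset, price in latest_close.items()}

# A screen keeps only the assets for which the filter is True
screened = [asset for asset, passed in close_price_filter.items() if passed]
print(screened)  # ['AAPL', 'XYZ']
```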
Screens
def make_pipeline(): mean_close_10 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 10) mean_close_30 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 30) latest_close = USEquit...
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/06-Pipelines.ipynb
arcyfelix/Courses
apache-2.0
Reverse a screen
def make_pipeline(): mean_close_10 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 10) mean_close_30 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 30) latest_close = USEquit...
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/06-Pipelines.ipynb
arcyfelix/Courses
apache-2.0
Combine Filters
def make_pipeline(): mean_close_10 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 10) mean_close_30 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 30) latest_close = USEquit...
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/06-Pipelines.ipynb
arcyfelix/Courses
apache-2.0
Masking Sometimes we want to ignore certain assets when computing pipeline expressions. There are two common cases where ignoring assets is useful: * We want to compute an expression that's computationally expensive, and we know we only care about results for certain assets. * We want to compute an expression that per...
def make_pipeline(): # Create Filters for Masks First latest_close = USEquityPricing.close.latest small_price = latest_close < 5 # Pass in the mask mean_close_10 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 10, ...
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/06-Pipelines.ipynb
arcyfelix/Courses
apache-2.0
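The benefit of a mask is that the expensive expression is never evaluated for assets a cheap filter has already excluded; a plain-Python sketch of that idea (hypothetical data, not the Quantopian API):

```python
latest_close = {"A": 4.0, "B": 50.0, "C": 2.5}  # hypothetical closes

calls = []
def expensive_metric(asset):
    calls.append(asset)          # track how often the costly step actually runs
    return latest_close[asset] * 2

# Cheap filter used as the mask, analogous to `small_price = latest_close < 5`
mask = {asset: price < 5 for asset, price in latest_close.items()}

# Only compute the expensive expression where the mask is True
result = {a: expensive_metric(a) for a, passed in mask.items() if passed}
print(result, calls)  # {'A': 8.0, 'C': 5.0} ['A', 'C']
```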
Classifiers A classifier is a function from an asset and a moment in time to a categorical output such as a string or integer label.
from quantopian.pipeline.data import morningstar from quantopian.pipeline.classifiers.morningstar import Sector morningstar_sector = Sector() exchange = morningstar.share_class_reference.exchange_id.latest exchange
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/06-Pipelines.ipynb
arcyfelix/Courses
apache-2.0
Classifier Methods eq (equals) isnull startswith
nyse_filter = exchange.eq('NYS') def make_pipeline(): # Create Filters for Masks First latest_close = USEquityPricing.close.latest small_price = latest_close < 5 # Classifier nyse_filter = exchange.eq('NYS') # Pass in the mask mean_close_10 = SimpleMovingAverage(inputs = [USE...
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/06-Pipelines.ipynb
arcyfelix/Courses
apache-2.0
Pipelines in Quantopian IDE
from quantopian.pipeline import Pipeline from quantopian.algorithm import attach_pipeline, pipeline_output def initialize(context): my_pipe = make_pipeline() attach_pipeline(my_pipe, 'my_pipeline') def make_pipeline(): return Pipeline() def before_trading_start(context, data): # Store our pipeline ou...
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/06-Pipelines.ipynb
arcyfelix/Courses
apache-2.0
function declarations
def get_geodataframe(queryset, modification=None, crs={'+init':'epsg:31254'}): query = queryset.query.sql_with_params() if modification: query = (modification, query[1]) return geopandas.read_postgis(query[0], connection, geom_col='geometry', ...
Compute location distance error.ipynb
jacobdein/alpine-soundscapes
mit
load locations from database
generated = get_geodataframe(SampleLocation.objects.all()) actual = get_geodataframe(Site.objects.filter(id__lte=30))
Compute location distance error.ipynb
jacobdein/alpine-soundscapes
mit
loop through locations and compute distance
distance_array = numpy.zeros(30) distances = pandas.DataFrame({'id': generated.index, 'name': actual.sort_index().name, 'distance': distance_array}).set_index('id') for i in range(1, 31): x1 = generated[generated.index == i].geometry.as_matrix()[0].coords.xy[0][0] x2 = actual[actual.index == i].geometry.as_matr...
Compute location distance error.ipynb
jacobdein/alpine-soundscapes
mit
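Each distance in the loop above is an ordinary Euclidean distance between a generated point and its actual counterpart in a projected CRS; in plain Python:

```python
import math

def euclidean_distance(p1, p2):
    """Straight-line distance between two (x, y) points in a projected CRS."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

generated = (100.0, 200.0)   # hypothetical projected coordinates
actual = (103.0, 204.0)
print(euclidean_distance(generated, actual))  # 5.0
```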
compute the distance mean and standard deviation
distances.distance.mean().round(0) distances.distance.std().round(0)
Compute location distance error.ipynb
jacobdein/alpine-soundscapes
mit
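For reference, pandas' .std() computes the sample standard deviation (ddof = 1), which matches statistics.stdev from the standard library; a small sketch with hypothetical distance errors:

```python
import statistics

distances = [12.0, 15.0, 9.0, 18.0]  # hypothetical distance errors in metres

mean_error = statistics.mean(distances)
std_error = statistics.stdev(distances)   # sample std (ddof=1), as in pandas .std()

print(round(mean_error, 0), round(std_error, 0))  # 14.0 4.0
```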
Examine a single patient
patientunitstayid = 242040 query = query_schema + """ select * from treatment where patientunitstayid = {} order by treatmentoffset """.format(patientunitstayid) df = pd.read_sql_query(query, con) df.head() plt.figure() treatments = df['treatmentstring'].unique() for i, t in enumerate(treatments): idx = df['tr...
notebooks/treatment.ipynb
mit-eicu/eicu-code
mit
Above we can see that most of these treatments were documented as being given twice (or perhaps continuously given over the duration - the interpretation is not clear). Only milrinone and cardioversion were given once around 620 minutes after ICU admission. Hospitals with data available
query = query_schema + """ with t as ( select distinct patientunitstayid from treatment ) select pt.hospitalid , count(distinct pt.patientunitstayid) as number_of_patients , count(distinct t.patientunitstayid) as number_of_patients_with_tbl from patient pt left join t on pt.patientunitstayid = t.patientunitsta...
notebooks/treatment.ipynb
mit-eicu/eicu-code
mit
Assuming that the Neurokernel source has been cloned to ~/neurokernel, we first consider the two csv files that specify neurons and synapses in the lamina model.
%cd -q ~/neurokernel/examples/vision/data
notebooks/vision.ipynb
neurokernel/vision
bsd-3-clause
The neuron model types and their parameter values are specified in the file neuron_types_lamina.csv. The entire list of neurons is shown below. Note that for convenience, the LPU contains neurons from both the retina and lamina. Descriptions of each of the columns follow: name - Name of neuron morphological type. model...
import pandas as pd pd.set_option('display.max_columns', 20) mpl.rcParams['figure.figsize'] = (17, 7) neuron_data = pd.read_csv("./neuron_types_lamina.csv") neuron_data = neuron_data.dropna(axis=1) neuron_data.head(n=100)
notebooks/vision.ipynb
neurokernel/vision
bsd-3-clause
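dropna(axis=1) discards every column containing a missing value; the effect can be sketched without pandas over rows represented as dicts (hypothetical values, not the actual CSV contents):

```python
rows = [
    {"name": "R1", "model": "port_in", "extern": None},
    {"name": "L1", "model": "MorrisLecar", "extern": 1},
]

# Columns where every row has a value survive; any None drops the whole column,
# mirroring DataFrame.dropna(axis=1).
kept = [c for c in rows[0] if all(r[c] is not None for r in rows)]
cleaned = [{c: r[c] for c in kept} for r in rows]
print(kept)  # ['name', 'model']
```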